CN114255275A - Map construction method and computing device - Google Patents

Map construction method and computing device

Info

Publication number
CN114255275A
CN114255275A (application CN202010960450.0A)
Authority
CN
China
Prior art keywords
map
grid
laser
laser point
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010960450.0A
Other languages
Chinese (zh)
Inventor
王舜垚
胡伟龙
陈超越
潘杨杰
李旭鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010960450.0A priority Critical patent/CN114255275A/en
Priority to PCT/CN2021/116601 priority patent/WO2022052881A1/en
Publication of CN114255275A publication Critical patent/CN114255275A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/20: Drawing from basic elements, e.g. lines or circles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/20: Drawing from basic elements, e.g. lines or circles
    • G06T11/206: Drawing of charts or graphs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Abstract

The embodiments of this application disclose a map construction method and a computing device. The constructed map can be applied to the field of laser processing within autonomous driving, in particular to intelligently travelling agents (such as intelligent vehicles, intelligent connected vehicles, and autonomous vehicles). The method comprises: extracting target features from each acquired frame of laser point cloud; constructing a function fitting each target feature and obtaining the constraint condition corresponding to each function, the functions and their constraints forming a main map; constructing a sub-map from each frame of laser point cloud, the sub-maps being occupancy grid sub-maps that are stitched into an occupancy grid map (OGM); and constructing a composite laser map of the main map and the OGM by establishing an index to the OGM on the main map. The composite laser map has a low storage requirement while retaining more feature information for matching and positioning.

Description

Map construction method and computing device
Technical Field
The present application relates to the field of laser processing, and in particular, to a method and a computing device for constructing a map.
Background
Positioning is one of the key technologies of autonomous driving: accurate positioning of an autonomous vehicle is achieved by fusing multiple positioning means and multiple kinds of sensor data, so that the vehicle obtains its own precise position. Accurate positioning is an indispensable function for an autonomous vehicle, and laser point clouds obtained by laser sensors such as lidar and three-dimensional laser scanners are widely used for accurate positioning because of the high measurement accuracy of these sensors.
The premise of realizing accurate positioning through the laser sensor is that a map constructed based on laser point cloud (which can be referred to as a laser map for short) is obtained, and then positioning is realized according to the laser point cloud obtained by the laser sensor in real time and the laser map in a matching manner. At present, in the implemented scheme, there are two main ways of constructing a laser map: one is that the original laser point cloud is directly constructed into a laser map, and the laser point cloud obtained by a laser sensor in real time is directly matched with the laser map during subsequent positioning; the other method is to compress the three-dimensional laser point cloud into two-dimensional information, and construct a two-dimensional Occupancy Grid Map (OGM) according to the compressed two-dimensional information, and the two-dimensional OGM constitutes a laser map.
However, both methods have drawbacks: the laser map obtained by the first method requires too much storage and is difficult to reuse in engineering, while the second method compresses the three-dimensional laser point cloud into two-dimensional information, losing many features and degrading subsequent positioning accuracy.
Disclosure of Invention
The embodiments of the application provide a map construction method and a computing device, which construct a composite laser map consisting of a main map and an occupancy grid map by establishing an index to the occupancy grid map on the main map, reducing the storage required by the composite laser map while retaining more feature information for subsequent matching and positioning.
Based on this, the embodiment of the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a method for constructing a map. The map may be applied to the field of laser processing within autonomous driving, for example to intelligently travelling agents (e.g., intelligent vehicles, intelligent connected vehicles). The method includes: the computing device first obtains the laser point cloud data (also referred to simply as laser point clouds) required for map building and performs feature extraction on them to obtain target features, where a target feature is a set of laser points extracted from the laser point cloud data that meet a preset condition, each laser point comprising its coordinates and its reflection intensity. For example, a laser sensor in a standard pose may acquire one frame of laser point cloud at each of several different geographic positions, obtaining n frames in total, after which the computing device performs feature extraction on the n frames; alternatively, the laser sensor may send each frame to the computing device for feature extraction as soon as it is acquired, until all n frames are processed. How the computing device schedules the processing of the laser point clouds is not specifically limited here.
After extracting the target features of the laser point clouds, a function fitting each target feature can be constructed and the constraint condition of each function obtained. For example, suppose 3 target features are extracted from the first frame of laser point cloud, 2 of which are line features and 1 a surface feature; functions fitting these 3 target features (3 functions in total) can be constructed and a constraint condition obtained for each (3 constraints in total). The same processing is performed for every frame of laser point cloud, yielding the functions and constraints corresponding to the target features of all n frames; the obtained functions and constraints constitute the main map. For example, with 100 frames of laser point clouds and 800 target features extracted from them, 800 functions and the 800 corresponding constraints are obtained, and together they constitute the main map. It should be noted that the 800 extracted target features are features that have already been filtered and merged: if 10 and 8 target features are extracted from 2 different frames respectively, some of them may characterize the same object (e.g., the same street lamp or road block), and such duplicate features must first be merged into one. If, say, 2 of the 10 target features from the earlier frame describe the same objects as 2 target features from the later frame, only one copy of those 2 features is retained; all subsequently extracted target features are handled in the same way, which is not repeated here.
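As an illustrative sketch (not part of the patent disclosure), the fitting of one line feature and the recording of its constraint could look as follows. For simplicity the line is fitted in 2D as y = a·x + b by least squares, and the constraint is taken as the valid range of the independent variable x; the function name and the reduction to 2D are assumptions for illustration only.

```python
def fit_line_feature(points):
    """Least-squares fit y = a*x + b to the (x, y) coordinates of roughly
    collinear laser points; the x-range is kept as the constraint of the
    fitted function (one of the constraint forms described in the text)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = len(points)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                      # slope
    b = my - a * mx                    # intercept
    constraint = (min(xs), max(xs))    # valid range of the independent variable
    return (a, b), constraint

# e.g. three laser points lying along the edge of a street lamp
coeffs, rng = fit_line_feature([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```

The main map would then store the coefficients (a, b) together with the range, instead of the raw laser points, which is what gives the scheme its low storage footprint.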
The computing device will also construct a sub-map for each frame of acquired laser point cloud; each sub-map is an occupancy grid sub-map, and the stitched sub-maps form the occupancy grid map (OGM). After every frame of laser point cloud has been processed, a main map and an OGM are obtained, and the two must then be associated to form the composite laser map.
In the above embodiment of the application, the computing device performs two operations on the acquired laser point clouds: first, it extracts target features, constructs a function fitting each target feature, and obtains the constraint condition corresponding to each function, the functions and their constraints forming the main map; second, it constructs the occupancy grid sub-maps, and thus the OGM, from the laser point clouds. An index to the OGM is then established on the main map, constructing a composite laser map of the main map and the OGM. This reduces the storage required by the composite laser map while retaining more feature information for subsequent matching and positioning.
In one possible design of the first aspect, the computing device may construct the OGM from the laser point clouds as follows. The computing device constructs a corresponding occupancy grid sub-map for each frame of acquired laser point cloud, or for several consecutive frames together; for example, with 100 frames of laser point clouds, 100 occupancy grid sub-maps can be constructed (the construction may be one-to-one or many-to-one, this being merely an illustration). Each occupancy grid sub-map is constructed as follows: after the length and width of the sub-map (i.e., its size) and the grid resolution are set, the computing device projects each frame of laser point cloud, expressed in the laser coordinate system, onto the corresponding sub-map. If a grid contains no laser point, it is considered empty; if it contains at least one laser point, it is considered to hold an obstacle. The probability that a grid is empty is denoted p(s=1) and the probability that it holds an obstacle p(s=0), the two summing to 1. The computing device then applies a series of mathematical transformations to each frame of laser point cloud projected onto the sub-map, setting each grid to the occupied or the idle state according to its occupancy probability. The centre position of each occupancy grid sub-map is its origin O.
Similarly, the above processing is performed on every frame of laser point cloud, so that each of the n frames corresponds to one occupancy grid sub-map (n in total); the n sub-maps are then stitched together to obtain the complete OGM.
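A minimal sketch of the projection step described above, under assumptions not stated in the patent (a square sub-map centred on its origin O, points given as (x, y, ...) in the laser frame, and a cell counted as occupied from a single point):

```python
def build_occupancy_submap(points, size_m, resolution):
    """Project one frame of laser points onto a grid centred on the
    submap origin; a cell containing >= 1 point is marked occupied."""
    half = size_m / 2.0
    occupied = set()
    for x, y, *_ in points:
        if -half <= x < half and -half <= y < half:
            col = int((x + half) / resolution)   # cell column index
            row = int((y + half) / resolution)   # cell row index
            occupied.add((row, col))
    return occupied

# 4 m x 4 m submap with 0.5 m cells; two points in the laser frame
cells = build_occupancy_submap([(0.05, 0.05, 0.0), (-1.2, 0.3, 1.5)],
                               size_m=4.0, resolution=0.5)
```

The probabilistic occupied/idle decision the text mentions would replace the hard `>= 1 point` rule in a full implementation; stitching then amounts to placing each sub-map at its origin in a common frame.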
In the above embodiments of the present application, how to construct the corresponding occupancy grid sub-maps from the laser point clouds and stitch them into a complete OGM is described in detail, demonstrating that the scheme is realizable.
In one possible design of the first aspect, the computing device may index the OGM on the main map as follows: first, the computing device converts the centre position (i.e., the origin) of the occupancy grid sub-map corresponding to each frame of laser point cloud into a coordinate value in the Universal Transverse Mercator (UTM) coordinate system, and then adds each origin coordinate value to the main map as the index label of the corresponding occupancy grid sub-map.
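One way the index could be realised is sketched below. This is an illustrative assumption, not the patent's implementation: the sub-map origins are taken as already converted to UTM metres upstream (the geodetic conversion itself is omitted), the class and method names are hypothetical, and a simple radius query stands in for whatever lookup the positioning stage would use.

```python
class CompositeMap:
    """Main map (fitted functions + constraints) carrying an index of
    occupancy grid sub-maps keyed by their origin in UTM metres."""

    def __init__(self):
        self.features = []       # (function coefficients, constraint) pairs
        self.submap_index = {}   # (easting, northing) -> submap identifier

    def add_submap(self, utm_easting, utm_northing, submap_id):
        self.submap_index[(utm_easting, utm_northing)] = submap_id

    def lookup(self, easting, northing, radius):
        """Return ids of sub-maps whose origin lies within `radius` metres."""
        return [sid for (e, n), sid in self.submap_index.items()
                if (e - easting) ** 2 + (n - northing) ** 2 <= radius ** 2]

m = CompositeMap()
m.add_submap(414668.0, 6812844.0, "ogm_000")
m.add_submap(414968.0, 6812844.0, "ogm_001")
near = m.lookup(414700.0, 6812844.0, 100.0)   # only ogm_000 is within 100 m
```

During positioning, a vehicle with a rough UTM position could use such a lookup to load only the nearby sub-maps instead of the whole OGM.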
In the above embodiments of the present application, a specific implementation of how to establish the relationship between the main map and the OGM is described: the index label of each occupancy grid sub-map is added to the main map. This is easy to implement and simple to operate.
In a possible design of the first aspect, the computing device may obtain the constraint condition of a function in either of two ways: by obtaining the value range of the function's independent variable; or by obtaining the value of a target independent variable of the function, where the target independent variable comprises the coordinates of a target laser point, the target laser point belonging to the target feature.
In the above embodiments of the present application, several specific forms of the function constraints are given, providing flexibility and choice.
In one possible design of the first aspect, the OGM may include the height of the obstacle in a first grid and the average reflection intensity of the laser points falling into the first grid, where the first grid is any occupied grid in the occupancy grid map. That is, each occupied grid (the first grid) in the OGM obtained by the embodiment of the present application stores the average height of the laser points falling into it (i.e., the average height of the obstacle in the grid) and their average reflection intensity.
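The per-grid statistics described above reduce to two averages over the points that landed in the cell. A minimal sketch, assuming points carry [x, y, z, intensity] as defined elsewhere in this document:

```python
def grid_statistics(points_in_cell):
    """Average height (z) and average reflection intensity of the laser
    points falling into one occupied grid cell of the OGM."""
    n = len(points_in_cell)
    avg_height = sum(p[2] for p in points_in_cell) / n
    avg_intensity = sum(p[3] for p in points_in_cell) / n
    return avg_height, avg_intensity

# two returns from the same cell: heights 2 m / 4 m, intensities 100 / 140
h, i = grid_statistics([(0.0, 0.0, 2.0, 100.0), (0.0, 0.0, 4.0, 140.0)])
```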
In the above embodiments of the present application, the provided OGM differs from existing OGMs in that it stores not only the average height of the obstacle occupying a grid but also the average reflection intensity of the laser points falling into that grid, whereas existing OGMs store only the average obstacle height, with the reflection intensity stored elsewhere. The advantage of this storage layout is that the data are easy to look up and more convenient in practical application.
In one possible design of the first aspect, the computing device may store the height of a grid obstacle in the OGM, the average reflection intensity of the laser points falling into an occupied grid, or both, as integer data, where integer data is numerical data without a fractional part; it represents only integers and is stored in binary form.
In the above embodiments of the present application, it is stated that the data stored in the occupied grids of the OGM may be integer data, whereas existing schemes store the data as floating-point values. Integer data occupies less storage than floating-point data (theoretically, the storage occupied by the integer data is 1/4 that of the floating-point data), so the embodiment of the present application saves storage capacity.
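The quoted 1/4 ratio is consistent with, for example, quantising metre-valued heights to centimetres in a 2-byte unsigned integer instead of an 8-byte double. The sketch below illustrates this with Python's standard `struct` module; the centimetre quantisation and the 16-bit width are assumptions for illustration, not values fixed by the patent:

```python
import struct

def to_int_cm(height_m):
    """Quantise a height in metres to whole centimetres, clamped to the
    range of a 16-bit unsigned integer."""
    return max(0, min(65535, round(height_m * 100)))

def from_int_cm(v):
    """Recover the height in metres from the stored centimetre integer."""
    return v / 100.0

stored = to_int_cm(3.27)                       # quantised height
roundtrip = from_int_cm(stored)                # recovered metres
int_bytes = len(struct.pack('<H', stored))     # 2 bytes per value
float_bytes = len(struct.pack('<d', 3.27))     # 8 bytes per value
```

With these widths each value needs 2 bytes instead of 8, matching the 1/4 figure, at the cost of a fixed 1 cm quantisation step.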
In one possible design of the first aspect, each occupied grid in the OGM obtained in the embodiments of the present application can be classified as partially occupied or fully occupied. A fully occupied grid holds an obstacle extending from the ground up to a certain height (e.g., an ordinary building such as an office or residential building), while a partially occupied grid holds a structure occupying only part of the vertical space, such as a bridge opening, tunnel, overpass, or elevated crosswalk, which may also be called a suspended obstacle. For the two occupancy types: when an occupied grid in the OGM is fully occupied, i.e., its obstacle is an ordinary obstacle, the grid stores the height of the upper edge of the obstacle above the ground; when an occupied grid is partially occupied, i.e., its obstacle is a suspended obstacle, the grid stores a first height of the lower edge of the obstacle above the ground and a second height of the upper edge above the ground.
In the above embodiments of the present application, the improved OGM differs from existing OGMs in that obstacles occupying grids are classified into ordinary buildings and suspended obstacles, different heights are stored for the different obstacle types, and the average reflection intensity of the laser points in the grid is also stored, whereas an existing OGM treats any occupied grid as completely occupied. Storing obstacle heights in this way retains more detailed characteristics of the obstacles and improves the accuracy of subsequent positioning.
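The full/partial distinction above can be decided from the heights of the points in a cell. The following is a hypothetical sketch: the patent does not specify the classification rule, so a simple clearance threshold under the lowest point (here 2 m, an assumed value) stands in for it:

```python
def classify_grid(point_heights, gap_threshold=2.0):
    """Classify one occupied grid. If the lowest laser point is well above
    the ground (free clearance underneath), treat the obstacle as suspended
    (partially occupied) and store both its lower- and upper-edge heights;
    otherwise store only the upper-edge height (fully occupied)."""
    lo, hi = min(point_heights), max(point_heights)
    if lo > gap_threshold:
        return ("partial", lo, hi)    # first height, second height
    return ("full", hi)               # upper edge only

wall = classify_grid([0.1, 1.0, 3.5])     # points from ground up: building
bridge = classify_grid([4.8, 5.0, 5.6])   # points only high up: overpass deck
```

During positioning, a vehicle passing under a grid labelled "partial" can be matched against the deck above it rather than being treated as blocked.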
In a possible design of the first aspect, a target feature in this embodiment is essentially a set of specific laser points extracted from one frame of laser point cloud. The extracted target features are generally line features and surface features, where a line feature indicates that the laser points extracted from the laser point cloud data lie on the same straight line, and a surface feature indicates that they lie on the same plane.
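The collinearity condition for a line feature can be checked directly on the 3D points. A sketch under assumptions of our own (an exact cross-product test with a small tolerance; real extractors typically use covariance eigenvalues instead):

```python
def is_line_feature(points, tol=1e-6):
    """True if every point is (near-)collinear with the first two: the
    cross product of the spanning vectors stays close to the zero vector."""
    (x0, y0, z0), (x1, y1, z1) = points[0], points[1]
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0          # direction of the line
    for (x, y, z) in points[2:]:
        vx, vy, vz = x - x0, y - y0, z - z0         # vector to candidate point
        cx = uy * vz - uz * vy
        cy = uz * vx - ux * vz
        cz = ux * vy - uy * vx
        if cx * cx + cy * cy + cz * cz > tol:       # squared cross-product norm
            return False
    return True

on_line = is_line_feature([(0, 0, 0), (1, 1, 1), (2, 2, 2)])
off_line = is_line_feature([(0, 0, 0), (1, 1, 1), (2, 2, 5)])
```

A surface feature would use the analogous coplanarity test (scalar triple product near zero).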
In the above embodiments of the present application, the conditions met by the extracted target features are set forth; the extracted target features capture key characteristics beneficial to subsequent positioning.
A second aspect of embodiments of the present application provides a computing device having functionality to implement the method of the first aspect or any one of the possible implementation manners of the first aspect. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
A third aspect of the embodiments of the present application provides a computing device, which may include a memory, a processor, and a bus system, where the memory is configured to store a program, and the processor is configured to call the program stored in the memory to execute the method of the first aspect or any one of the possible implementation manners of the first aspect of the embodiments of the present application.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the first aspect or any one of the possible implementations of the first aspect.
A fifth aspect of embodiments of the present application provides a computer program, which, when run on a computer, causes the computer to perform the method of the first aspect or any one of the possible implementation manners of the first aspect.
A sixth aspect of embodiments of the present application provides a chip, where the chip includes at least one processor and at least one interface circuit, the interface circuit is coupled to the processor, the at least one interface circuit is configured to perform a transceiving function and send an instruction to the at least one processor, and the at least one processor is configured to execute a computer program or an instruction, where the at least one processor has a function of implementing the method according to the first aspect or any one of the possible implementations of the first aspect, and the function may be implemented by hardware, software, or a combination of hardware and software, and the hardware or software includes one or more modules corresponding to the above function. In addition, the interface circuit is used for communicating with other modules besides the chip, for example, the interface circuit can send the composite laser map obtained by the on-chip processor to various intelligent running (such as unmanned driving, auxiliary driving and the like) intelligent bodies for motion planning (such as driving behavior decision, global path planning and the like).
Drawings
Fig. 1 is a schematic diagram of OGMs of different resolutions provided by embodiments of the present application;
FIG. 2 is a schematic diagram of an OGM constructed in a certain area according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the general architecture of an autonomous vehicle provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a method for constructing a map according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of extracting target features and constructing a function based on a frame of laser point cloud according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of 9 occupancy grid maps constructed by 9 frames of laser point clouds according to the embodiment of the present application and spliced into an OGM;
FIG. 8 is a schematic diagram of the type of OGM storage data obtained by the construction provided by the embodiment of the present application;
FIG. 9 is another schematic diagram of the type of OGM storage data obtained by the construction provided by the embodiment of the present application;
fig. 10 is a schematic diagram of a process for constructing a composite laser map according to an embodiment of the present disclosure;
fig. 11 is a schematic diagram of an actual application process of a constructed composite laser map according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a computing device provided by an embodiment of the present application;
fig. 13 is another schematic diagram of a computing device provided in an embodiment of the present application.
Detailed Description
The embodiments of the application provide a map construction method and a computing device, which construct a composite laser map consisting of a main map and an occupancy grid map by establishing an index to the occupancy grid map on the main map, reducing the storage required by the composite laser map while retaining more feature information for subsequent matching and positioning.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the various embodiments of the application and how objects of the same nature can be distinguished. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the present application involve a good deal of background knowledge about laser point clouds, maps, and the like. To better understand the scheme of the embodiments, the related terms and concepts that may be involved are first introduced below. It should be understood that the interpretation of these terms and concepts may vary with the specific context of an embodiment and is not intended to limit the scope of the present application.
(1) Laser point cloud
A laser point cloud may also be referred to as laser point cloud data. The laser returns received by a laser sensor such as a lidar or three-dimensional laser scanner are presented in the form of a point cloud: the set of point data describing the outer surface of a measured object obtained by a measuring instrument is called a point cloud, and if the measuring instrument is a laser sensor, the result is called a laser point cloud (a 32-line laser typically yields tens of thousands of laser points at a single moment). The laser information contained in a laser point may be written [x, y, z, intensity], representing the three-dimensional coordinates, in the laser coordinate system, of the target position onto which the laser point is projected, together with the reflection intensity of the laser point.
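The [x, y, z, intensity] record above maps naturally onto a small data type. A sketch (the class name is our own; the patent does not prescribe a representation):

```python
from dataclasses import dataclass

@dataclass
class LaserPoint:
    """One laser return: the 3D coordinates of the hit in the laser
    coordinate system plus the reflection intensity, i.e. [x, y, z, intensity]."""
    x: float
    y: float
    z: float
    intensity: float

p = LaserPoint(1.5, -0.2, 0.8, 112.0)
```

A frame of laser point cloud is then simply a list of such records, and a 32-line scan would hold tens of thousands of them.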
(2) Universal Transverse Mercator (UTM) coordinate system
UTM coordinates are planar rectangular coordinates. This grid system, and the projection on which it is based, are widely used for topographic maps, as a reference grid for satellite imagery and natural-resource databases, and for other applications requiring precise positioning; for example, UTM coordinates are commonly used for the precise positioning of autonomous vehicles.
The UTM projection is a transverse secant cylindrical projection: a cylinder cuts the reference ellipsoid along two secant circles, with the cylinder's axis lying in the equatorial plane and passing through the centre of the ellipsoid, and points on the ellipsoid are projected onto the cylinder. The two secant circles keep their length unchanged on the UTM projection; they are the 2 standard meridian circles. Between them lies the central meridian, whose projected length is 0.9996 times its length before projection (the scale factor k is the projected length divided by the actual length before projection). The longitude difference between a standard secant line and the central meridian is 1.6206°, i.e., 1°37′14.244″. The UTM longitudinal zones are numbered 1 to 60, each spanning 6° of longitude from west to east, and the zones cover all regions of the earth between latitudes 80°S and 84°N. There are 20 UTM latitude bands in total, each spanning 8°, identified by the letters C through X (the letters I and O are not used). Bands A, B, Y, and Z lie outside this system; they cover the Antarctic and Arctic regions.
The representation format of a UTM coordinate is: longitude zone, latitude band, easting, northing, where the easting is the projected distance from the central meridian of the longitude zone and the northing is the projected distance from the equator, both in metres. For example, the longitude/latitude coordinates (61.44, 25.40) expressed in UTM are 35V 414668 6812844, while (-47.04, -73.48) are 18G 615471 4789269.
(3) Occupancy Grid Map (OGM)
The OGM is a map representation commonly used for robots. Robots often carry laser sensors, and sensor data is noisy: for example, when a laser sensor measures how far away an obstacle in front of the robot is, it cannot obtain an exact value. If the true value is 4 metres, the obstacle may be measured at 3.9 metres at the current moment but at 4.1 metres at the next, and the positions at both distances cannot each be taken as the obstacle. The OGM is used to solve this problem. As shown in fig. 1, which depicts OGMs of two different resolutions, the black dots are laser points, and all laser points mapped into the OGM constitute a laser point cloud. In practical applications, the size of an OGM is generally 300 × 300, i.e., 300 × 300 small cells (grids). The size of each grid (its side length, i.e., how many metres each grid corresponds to in the vehicle coordinate system) determines the resolution of the OGM. The higher the resolution, the smaller the grids and the fewer laser points acquired at a given moment fall into a single grid, as in the left diagram of fig. 1, where 4 laser points fall into the grey-bottomed grid (row 6, column 11 of the left diagram); conversely, the lower the resolution, the larger the grids and the more laser points acquired at the same moment fall into a single grid, as in the right diagram of fig. 1, where 9 laser points fall into the grey-bottomed grid (row 4, column 7 of the right diagram). In an ordinary map, a given point either has an obstacle or does not; in an OGM, at a given moment, a grid containing no laser point is considered empty, and a grid containing at least one laser point is considered to hold an obstacle.
Therefore, the probability that a grid is empty is represented as p(s = 1) and the probability that it contains an obstacle as p(s = 0), the sum of the two probabilities being 1. The laser point clouds acquired at different times are then mapped into the OGM, and through a series of mathematical transformations each grid is placed in the occupied state or the idle state according to the probability of whether it is occupied. It should be noted that, in general, the center of the OGM is its origin, which is indicated by the triangle shown in the left diagram of fig. 1.
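The "series of mathematical transformations" is commonly realized as a Bayesian log-odds update of each grid's occupancy probability. The patent does not fix the exact formula, so the following Python sketch, with an assumed inverse sensor model, is only one plausible realization:

```python
import math

# Assumed sensor model (illustrative values, not from the patent):
# a laser point in a grid ("hit") raises the occupancy log-odds,
# an observation of the grid without a point ("miss") lowers it.
L_HIT = math.log(0.7 / 0.3)    # log-odds increment for p(occupied | hit) = 0.7
L_MISS = math.log(0.4 / 0.6)   # log-odds increment for p(occupied | miss) = 0.4

def update(log_odds, hit):
    # additive update in log-odds space avoids repeated multiplications
    return log_odds + (L_HIT if hit else L_MISS)

def probability(log_odds):
    # convert log-odds back to an occupancy probability
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

l = 0.0                        # prior p = 0.5 corresponds to log-odds 0
for observed_hit in [True, True, False, True]:
    l = update(l, observed_hit)
print(probability(l))          # > 0.5, so the grid is placed in the occupied state
```

The design choice of accumulating in log-odds space is standard in occupancy grid mapping: it keeps the per-grid state to a single number and makes each new frame a constant-time addition.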
For ease of understanding, a two-dimensional OGM (with grids generally on the order of decimeters) constructed according to the embodiment of the present application is illustrated below. Each occupied grid stores the average height of the laser points falling into it (i.e., the average height of the obstacle in the grid) and their average reflection intensity. Referring to fig. 2, fig. 2 illustrates an OGM constructed for a certain area according to the embodiment of the present application.
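The per-grid statistics just described can be kept as running averages so that memory per grid stays constant regardless of how many laser points fall into it. The class and attribute names below are illustrative, not from the patent:

```python
# Sketch of an occupied grid's payload: running average height and
# running average reflection intensity of the laser points in the grid.
class GridCell:
    def __init__(self):
        self.count = 0
        self.avg_height = 0.0
        self.avg_intensity = 0.0

    def add_point(self, z, intensity):
        self.count += 1
        # incremental mean: avg += (new - avg) / n
        self.avg_height += (z - self.avg_height) / self.count
        self.avg_intensity += (intensity - self.avg_intensity) / self.count

cell = GridCell()
for z, i in [(1.0, 10.0), (3.0, 20.0)]:
    cell.add_point(z, i)
print(cell.avg_height, cell.avg_intensity)  # 2.0 15.0
```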
Embodiments of the present application are described below with reference to the accompanying drawings. As those skilled in the art will appreciate, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
The map constructed based on the laser point cloud in the embodiment of the present application can be applied to scenes in which various intelligently traveling agents (e.g., unmanned driving, assisted driving, etc.) perform motion planning (e.g., driving behavior decision, global path planning, etc.). Taking an autonomous vehicle as an example of such an agent, the overall architecture of the autonomous vehicle is first explained. Referring to fig. 3, fig. 3 illustrates a top-down layered architecture; defined interfaces can be arranged between the systems for transmitting data among them so as to ensure the real-time performance and integrity of the data. The various systems are briefly introduced below:
(1) environment sensing system
Environmental perception is the most basic part of an intelligent driving vehicle: whether making driving behavior decisions or performing global path planning, the corresponding judgment, decision and planning are carried out on the basis of environmental perception, according to the real-time perception results of the road traffic environment, so as to realize intelligent driving of the vehicle.
The environment sensing system mainly uses various sensors to obtain related environment information so as to complete the construction of an environment model and the knowledge representation of the traffic scene. The sensors used include a camera, a single-line radar (SICK), a four-line radar (IBEO), a three-dimensional laser radar (HDL-64E) and the like. The camera is mainly responsible for traffic light detection, lane line detection, road sign detection, vehicle identification and the like, while the laser sensors are mainly responsible for detection, identification and tracking of dynamic/static obstacles and for accurate positioning of the vehicle itself. For example, the laser emitted by a three-dimensional laser radar generally collects external environment information at a frequency of 10 FPS and returns a laser point cloud at each moment; the acquired real-time laser point cloud is finally sent to the autonomous decision-making system for further decision-making and planning.
(2) Autonomous decision making system
The autonomous decision system is a key component of an intelligent driving vehicle and mainly comprises two core subsystems: behavior decision and motion planning. The behavior decision subsystem obtains a globally optimal driving route by running the global planning layer to clarify the specific driving task. According to the current real-time road information sent by the environment perception system (namely the real-time environment perception information in fig. 3), it outputs, for each real-time frame of laser point cloud, information such as the positions and orientations of objects around the vehicle, and uses a matching algorithm to match against a map constructed in advance (namely the laser map in fig. 3), thereby realizing accurate positioning of the vehicle. Commonly used matching algorithms include direct point cloud matching (e.g., the iterative closest point (ICP) algorithm), probability matching (e.g., the normal distribution transform (NDT) algorithm), filter matching (e.g., histogram filtering), feature matching, and the like.
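Of the matching algorithms listed, direct point cloud matching can be illustrated with a deliberately simplified, translation-only ICP loop: repeatedly match each moved source point to its nearest target point, then shift by the mean offset of the matches. Real ICP also estimates rotation (typically via an SVD-based alignment step); this 2-D pure-Python sketch is illustrative only:

```python
# Toy translation-only ICP: points are (x, y) tuples.
def closest(p, cloud):
    # nearest neighbour by squared Euclidean distance (brute force)
    return min(cloud, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def icp_translation(source, target, iters=10):
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in source]
        pairs = [(p, closest(p, target)) for p in moved]
        # the mean offset between matched pairs updates the translation
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
    return tx, ty

target = [(0, 0), (1, 0), (0, 1)]
source = [(x - 2.0, y + 0.5) for x, y in target]  # target shifted by (-2.0, +0.5)
print(icp_translation(source, target))            # ≈ (2.0, -0.5), the inverse shift
```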
Finally, based on road traffic rules and driving experience, a reasonable driving behavior is decided according to the positioning of the vehicle itself and information such as the positions and orientations of surrounding objects, and the driving behavior instruction is sent to the motion planning subsystem. The motion planning subsystem, according to the received driving behavior instruction and the current environment perception information, plans a feasible driving track based on indexes such as safety and stability and sends it to the control system.
(3) Control system
The control system is specifically divided into two parts: a control subsystem and an execution subsystem. The control subsystem converts the feasible driving track generated by the autonomous decision system into specific execution instructions for each execution module and transmits them to the execution subsystem; the execution subsystem receives the execution instructions from the control subsystem and sends them to each controlled object to reasonably control the steering, braking, accelerator, gears and the like of the vehicle, so that the vehicle runs automatically and completes the corresponding driving operations.
It should be noted that the general architecture of the autonomous vehicle shown in fig. 3 is only illustrative, and in practical applications, more or fewer systems/subsystems or modules may be included, and each system/subsystem or module may include multiple components, which is not limited herein.
For further understanding of the present solution, based on the general architecture of the autonomous vehicle described with fig. 3, the specific functions of the internal structures of the autonomous vehicle in the embodiment of the present application will be described with reference to fig. 4. Referring to fig. 4, fig. 4 is a schematic structural diagram of the autonomous vehicle provided in the embodiment of the present application. The autonomous vehicle 100 is configured in a fully or partially autonomous driving mode. For example, while in the autonomous driving mode, the autonomous vehicle 100 may control itself: it may determine the current state of the vehicle and its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine the confidence level corresponding to the possibility that the other vehicle performs the possible behavior, and control the autonomous vehicle 100 based on the determined information. The autonomous vehicle 100 may also be placed into operation without human interaction while in the autonomous mode.
Autonomous vehicle 100 may include various subsystems such as a travel system 102, a sensor system 104 (e.g., camera, SICK, IBEO, lidar, etc. of FIG. 3, all belonging to a module in sensor system 104), a control system 106, one or more peripherals 108, and a power supply 110, a computer system 112, and a user interface 116. Alternatively, the autonomous vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the autonomous vehicle 100 may be interconnected by wires or wirelessly.
The travel system 102 may include components that provide powered motion to the autonomous vehicle 100. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 119 may also provide energy to other systems of the autonomous vehicle 100. The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 120 may also include other devices, such as a clutch. Wherein the drive shaft may comprise one or more shafts that may be coupled to one or more wheels 121.
The sensor system 104 may include a number of sensors that sense information about the environment surrounding the autonomous vehicle 100. For example, the sensor system 104 may include a positioning system 122 (which may be a global positioning GPS system, a compass system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor the internal systems of the autonomous vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). The sensing data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a key function of safe operation of the autonomous vehicle 100. In the embodiment of the present application, the laser sensor is a particularly important sensing module in the sensor system 104.
Where the positioning system 122 may be used to estimate the geographic location of the autonomous vehicle 100, in this embodiment, a laser sensor may be used as one of the positioning systems 122 to achieve precise positioning of the autonomous vehicle 100, and the IMU 124 may be used to sense changes in the position and orientation of the autonomous vehicle 100 based on inertial acceleration. In one embodiment, IMU 124 may be a combination of an accelerometer and a gyroscope. The radar 126 may utilize radio signals to sense objects within the surrounding environment of the autonomous vehicle 100, which may be embodied as millimeter wave radar or lidar. In some embodiments, in addition to sensing objects, radar 126 may also be used to sense the speed and/or heading of an object. The laser rangefinder 128 may use a laser to sense objects in the environment in which the autonomous vehicle 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, among other system components. The camera 130 may be used to capture multiple images of the surrounding environment of the autonomous vehicle 100. The camera 130 may be a still camera or a video camera.
The control system 106 is for controlling the operation of the autonomous vehicle 100 and its components. The control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the autonomous vehicle 100; for example, in one embodiment it may be a steering wheel system. The throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the autonomous vehicle 100. The brake unit 136 is used to control the deceleration of the autonomous vehicle 100 and may use friction to slow the wheels 121. In other embodiments, the brake unit 136 may convert the kinetic energy of the wheels 121 into an electric current. The brake unit 136 may also take other forms to slow the rotational speed of the wheels 121 so as to control the speed of the autonomous vehicle 100. The computer vision system 140 may be operable to process and analyze images captured by the camera 130 to identify objects and/or features in the environment surrounding the autonomous vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 140 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The route control system 142 is used to determine the travel route and travel speed of the autonomous vehicle 100. In some embodiments, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which are used to determine the travel route and travel speed of the autonomous vehicle 100 in conjunction with data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps.
Obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of autonomous vehicle 100, which may be embodied as actual obstacles and virtual moving objects that may collide with autonomous vehicle 100. In one example, the control system 106 may additionally or alternatively include components other than those shown and described. Or may reduce some of the components shown above.
The autonomous vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripherals 108. The peripheral devices 108 may include a wireless communication system 146, an in-vehicle computer 148, a microphone 150, and/or speakers 152. In some embodiments, the peripheral devices 108 provide a means for a user of the autonomous vehicle 100 to interact with the user interface 116. For example, the onboard computer 148 may provide information to a user of the autonomous vehicle 100, and the user interface 116 may also operate the in-vehicle computer 148 to receive user input; the in-vehicle computer 148 may be operated via a touch screen. In other cases, the peripheral devices 108 may provide a means for the autonomous vehicle 100 to communicate with other devices located within the vehicle. For example, the microphone 150 may receive audio (e.g., voice commands or other audio input) from a user of the autonomous vehicle 100. Similarly, the speaker 152 may output audio to a user of the autonomous vehicle 100. The wireless communication system 146 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication, such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication, such as LTE; or 5G cellular communication. The wireless communication system 146 may communicate using a wireless local area network (WLAN). In some embodiments, the wireless communication system 146 may utilize an infrared link, Bluetooth, or ZigBee to communicate directly with a device, or may use other wireless protocols, such as various vehicle communication systems; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the autonomous vehicle 100. In one embodiment, power source 110 may be a rechargeable lithium ion or lead acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to various components of the autonomous vehicle 100. In some embodiments, the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
Some or all of the functions of the autonomous vehicle 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113, the processor 113 executing instructions 115 stored in a non-transitory computer readable medium, such as the memory 114. The computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the autonomous vehicle 100 in a distributed manner. The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor 113 may be a dedicated device such as an application specific integrated circuit (ASIC) or another hardware-based processor. Although fig. 4 functionally illustrates the processor, memory, and other components of the computer system 112 in the same block, those skilled in the art will appreciate that the processor or memory may actually comprise multiple processors or memories that are not stored within the same physical housing. For example, the memory 114 may be a hard drive or other storage medium located in a different enclosure than the computer system 112. Thus, references to the processor 113 or the memory 114 are to be understood as including references to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computations related to that component's specific function.
In various aspects described herein, the processor 113 may be located remotely from the autonomous vehicle 100 and in wireless communication with the autonomous vehicle 100. In other aspects, some of the processes described herein are executed on a processor 113 disposed within the autonomous vehicle 100 while others are executed by the remote processor 113, including taking the steps necessary to execute a single maneuver.
In some embodiments, the memory 114 may contain instructions 115 (e.g., program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the autonomous vehicle 100, including those described above. The memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108. In addition to instructions 115, memory 114 may also store data such as road maps, route information, the location, direction, speed of the vehicle, and other such vehicle data, among other information. Such information may be used by the autonomous vehicle 100 and the computer system 112 during operation of the autonomous vehicle 100 in autonomous, semi-autonomous, and/or manual modes. A user interface 116 for providing information to or receiving information from a user of the autonomous vehicle 100. Optionally, the user interface 116 may include one or more input/output devices within the collection of peripheral devices 108, such as a wireless communication system 146, an in-vehicle computer 148, a microphone 150, and a speaker 152.
The computer system 112 may control the functions of the autonomous vehicle 100 based on inputs received from various subsystems (e.g., the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the autonomous vehicle 100 and its subsystems.
Alternatively, one or more of these components described above may be mounted or associated separately from the autonomous vehicle 100. For example, the memory 114 may exist partially or completely separate from the autonomous vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in practical applications, components in the above modules may be added or deleted according to actual needs, and fig. 4 should not be construed as limiting the embodiment of the present application. An autonomous vehicle traveling on a roadway, such as the autonomous vehicle 100 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each identified object may be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and separation from the vehicle, may be used to determine the speed to which the autonomous vehicle is to be adjusted.
Optionally, the autonomous vehicle 100 or a computing device associated with the autonomous vehicle 100, such as the computer system 112, the computer vision system 140, the memory 114 of fig. 4, may predict behavior of the identified object based on characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, each identified object depends on the behavior of each other, so it is also possible to predict the behavior of a single identified object taking all identified objects together into account. The autonomous vehicle 100 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle 100 is able to determine what steady state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the autonomous vehicle 100, such as the lateral position of the autonomous vehicle 100 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and so forth. In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the autonomous vehicle 100 to cause the autonomous vehicle 100 to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle 100 (e.g., cars in adjacent lanes on a road).
The autonomous vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a trolley, etc.; the embodiment of the present application is not particularly limited in this regard.
An embodiment of the present application provides a method for constructing a map, where the constructed map may be applied to scenes in which an intelligently traveling agent (e.g., unmanned driving, assisted driving, etc., such as the general architecture and structural functional modules of the autonomous vehicle corresponding to figs. 3 and 4) performs motion planning (e.g., driving behavior decision, global path planning, etc.). Referring to fig. 5, fig. 5 is a flowchart of the method for constructing a map provided in the embodiment of the present application, which may include the following steps:
501. Extract target features based on the laser point cloud data, where the target features are laser points extracted from the laser point cloud data that meet preset conditions.
The computing device first obtains the laser point clouds required for map building and performs feature extraction on each frame of the obtained laser point clouds to obtain the target features. The target features are laser points extracted from the laser point cloud data that meet preset conditions, and each laser point comprises its coordinates and its reflection intensity. For example, a laser sensor in a standard pose may acquire one frame of laser point cloud at each of several different geographic positions, obtaining n frames in total, after which the computing device performs feature extraction on each of the n frames to obtain the target features. Alternatively, every time the laser sensor in the standard pose acquires a frame of laser point cloud at a different geographic position, it may send that frame to the computing device for feature extraction, until all n frames have been processed. The way the computing device processes the laser point cloud is not specifically limited here.
In some embodiments of the present application, extracting the target features essentially means extracting certain specific laser points from a frame of laser point cloud. The extracted target features are generally line features and surface features, where a line feature indicates that the laser points extracted from the laser point cloud data are located on the same straight line, and a surface feature indicates that the laser points extracted from the laser point cloud data are located on the same plane.
In the embodiment of the present application, the method for extracting the target features from the laser point cloud is generally implemented using various screening means, which can be summarized as laser point cloud curvature feature extraction: by calculating the curvature of the laser point cloud and filtering and screening according to the curvature, it is determined which laser points in a frame of laser point cloud are located on the same plane and which are located on the same straight line.
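One common way to realize such curvature-based screening, in the spirit of LOAM-style feature extraction (the patent does not give the exact formula, so this is a hedged sketch), is to score each point by the norm of the sum of difference vectors to its scan neighbours: the score is near zero on a straight or flat stretch and large at corners, so low-curvature points are surface-feature candidates and high-curvature points are line/edge-feature candidates. A 2-D pure-Python illustration:

```python
# Curvature score of point i over a window of k neighbours on each side.
# Illustrative only; real implementations work on 3-D scan lines.
def curvature(points, i, k=2):
    sx = sum(points[j][0] - points[i][0] for j in range(i - k, i + k + 1) if j != i)
    sy = sum(points[j][1] - points[i][1] for j in range(i - k, i + k + 1) if j != i)
    return (sx * sx + sy * sy) ** 0.5

# An L-shaped scan: a flat run, a corner at index 4, then another flat run.
scan = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3), (4, 4)]
flat = curvature(scan, 2)    # interior of the straight segment -> 0.0
corner = curvature(scan, 4)  # the corner point -> large
print(flat, corner)
```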
502. Construct a function fitting the target features and obtain the limiting conditions of the function, where the function and the limiting conditions constitute a main map.
After the corresponding target features are extracted for each frame of laser point cloud, a function fitting each target feature may be constructed and the limiting conditions of the function obtained. For example, assuming that 3 target features are extracted from the first frame of laser point cloud, of which 2 are line features and 1 is a surface feature, then functions fitting these 3 target features (3 functions in total) may be constructed and the limiting condition of each function obtained (3 limiting conditions in total). The same processing may be performed for each frame of laser point cloud, after which the functions and limiting conditions corresponding to the target features of all n frames of laser point cloud are obtained; these functions and limiting conditions constitute the main map. For example, assuming there are 100 frames of laser point cloud from which 800 target features are extracted, 800 functions and the 800 corresponding limiting conditions are obtained, and these 800 functions and 800 limiting conditions constitute the main map.
It should be noted here that the 800 extracted target features are target features that have already been screened and merged. For example, suppose 10 target features and 8 target features are extracted from 2 different frames of laser point cloud respectively, and some of these target features represent the same object (e.g., the same street lamp, road block, etc.); the identical target features must first be merged into one. Assuming that 2 of the 10 target features extracted from the earlier frame represent the same objects as 2 of the 8 target features extracted from the later frame, only one copy of those 2 target features needs to be retained. Subsequently extracted target features are processed in the same way, which is not described again here.
It should be noted that, in some embodiments of the present application, every time a frame of laser point cloud is obtained, the target features are extracted from that frame (assume 3 target features are extracted), functions fitting the target features are constructed, and the corresponding limiting conditions are obtained (for example, 3 functions and 3 limiting conditions are constructed); this processing is applied to each acquired frame until all laser point clouds have been processed. In other embodiments of the present application, all n frames (for example, 100 frames) of laser point cloud may first be acquired, target feature extraction may then be performed on all n frames (assume 800 screened target features are extracted altogether), and the functions fitting all the target features and the corresponding limiting conditions may then be constructed (for example, 800 functions and 800 limiting conditions).
To facilitate understanding of the above steps 501 and 502, an example is given. Referring to fig. 6, assume the left diagram of fig. 6 is a frame of laser point cloud (visualized) acquired by the computing device and corresponding to a certain geographic location. The computing device first extracts target features from the acquired frame, that is, it finds which laser points are on the same plane and which are on the same straight line. After the target features are extracted, the corresponding functions are fitted to them. Fig. 6 illustrates two extracted target features as an example (a dozen or more target features may be extracted from an actual frame of laser point cloud; this is only an illustration). Assume that one of the two target features extracted by the computing device is a line feature and the other is a surface feature; the computing device fits the two target features respectively, obtaining, say, the function f shown in fig. 6 by fitting the line feature and the function g shown in fig. 6 by fitting the surface feature. Without limiting conditions, the function f represents a straight line of unbounded length and the function g a plane without boundary. Therefore, the limiting conditions of the two functions need to be obtained respectively, such that each constrained function just covers the fitted target feature; as shown in fig. 6, the constrained function f and constrained function g just fit the extracted target features.
It should be noted that, in some embodiments of the present application, the limiting condition of the function may be the value range of the argument corresponding to the function. For ease of understanding, still taking fig. 6 as an example, assume that the expression of the function f obtained by fitting is f(x) = ax + b, where x is the three-dimensional coordinate of the laser point and a and b are parameters. The values of the parameters a and b can be determined (within the fitting error range) according to the extracted target feature, but the function f with determined parameter values is a straight line extending infinitely in three-dimensional space. Therefore, the minimal interval containing the three-dimensional coordinate values of the extracted target feature, that is, the value interval of the argument corresponding to the function f, can be used as the limiting condition of the function f.
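For a line feature, the fit and its limiting condition can be sketched in one routine: a least-squares fit of y = a·x + b to the extracted points, with the limiting condition taken as the minimal x-interval containing them (one of the representations the text describes). This 2-D pure-Python version is illustrative; names are hypothetical:

```python
# Fit y = a*x + b by least squares and return the parameters together
# with the minimal x-interval covering the points (the limiting condition).
def fit_line_with_bounds(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx
    b = my - a * mx
    bounds = (min(x for x, _ in points), max(x for x, _ in points))
    return a, b, bounds

pts = [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0)]
print(fit_line_with_bounds(pts))  # a = 2.0, b = 1.0, bounds = (0, 3)
```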
It should be noted that, in some embodiments of the present application, the limiting condition of a function may also be the coordinates of certain key laser points, for example, the key laser points located at the two ends of the target feature fitted by the function f in fig. 6, or at the corner points of the plane of the target feature fitted by the function g in fig. 6. The specific expression of the limiting condition is not limited here.
It should be noted that, in some embodiments of the present application, the functions and limiting conditions constituting the main map may be expressed in the Universal Transverse Mercator (UTM) grid system coordinate system.
It should be noted that, since the main map is composed of a plurality of functions and their corresponding limiting conditions, each function may be numbered, that is, assigned an ID, to facilitate subsequent searching. The numbering manner is not particularly limited here.
It should be further noted that, in some embodiments of the present application, feature information for visual positioning may also be added to the main map; that is, perception information acquired by multiple different types of sensors (for example, picture information captured in real time by a camera installed on an autonomous vehicle) is fused into the main map, so that subsequent positioning is more accurate.
503. Construct an occupancy grid map according to the laser point cloud data.
Two operations need to be performed on each acquired frame of laser point cloud: one is to extract target features, construct a function fitting each target feature, and obtain the limiting condition corresponding to each function, so as to obtain the main map (as described in steps 501 and 502); the other is to construct a sub-map, namely an occupancy grid map (OGM).
How to construct the OGM from each frame of laser point cloud is described in detail here. The computing device constructs a corresponding occupancy grid sub-map for each acquired frame of laser point cloud, or for several consecutive frames of laser point cloud. For example, assuming there are 100 frames of laser point cloud in total, 100 occupancy grid sub-maps can be constructed (the construction may be one-to-one or many-to-one; this is not limited here). Each occupancy grid sub-map is constructed as follows: after the length and width of the occupancy grid sub-map (that is, its size) and the grid resolution are set, the computing device projects each frame of laser point cloud, obtained in the laser coordinate system, onto the corresponding occupancy grid sub-map. If a grid contains no laser point, the grid is considered empty; if it contains at least one laser point, the grid is considered to contain an obstacle. For a given grid, the probability that it is empty is denoted p(s = 1) and the probability that it contains an obstacle is denoted p(s = 0), and the two probabilities sum to 1. The computing device then performs a series of mathematical transformations on each frame of laser point cloud projected onto the occupancy grid sub-map, and each grid is set to the occupied state or the vacant state according to its occupancy probability; the center position of the occupancy grid sub-map is its origin O. The same processing is performed on each frame of laser point cloud, so that the n frames of laser point cloud each correspond to one occupancy grid sub-map (n in total), and the n occupancy grid sub-maps are then spliced to obtain the complete OGM.
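The projection step described above can be sketched as follows. This is a minimal toy, assuming a square sub-map centred on the origin O; the function name `build_occupancy_submap` and its parameters are illustrative, not the patent's API:

```python
def build_occupancy_submap(points, size_m=4.0, resolution=1.0):
    """Project one frame of laser points onto an occupancy grid
    sub-map; a grid with at least one laser point is marked occupied,
    a grid with none is considered empty (illustrative sketch)."""
    n = int(size_m / resolution)     # number of grids per side
    half = size_m / 2.0              # sub-map is centred on its origin O
    occupied = set()
    for x, y, _z in points:
        i = int((x + half) / resolution)
        j = int((y + half) / resolution)
        if 0 <= i < n and 0 <= j < n:
            occupied.add((i, j))     # grid contains an obstacle
    return occupied

# One frame with two laser points (x, y, z) in the laser coordinate system.
frame = [(0.5, 0.5, 1.2), (-1.5, 0.5, 0.8)]
submap = build_occupancy_submap(frame)
```

In a full implementation the binary occupied/empty decision would be replaced by the probabilistic update the text mentions, but the projection geometry is the same.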
For ease of understanding, refer to fig. 7. Fig. 7 illustrates 9 occupancy grid sub-maps, constructed from 9 frames of laser point cloud, spliced into one OGM, where O1 to O9 are the origins of the 9 occupancy grid sub-maps. The sub-maps are spliced in order of their origin coordinates to obtain the whole OGM, which constitutes the sub-map of the map provided in the embodiment of the present application.
It should be noted that, in some embodiments of the present application, the constructed sub-map may be an OGM as described in the related concepts, that is, an OGM in which each occupied grid stores the average height of the laser points falling in that grid (i.e., the average height of the obstacle in the grid) and the average reflection intensity (i.e., the average of the reflection intensities of the laser points falling in the grid). Specifically, referring to fig. 8, fig. 8 illustrates an occupied grid of the OGM storing the average height h1 (relative to the ground) and the average reflection intensity R1 of the corresponding obstacle, where O is the origin, that is, the center point, of the OGM.
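A minimal sketch of what one occupied grid stores under this scheme (the dictionary keys are illustrative assumptions; h1 and R1 correspond to the quantities in fig. 8):

```python
# Laser points that fall in one occupied grid:
# (height above ground in metres, reflection intensity)
points_in_grid = [(3.0, 0.60), (5.0, 0.40)]

heights = [h for h, _ in points_in_grid]
intensities = [r for _, r in points_in_grid]

cell = {
    "avg_height": sum(heights) / len(heights),             # h1 in fig. 8
    "avg_intensity": sum(intensities) / len(intensities),  # R1 in fig. 8
}
```

Keeping both quantities in the grid itself is what makes the look-up during matching a single access, as the next paragraph argues.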
It should be noted here that the OGM provided in the embodiment of the present application, as shown in fig. 8, differs from the existing OGM in that it stores not only the average height of the obstacle occupying a grid but also the average reflection intensity of the laser points falling in that grid, whereas existing OGMs store only the average height of the obstacle, with the reflection intensity of the laser points stored elsewhere. The advantage of this storage scheme in the embodiment of the present application is that the data is easy to look up, which is more convenient in practical applications.
It should also be noted that, in some embodiments of the present application, the constructed sub-map may be an improved OGM, that is, each occupied grid in the OGM can be classified as partially occupied or fully occupied. Full occupancy refers to an obstacle extending from the ground up to a certain height (e.g., a general building such as an office building or residential building); partial occupancy refers to a structure occupying only part of the vertical space, such as a bridge opening, tunnel, viaduct, or pedestrian overpass, which may also be called a suspended obstacle. For these two occupancy types: when an occupied grid in the OGM is fully occupied, that is, the obstacle in the grid is a general obstacle, the height stored in the grid is the height of the upper edge of the obstacle above the ground; when an occupied grid in the OGM is partially occupied, that is, the obstacle in the grid is a suspended obstacle, the heights stored in the grid are a first height of the lower edge of the obstacle above the ground and a second height of the upper edge of the obstacle above the ground. For ease of understanding, refer to fig. 9, which illustrates two grids of the OGM classified as fully occupied and partially occupied, storing "h2, R2" and "(h0, h3), R3" respectively, where h2 is the height of the obstacle in the corresponding grid from the ground to its top and R2 is the average reflection intensity of the laser points falling in that grid; h0 is the first height of the lower edge of the obstacle above the ground, h3 is the second height of the upper edge of the obstacle above the ground, and R3 is the average reflection intensity of the laser points falling in that grid.
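The two occupancy types can be sketched as records like the following. The field names and the `clearance_under` helper are assumptions for illustration; a partially occupied cell stores the (h0, h3) pair described above:

```python
# Fully occupied grid, stored as "h2, R2" in fig. 9: one height
# (top edge above ground) plus the average reflection intensity.
full_cell = {"occupancy": "full", "height": 2.0, "avg_intensity": 0.7}

# Partially occupied grid (suspended obstacle), stored as
# "(h0, h3), R3": lower-edge and upper-edge heights above ground.
partial_cell = {"occupancy": "partial", "height": (4.5, 6.0), "avg_intensity": 0.3}

def clearance_under(cell):
    """Free vertical space under the obstacle: zero for a full cell,
    the lower-edge height h0 for a suspended obstacle."""
    if cell["occupancy"] == "partial":
        return cell["height"][0]
    return 0.0
```

Distinguishing the two types is what lets the map record, for example, that a vehicle can pass under a bridge opening even though the grid is "occupied".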
It should be noted that, in some embodiments of the present application, the OGM can further enrich the stored obstacle heights; for example, for a multi-layer suspended obstacle, a height division of h01, h02, h03, …, h0n can also be provided.
It should be noted that the improved OGM shown in fig. 9 differs from the existing OGM in that obstacles occupying grids are classified into general buildings and suspended obstacles, different heights are stored for the different obstacle types, and the average reflection intensity of the laser points occupying each grid is also stored, whereas in the existing OGM an occupied grid is always considered fully occupied. The advantage of storing obstacle heights in this way is that more detailed characteristics of the obstacles are retained, which improves the accuracy of subsequent positioning.
It should also be noted that, in some embodiments of the present application, the height of a grid obstacle in the OGM may be stored as integer data, the average reflection intensity of the laser points falling into an occupied grid may be stored as integer data, or both may be stored as integer data. Integer data is numerical data without a fractional part; it represents an integer and is stored in binary form. For example, taking a discretization of 0.1 m (meters), int8 data can express heights of 0 to 25.6 m. Assuming the height of an obstacle in an occupied grid is 6.7789 m, the existing OGM stores it directly as the floating-point value 6.7789 m, whereas in the embodiment of the present application it is stored as the integer 68; since the discretization is 0.1 m, the integer 68 represents 6.8 m.
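The integer storage in this example amounts to a simple quantization at the 0.1 m step (function names here are illustrative, not the patent's):

```python
def quantize_height(height_m, step=0.1):
    """Store a height as integer data at a 0.1 m discretization,
    following the example in the text (6.7789 m -> 68)."""
    return round(height_m / step)

def dequantize_height(q, step=0.1):
    """Recover the discretized height from the stored integer."""
    return q * step

q = quantize_height(6.7789)   # stored as the integer 68
h = dequantize_height(q)      # read back as 6.8 m
```

An int8 value occupies one byte versus four for a float32, which is where the claimed storage saving comes from.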
It should be further noted that, in some embodiments of the present application, step 503 may be performed before step 501, step 503 may also be performed after step 502, and step 503 may also be performed simultaneously with step 501, which is not limited herein.
504. Establish an index of the occupancy grid map on the main map.
After each frame of laser point cloud is processed, the main map and the OGM are obtained; the two then need to be connected to form a composite laser map. Specifically, the computing device may establish an index of the OGM on the main map. One implementation is as follows: the computing device first converts the center position (that is, the origin) of the occupancy grid sub-map corresponding to each laser point cloud into a coordinate value in the UTM coordinate system, and then adds each origin coordinate value to the main map as the index tag of the occupancy grid sub-map corresponding to that frame of laser point cloud.
For ease of understanding, still taking fig. 7 as an example: fig. 7 contains 9 occupancy grid sub-maps with origins O1 to O9. The computing device first converts the coordinates of these 9 origins on the grid map into coordinates in the UTM coordinate system (9 coordinates in total), and then stores each origin coordinate in the main map as an index tag. In practical applications, the autonomous vehicle also positions itself in the UTM coordinate system.
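The index-tag idea can be sketched as follows — a toy with made-up UTM coordinate values; `nearest_submap` is an assumed helper, not the patent's interface. Each sub-map origin, expressed in UTM coordinates, is stored on the main map, and at positioning time the sub-map whose origin is closest to the vehicle's UTM pose can be retrieved:

```python
import math

# Index tags stored on the main map: sub-map ID -> origin in UTM metres
# (easting, northing). The coordinate values here are invented.
master_index = {
    "O1": (685000.0, 3110000.0),
    "O2": (685200.0, 3110000.0),
    "O3": (685400.0, 3110050.0),
}

def nearest_submap(utm_xy, index):
    """Return the ID of the sub-map whose origin is closest to the
    given UTM position."""
    return min(index, key=lambda k: math.dist(index[k], utm_xy))

# A vehicle self-positioned at this UTM coordinate would match
# against the sub-map with origin O2.
submap_id = nearest_submap((685190.0, 3110010.0), master_index)
```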
In the above embodiment of the present application, as shown in fig. 10, the computing device performs two operations on each acquired frame of laser point cloud. One operation is target feature extraction: a function fitting each target feature is constructed and the limiting condition corresponding to each function is obtained, and the functions together with their limiting conditions form the main map. The other operation is constructing the sub-map, namely the occupancy grid map (OGM), obtained by splicing the occupancy grid sub-maps corresponding to the individual frames of laser point cloud. Then, by establishing an index of the OGM on the main map, a composite laser map of the main map and the OGM is constructed, which reduces the storage capacity of the composite laser map while retaining more feature information for subsequent matching and positioning.
Based on the composite laser map obtained in the foregoing embodiment, the following describes how the autonomous vehicle is accurately positioned with it; refer to fig. 11. First, a laser sensor mounted on the autonomous vehicle acquires laser point clouds in real time and preprocesses the laser point cloud acquired at the current time, while an IMU on the autonomous vehicle senses the attitude information of the vehicle, such as changes in its position and orientation, based on inertial acceleration. Then, based on the pre-constructed composite laser map and a matching algorithm, the positioning result of the autonomous vehicle is obtained. The positioning process may be performed in two stages, for example matching against the main map first and then matching against the OGM according to the index tag, thereby achieving accurate positioning. Common matching algorithms include iterative closest point matching, feature matching, filter matching, probability matching, and the like. A map formed directly from the original laser point clouds retains the original point-cloud information and can therefore use any of these matching algorithms, whereas the OGMs currently adopted in the industry can only use filter matching and probability matching.
It should be noted here that the composite laser map is constructed using the standard pose of a standard vehicle; in the actual positioning process, the initial pose of the autonomous vehicle needs to be adjusted as close to the standard pose as possible, so that the positioning accuracy is higher.
In the above embodiments of the present application, compared with directly building a laser map from the original laser point clouds, the composite laser map requires less storage space and is lighter; compared with the existing approach of compressing the three-dimensional laser point cloud into two-dimensional information and building a two-dimensional OGM from it, the composite laser map improves the OGM so that it retains more detailed characteristics. In addition, in the more open scenes around some roads, the original OGM is relatively uniform, and matching with the original OGM alone often brings a larger positioning error (because different local positions in the OGM of an open scene can look similar). The main map in the composite laser map, however, can provide line and surface features such as light poles and guideboards, thereby improving matching accuracy.
On the basis of the embodiment corresponding to fig. 5, in order to better implement the above-mentioned solution of the embodiment of the present application, the following also provides a related device for implementing the above-mentioned solution. Referring to fig. 12 in particular, fig. 12 is a schematic structural diagram of a computing device 1200 according to an embodiment of the present disclosure, where the computing device 1200 may be deployed in various intelligent traveling (e.g., unmanned driving, assisted driving, and the like) agents (e.g., autonomous driving vehicles, assisted driving vehicles, and the like in wheeled mobile devices) to construct a composite laser map, so that the intelligent agents perform positioning based on the constructed composite laser map; the computing device 1200 may also be an independent terminal device, such as a mobile phone, a personal computer, a tablet, and other smart devices, configured to construct a composite laser map and send the constructed composite laser map to various smart agents (e.g., an autonomous vehicle, an assisted vehicle, and the like in a wheeled mobile device) that are intelligently driven (e.g., unmanned, assisted, and the like) for positioning the smart agents. 
The computing device 1200 may include: an extraction module 1201, a first construction module 1202, a second construction module 1203, and an index module 1204. The extraction module 1201 is configured to extract target features based on laser point cloud data, where the target features are laser points that are extracted from the laser point cloud data and meet a preset condition, and each laser point includes the coordinates of the laser point and the reflection intensity of the laser point; the first construction module 1202 is configured to construct a function fitting the target feature and obtain a limiting condition of the function, where the function and the limiting condition form a main map; the second construction module 1203 is configured to construct an occupancy grid map (OGM) according to the laser point cloud data; and the index module 1204 is configured to establish an index of the OGM on the main map.
In the above embodiment of the present application, the computing device 1200 performs two operations on the acquired laser point clouds. One is to extract the target features through the extraction module 1201, construct a function fitting each target feature through the first construction module 1202, and obtain the limiting condition corresponding to each function, the functions and their limiting conditions forming the main map. The other is to construct the sub-map, namely the OGM, from each frame of laser point cloud through the second construction module 1203. The index module 1204 then establishes an index of the OGM on the main map, so as to construct a composite laser map of the main map and the OGM, thereby reducing the storage capacity of the composite laser map while retaining more feature information for subsequent matching and positioning.
In one possible design, the second construction module 1203 is specifically configured to: construct a first occupancy grid sub-map according to first laser point cloud data, where the first laser point cloud data belongs to any one or more frames of the acquired laser point cloud; and splice the constructed first occupancy grid sub-maps to obtain the occupancy grid map. For example, assuming there are 100 frames of laser point cloud in total, 100 occupancy grid sub-maps can be constructed, and each occupancy grid sub-map is constructed as follows: after the length and width of the occupancy grid sub-map (that is, its size) and the grid resolution are set, the computing device 1200 projects each frame of laser point cloud obtained in the laser coordinate system onto the corresponding occupancy grid sub-map; if a grid contains no laser point, the grid is considered empty, and if it contains at least one laser point, the grid is considered to contain an obstacle. For a given grid, the probability that it is empty is denoted p(s = 1) and the probability that it contains an obstacle is denoted p(s = 0), the two probabilities summing to 1. The computing device 1200 then performs a series of mathematical transformations on each frame of laser point cloud projected onto the occupancy grid sub-map, and each grid is set to the occupied state or the vacant state according to its occupancy probability, where the center position of the occupancy grid sub-map is its origin O. The same processing is performed on each frame of laser point cloud, so that the n frames of laser point cloud each correspond to one occupancy grid sub-map (that is, the first occupancy grid sub-map, n in total), and the occupancy grid sub-maps corresponding to the n frames of laser point cloud are then spliced to obtain the complete OGM.
In the above embodiment of the present application, how the second construction module 1203 constructs the corresponding occupancy grid sub-maps from the laser point cloud data and splices them into a complete OGM is described in detail; the scheme is readily realizable.
In one possible design, the index module 1204 is specifically configured to: first convert the center position (that is, the origin) of the occupancy grid sub-map corresponding to each laser point cloud into a coordinate value in the UTM coordinate system, and then add each origin coordinate value to the main map as the index tag of the occupancy grid sub-map corresponding to each frame of laser point cloud.
In the above embodiments of the present application, a specific implementation of establishing the relationship between the main map and the OGM is described, namely adding the index tag of each occupancy grid sub-map to the main map; this implementation is simple and convenient to operate.
In one possible design, the first construction module 1202 is specifically configured to: obtain a value interval of the independent variable corresponding to the function; or obtain a value of a target independent variable in the function, where the target independent variable includes the coordinates of a target laser point, and the target laser point belongs to the target feature.
In the above embodiments of the present application, several specific expressions of the limiting condition of a function are given, providing flexibility and selectivity.
In one possible design, the OGM includes: the height of the obstacle in a first grid and the average of the reflection intensities of the laser points falling in the first grid, where the first grid is any occupied grid in the occupancy grid map. That is, each occupied grid (the first grid) in the OGM obtained in the embodiment of the present application stores the average height of the laser points falling in the grid (i.e., the average height of the obstacle in the grid) and the average reflection intensity (i.e., the average of the reflection intensities of the laser points falling in the grid).
In the above embodiments of the present application, the provided OGM differs from the existing OGM in that it stores not only the average height of the obstacle occupying a grid but also the average reflection intensity of the laser points falling in that grid, whereas existing OGMs store only the average height of the obstacle, with the reflection intensity of the laser points stored elsewhere. The advantage of this storage scheme is that the data is easy to look up, which is more convenient in practical applications.
In one possible design, the height of an obstacle occupying a grid in the OGM may be stored as integer data, the average reflection intensity of the laser points falling into an occupied grid may be stored as integer data, or both may be stored as integer data.
In the above embodiments of the present application, it is stated that the data stored in the occupied grids of the OGM may be integer data. Existing schemes store the data as floating-point values, and integer data occupies less storage space than floating-point data (theoretically, the storage occupied by int8 integer data is 1/4 that of floating-point data), so the embodiment of the present application has the advantage of saving storage capacity.
In one possible design, each occupied grid in the OGM obtained in the embodiments of the present application can be classified as partially occupied or fully occupied. Full occupancy refers to an obstacle extending from the ground up to a certain height (e.g., a general building such as an office building or residential building); partial occupancy refers to a structure occupying only part of the vertical space, such as a bridge opening, tunnel, viaduct, or pedestrian overpass, which may also be called a suspended obstacle. For these two occupancy types: when an occupied grid in the OGM is fully occupied, that is, the obstacle in the grid is a general obstacle, the height stored in the grid is the height of the upper edge of the obstacle above the ground; when an occupied grid in the OGM is partially occupied, that is, the obstacle in the grid is a suspended obstacle, the heights stored in the grid are a first height of the lower edge of the obstacle above the ground and a second height of the upper edge of the obstacle above the ground.
In the above embodiments of the present application, the improved OGM differs from the existing OGM in that obstacles occupying grids are classified into general buildings and suspended obstacles, different heights are stored for the different obstacle types, and the average reflection intensity of the laser points occupying each grid is also stored, whereas in the existing OGM an occupied grid is always considered fully occupied. The advantage of storing obstacle heights in this way is that more detailed characteristics of the obstacles are retained, which improves the accuracy of subsequent positioning.
In one possible design, the target features in this embodiment of the present application are essentially laser points extracted from a frame of laser point cloud, and the extracted target features are generally line features and surface features, where a line feature indicates that the laser points extracted from the laser point cloud data lie on the same straight line, and a surface feature indicates that the laser points extracted from the laser point cloud data lie on the same plane.
In the above embodiments of the present application, some conditions met by the extracted target features are specifically set forth, so that the extracted target features have key features beneficial to subsequent positioning.
It should be noted that, the contents of information interaction, execution process, and the like between modules/units in the computing device described in the embodiment corresponding to fig. 12 are based on the same concept as the method embodiment corresponding to fig. 5 in the present application, and specific contents may refer to the description in the foregoing method embodiment in the present application, and are not described again here.
Referring to fig. 13, fig. 13 is a schematic structural diagram of a computing device provided in an embodiment of the present application. For convenience of description, only the portions related to the embodiment of the present application are shown; for undisclosed specific technical details, refer to the method part of the embodiments of the present application. The computing device 1300 may be deployed with the modules of the computing device described in the embodiment corresponding to fig. 12, to implement the functions of the computing device in that embodiment. Specifically, the computing device 1300 is implemented by one or more servers and may vary greatly with configuration or performance; it may include one or more central processing units (CPUs) 1322 (e.g., one or more processors), a memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) for storing applications 1342 or data 1344. The memory 1332 and the storage medium 1330 may be transitory or persistent storage. The program stored on the storage medium 1330 may include one or more modules (not shown), each of which may include a series of instruction operations on the computing device. Furthermore, the central processing unit 1322 may be configured to communicate with the storage medium 1330 and execute, on the computing device 1300, the series of instruction operations in the storage medium 1330.
The computing device 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
In this embodiment of the application, the steps executed by the computing device in the embodiment corresponding to fig. 5 may be implemented based on the structure shown in fig. 13, and details are not repeated here.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus the necessary general-purpose hardware, and certainly also by special-purpose hardware including application-specific integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components, and the like. Generally, functions performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures implementing the same function can vary, such as analog circuits, digital circuits, or dedicated circuits. For the present application, however, a software implementation is usually preferable. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc of a computer, and the software product includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.

Claims (19)

1. A method of constructing a map, comprising:
extracting target features based on laser point cloud data, wherein the target features are laser points that are extracted from the laser point cloud data and that meet a preset condition, and each laser point comprises coordinates of the laser point and a reflection intensity of the laser point;
constructing a function that fits the target features, and obtaining a constraint of the function, wherein the function and the constraint constitute a master map;
constructing an occupancy grid map based on the laser point cloud data; and
establishing an index of the occupancy grid map on the master map.
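The four steps of claim 1 can be illustrated with a minimal sketch. All names are hypothetical, and the preset condition (an intensity threshold), the fitted model (a 2-D least-squares line), and the 0.5 m grid resolution are illustrative assumptions, not specified by the claims:

```python
# Hypothetical sketch of the claimed pipeline; condition, model, and
# resolution are assumptions for illustration only.

def extract_target_features(points, min_intensity=200):
    """Step 1: keep laser points meeting a preset condition.

    Each point is (x, y, z, intensity); the condition here is a simple
    reflection-intensity threshold."""
    return [p for p in points if p[3] >= min_intensity]

def fit_line_with_constraint(features):
    """Step 2: fit y = m*x + b to the features by least squares, and record
    the x-interval over which the fit holds (the 'constraint')."""
    xs = [p[0] for p in features]
    ys = [p[1] for p in features]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = mean_y - m * mean_x
    constraint = (min(xs), max(xs))  # value interval of the independent variable
    return {"model": (m, b), "constraint": constraint}

def build_occupancy_grid(points, resolution=0.5):
    """Step 3: rasterise all laser points into occupied grid cells."""
    grid = set()
    for x, y, _, _ in points:
        grid.add((int(x // resolution), int(y // resolution)))
    return grid

# Step 4: the master map (function + constraint) carries an index to the grid.
points = [(0.0, 0.1, 0.0, 250), (1.0, 1.1, 0.0, 250),
          (2.0, 1.9, 0.0, 250), (5.0, 9.0, 0.0, 50)]
features = extract_target_features(points)
master_map = fit_line_with_constraint(features)
master_map["grid_index"] = build_occupancy_grid(points)
```

Note how the low-intensity point at (5.0, 9.0) is excluded from the fitted function but still contributes an occupied cell, mirroring the split between the master map and the occupancy grid map in the claim.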
2. The method according to claim 1, wherein the constructing an occupancy grid map based on the laser point cloud data comprises:
constructing a first occupancy grid sub-map based on first laser point cloud data, wherein the first laser point cloud data belongs to any one or more frames of the laser point cloud data; and
splicing the constructed first occupancy grid sub-maps to obtain the occupancy grid map.
3. The method according to claim 2, wherein the establishing an index of the occupancy grid map on the master map comprises:
converting the center position of the constructed first occupancy grid sub-map into coordinate values in a Universal Transverse Mercator (UTM) coordinate system; and
adding the coordinate values to the master map as an index tag of the constructed first occupancy grid sub-map.
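Claims 2-3 describe stitching sub-maps and tagging the master map with each sub-map's center in UTM coordinates. A minimal sketch follows; the UTM projection itself is omitted (sub-map centers are assumed to be given as UTM easting/northing in meters via a known map origin), and all function names are hypothetical:

```python
# Hypothetical sub-map index; the latitude/longitude-to-UTM projection is
# deliberately omitted and replaced by a known UTM offset of the map origin.

def center_of(submap_cells, resolution=0.5):
    """Center of a sub-map, computed from its occupied cell indices."""
    xs = [c[0] for c in submap_cells]
    ys = [c[1] for c in submap_cells]
    return ((min(xs) + max(xs) + 1) / 2 * resolution,
            (min(ys) + max(ys) + 1) / 2 * resolution)

def add_index_tag(master_map, submap_id, submap_cells, utm_origin):
    """Tag the master map with the sub-map center expressed in UTM meters.

    utm_origin is the (easting, northing) of the local map origin."""
    cx, cy = center_of(submap_cells)
    tag = (round(utm_origin[0] + cx, 2), round(utm_origin[1] + cy, 2))
    master_map.setdefault("index", {})[tag] = submap_id
    return tag

master_map = {}
tag = add_index_tag(master_map, "submap-0", {(0, 0), (3, 3)},
                    (500000.0, 4000000.0))
```

Keying the index by UTM coordinates lets a localization module look up the nearest sub-map directly from a GNSS fix, without scanning the whole grid map.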
4. The method according to any one of claims 1-3, wherein the obtaining a constraint of the function comprises:
obtaining a value interval of an independent variable of the function;
or
obtaining a value of a target independent variable of the function, wherein the target independent variable comprises coordinates of a target laser point, and the target laser point belongs to the target features.
5. The method according to any one of claims 1-4, wherein the occupancy grid map comprises:
a height of an obstacle in a first grid and an average reflection intensity of the laser points falling within the first grid, wherein the first grid is any occupied grid in the occupancy grid map.
6. The method according to claim 5, wherein
the height of the obstacle in the first grid is stored as integer data;
and/or
the average reflection intensity of the laser points falling within the first grid is stored as integer data.
7. The method according to any one of claims 5-6, wherein when the obstacle in the first grid is a suspended obstacle, the height of the obstacle comprises:
a first height from the lower edge of the obstacle to the ground and a second height from the upper edge of the obstacle to the ground.
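Claims 5-7 describe the per-cell payload: an obstacle height and the mean reflection intensity, both stored as integers, with suspended obstacles recording two edge heights. A minimal sketch, with hypothetical field names and an assumed centimeter unit for the integer heights:

```python
# Hypothetical grid-cell payload; field names and the cm unit are
# illustrative assumptions matching the "integer data" storage of the claims.

def update_cell(cell, z_m, intensity):
    """Fold one laser point into a grid cell, exposing integer fields only."""
    cell["n"] = cell.get("n", 0) + 1
    # running intensity sum kept internally; the exposed mean is an integer
    cell["_sum_i"] = cell.get("_sum_i", 0) + intensity
    cell["mean_intensity"] = cell["_sum_i"] // cell["n"]
    # obstacle height: tallest return seen in this cell, in whole centimeters
    h_cm = int(round(z_m * 100))
    cell["height_cm"] = max(cell.get("height_cm", 0), h_cm)
    return cell

def set_suspended(cell, lower_m, upper_m):
    """For a suspended obstacle (e.g. a barrier arm or branch), store the
    heights of its lower and upper edges above the ground."""
    cell["lower_cm"] = int(round(lower_m * 100))
    cell["upper_cm"] = int(round(upper_m * 100))
    return cell

cell = {}
for z, i in [(0.40, 100), (0.55, 110), (0.52, 96)]:
    update_cell(cell, z, i)
```

Storing centimeter heights and intensities as small integers rather than floats roughly halves the per-cell footprint, which matters when grid maps cover large areas.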
8. The method according to any one of claims 1-7, wherein the target features comprise:
line features indicating that the laser points extracted from the laser point cloud data lie on the same straight line;
and/or
surface features indicating that the laser points extracted from the laser point cloud data lie on the same plane.
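The line/surface distinction of claim 8 reduces to geometric membership tests: a group of laser points is a line feature if all points are (near-)collinear, and a surface feature if all are (near-)coplanar. A minimal stdlib sketch, with an assumed tolerance:

```python
# Hypothetical feature tests; the tolerance value is an assumption.

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def is_line_feature(pts, tol=1e-6):
    """Collinear iff every point's offset from pts[0] is parallel to the
    first edge, i.e. the cross product is (near) zero."""
    d = _sub(pts[1], pts[0])
    for p in pts[2:]:
        c = _cross(d, _sub(p, pts[0]))
        if max(abs(v) for v in c) > tol:
            return False
    return True

def is_surface_feature(pts, tol=1e-6):
    """Coplanar iff every point's offset from pts[0] is orthogonal to the
    plane normal spanned by the first two edges."""
    n = _cross(_sub(pts[1], pts[0]), _sub(pts[2], pts[0]))
    for p in pts[3:]:
        e = _sub(p, pts[0])
        if abs(n[0] * e[0] + n[1] * e[1] + n[2] * e[2]) > tol:
            return False
    return True
```

In practice such exact tests would be replaced by a least-squares or RANSAC fit with a residual threshold, since real laser returns are noisy; the sketch only shows the geometric criterion the claim names.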
9. A computing device, comprising:
an extraction module, configured to extract target features based on laser point cloud data, wherein the target features are laser points that are extracted from the laser point cloud data and that meet a preset condition, and each laser point comprises coordinates of the laser point and a reflection intensity of the laser point;
a first construction module, configured to construct a function that fits the target features and obtain a constraint of the function, wherein the function and the constraint constitute a master map;
a second construction module, configured to construct an occupancy grid map based on the laser point cloud data; and
an index module, configured to establish an index of the occupancy grid map on the master map.
10. The device according to claim 9, wherein the second construction module is specifically configured to:
construct a first occupancy grid sub-map based on first laser point cloud data, wherein the first laser point cloud data belongs to any one or more frames of the laser point cloud data; and
splice the constructed first occupancy grid sub-maps to obtain the occupancy grid map.
11. The device according to claim 10, wherein the index module is specifically configured to:
convert the center position of the constructed first occupancy grid sub-map into coordinate values in a Universal Transverse Mercator (UTM) coordinate system; and
add the coordinate values to the master map as an index tag of the constructed first occupancy grid sub-map.
12. The device according to any one of claims 9-11, wherein the first construction module is specifically configured to:
obtain a value interval of an independent variable of the function;
or
obtain a value of a target independent variable of the function, wherein the target independent variable comprises coordinates of a target laser point, and the target laser point belongs to the target features.
13. The device according to any one of claims 9-12, wherein the occupancy grid map comprises:
a height of an obstacle in a first grid and an average reflection intensity of the laser points falling within the first grid, wherein the first grid is any occupied grid in the occupancy grid map.
14. The device according to claim 13, wherein
the height of the obstacle in the first grid is stored as integer data;
and/or
the average reflection intensity of the laser points falling within the first grid is stored as integer data.
15. The device according to any one of claims 13-14, wherein when the obstacle in the first grid is a suspended obstacle, the height of the obstacle comprises:
a first height from the lower edge of the obstacle to the ground and a second height from the upper edge of the obstacle to the ground.
16. The device according to any one of claims 9-15, wherein the target features comprise:
line features indicating that the laser points extracted from the laser point cloud data lie on the same straight line;
and/or
surface features indicating that the laser points extracted from the laser point cloud data lie on the same plane.
17. A computing device comprising a processor coupled to a memory, the memory storing program instructions that, when executed by the processor, implement the method of any of claims 1-8.
18. A chip system, characterized in that the chip system comprises a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute a computer program or instructions such that the method of any of claims 1-8 is performed.
19. A computer-readable storage medium comprising a program which, when run on a computer, causes the computer to perform the method of any one of claims 1-8.
CN202010960450.0A 2020-09-14 2020-09-14 Map construction method and computing device Pending CN114255275A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010960450.0A CN114255275A (en) 2020-09-14 2020-09-14 Map construction method and computing device
PCT/CN2021/116601 WO2022052881A1 (en) 2020-09-14 2021-09-06 Map construction method and computing device

Publications (1)

Publication Number Publication Date
CN114255275A true CN114255275A (en) 2022-03-29

Family

ID=80632632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010960450.0A Pending CN114255275A (en) 2020-09-14 2020-09-14 Map construction method and computing device

Country Status (2)

Country Link
CN (1) CN114255275A (en)
WO (1) WO2022052881A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965756B (en) * 2023-03-13 2023-06-06 安徽蔚来智驾科技有限公司 Map construction method, device, driving device and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
CN108171780A (en) * 2017-12-28 2018-06-15 电子科技大学 A kind of method that indoor true three-dimension map is built based on laser radar
CN108319655B (en) * 2017-12-29 2021-05-07 百度在线网络技术(北京)有限公司 Method and device for generating grid map
CN110274602A (en) * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司 Indoor map method for auto constructing and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117405130A (en) * 2023-12-08 2024-01-16 新石器中研(上海)科技有限公司 Target point cloud map acquisition method, electronic equipment and storage medium
CN117405130B (en) * 2023-12-08 2024-03-08 新石器中研(上海)科技有限公司 Target point cloud map acquisition method, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2022052881A1 (en) 2022-03-17

Similar Documents

Publication Publication Date Title
US20210262808A1 (en) Obstacle avoidance method and apparatus
CN110543814B (en) Traffic light identification method and device
EP4071661A1 (en) Automatic driving method, related device and computer-readable storage medium
WO2021103511A1 (en) Operational design domain (odd) determination method and apparatus and related device
WO2021000800A1 (en) Reasoning method for road drivable region and device
WO2021238306A1 (en) Method for processing laser point cloud and related device
US20220215639A1 (en) Data Presentation Method and Terminal Device
US20220019845A1 (en) Positioning Method and Apparatus
WO2021189210A1 (en) Vehicle lane changing method and related device
CN112639882A (en) Positioning method, device and system
WO2022142839A1 (en) Image processing method and apparatus, and intelligent vehicle
EP4307251A1 (en) Mapping method, vehicle, computer readable storage medium, and chip
WO2022052881A1 (en) Map construction method and computing device
JP2023534406A (en) Method and apparatus for detecting lane boundaries
CN112810603B (en) Positioning method and related product
EP4134769A1 (en) Method and apparatus for vehicle to pass through boom barrier
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN115398272A (en) Method and device for detecting passable area of vehicle
US20220309806A1 (en) Road structure detection method and apparatus
CN115056784B (en) Vehicle control method, device, vehicle, storage medium and chip
CN114764980B (en) Vehicle turning route planning method and device
WO2021159397A1 (en) Vehicle travelable region detection method and detection device
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115508841A (en) Road edge detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination