CN112988922A - Perception map construction method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112988922A
Authority
CN
China
Prior art keywords
region
area
map
coded
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911291831.8A
Other languages
Chinese (zh)
Inventor
胡荣东
文驰
谢林江
李敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Intelligent Driving Research Institute Co Ltd
Original Assignee
Changsha Intelligent Driving Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Intelligent Driving Research Institute Co Ltd filed Critical Changsha Intelligent Driving Research Institute Co Ltd
Priority to CN201911291831.8A priority Critical patent/CN112988922A/en
Publication of CN112988922A publication Critical patent/CN112988922A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Instructional Devices (AREA)
  • Navigation (AREA)

Abstract

The application relates to a perception map construction method and apparatus, a computer device, and a storage medium. The method comprises: obtaining a background map corresponding to an area to be mapped; obtaining road information in the background map and determining a region of interest corresponding to the road information; obtaining an area to be coded within the region of interest, the area to be coded comprising at least one of an elevation difference road section area, a traffic light indication area, a pedestrian crossing area, and a traffic guidance area; and coding the area to be coded to obtain a perception map. The map constituent elements are simplified and only the information of the areas that need to be perceived during automatic driving is retained, which greatly reduces the data volume and facilitates quick reading of the perception map during application. At least one of the elevation difference road section area, the traffic light indication area, the pedestrian crossing area, and the traffic guidance area within the region of interest is effectively marked and distinguished, simplifying the data processing involved in acquiring perception information.

Description

Perception map construction method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of automatic driving perception technology, and in particular, to a perception map construction method, apparatus, computer device, and storage medium.
Background
With the development of automatic driving technology, the perception system plays an increasingly important role in the automatic driving process. In an automatic driving perception system, sensors such as binocular cameras, lidar, and millimeter-wave radar require a predefined region of interest (ROI) to reduce false detection of obstacles outside the road; at the same time, early warning needs to be provided for areas such as traffic lights and zebra crossings.
However, in the conventional method, the prior information to be acquired, such as the region of interest, generally needs to be constructed in advance in the form of a high-precision map, and the high-precision map is usually generated by combining vehicle detection information with positioning information and is mainly oriented toward vehicle positioning services.
Disclosure of Invention
Therefore, in view of the technical problems such as the complex processing and large computational overhead involved in acquiring perception information, it is necessary to provide a perception map construction method, apparatus, computer device, and storage medium capable of simplifying the perception information acquisition process.
A perception map construction method comprises the following steps:
acquiring a background map corresponding to a region to be mapped;
acquiring road information in a background map, and determining an interested area corresponding to the road information;
acquiring a region to be coded in the region of interest, wherein the region to be coded comprises at least one of an elevation difference road section region, a traffic signal lamp indicating region, a pedestrian crossing region and a diversion region;
and coding the area to be coded to obtain a perception map.
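For concreteness only, the sketch below (Python, with assumed names such as PerceptionMap and illustrative code values that are not part of the claims) shows one way the coded perception map produced by these steps could be represented and queried.

```python
# Minimal sketch of a coded perception map as a grid of integer labels.
# All names and code values are illustrative assumptions.
import numpy as np

class PerceptionMap:
    def __init__(self, codes: np.ndarray, x_min: float, y_min: float, grid_size: float):
        self.codes = codes          # 2D uint8 array: 0 = non-ROI, >0 = coded region types
        self.x_min = x_min
        self.y_min = y_min
        self.grid_size = grid_size  # D_s, metres per grid cell

    def code_at(self, x: float, y: float) -> int:
        """Return the region code of the grid cell containing world point (x, y)."""
        i = int((x - self.x_min) / self.grid_size)
        j = int((y - self.y_min) / self.grid_size)
        if 0 <= i < self.codes.shape[0] and 0 <= j < self.codes.shape[1]:
            return int(self.codes[i, j])
        return 0  # outside the mapped area: treat as non-ROI

# Example: a 100 x 100 cell map at 0.2 m resolution, all non-ROI except one coded patch.
codes = np.zeros((100, 100), dtype=np.uint8)
codes[20:40, 30:60] = 1  # mark a region of interest
pm = PerceptionMap(codes, x_min=0.0, y_min=0.0, grid_size=0.2)
print(pm.code_at(5.0, 8.0))  # -> 1
```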
In one embodiment, the area to be encoded comprises an elevation difference road section area which is composed of a first road section carrying first elevation information and a second road section carrying second elevation information;
the coding processing of the region to be coded comprises the following steps:
identifying an overlapping area of the first road segment and the second road segment;
determining a first region to be coded corresponding to the overlapping area, and determining, according to the overlapping area, a second region to be coded corresponding to the part of the first road segment excluding the overlapping area and a third region to be coded corresponding to the part of the second road segment excluding the overlapping area;
and respectively carrying out coding processing on the first region to be coded, the second region to be coded and the third region to be coded.
In one embodiment, the road information includes lane lines; acquiring road information in a background map, and determining an interested area corresponding to the road information comprises the following steps:
identifying a lane line in a background map based on a lane line detection algorithm;
determining a marginal lane line in the plurality of lane lines according to the position relationship among the lane lines;
and determining the region of interest according to the edge lane line.
In one embodiment, acquiring the road information in the background map, and determining the region of interest corresponding to the road information includes:
acquiring road edge data in a background map;
and determining the region of interest according to the passable region formed by the road edge data.
In one embodiment, the obtaining of the background map corresponding to the region to be mapped includes:
acquiring point cloud data of an area to be mapped and positioning information of the point cloud data;
and projecting the point cloud data to an area to be mapped according to the positioning information to obtain a background map.
In one embodiment, the projecting the point cloud data to the area to be mapped according to the positioning information to obtain the background map comprises:
acquiring a grid map corresponding to a region to be mapped;
projecting the point cloud data to a corresponding grid in the grid map according to the positioning information;
calculating the average reflection intensity value of each grid according to the reflection intensity value corresponding to the point cloud data;
and constructing a background map according to the average reflection intensity value.
In one embodiment, the obtaining of the grid map corresponding to the region to be mapped includes:
acquiring a corresponding horizontal coordinate interval of a region to be mapped in a world coordinate system;
and rasterizing the horizontal coordinate interval according to preset raster parameters to obtain a raster image corresponding to the region to be mapped.
In one embodiment, constructing the background map based on the average reflected intensity value comprises:
determining the gray value of each grid according to the average reflection intensity value, wherein the first value range corresponding to the gray value of each grid is a proper subset of the gray value range;
constructing a background map according to the gray value of each grid;
the encoding processing is carried out on the interested region to obtain the perception map, and the method comprises the following steps:
determining a gray value of the region of interest, wherein the intersection of a second value range corresponding to the gray value of the region of interest and the first value range is empty;
performing binarization processing on each gray value according to a value range to which each gray value in the background map belongs to obtain an initial perception map, wherein the value range comprises a first value range and a second value range;
and coding the interested region in the initial perception map to obtain the perception map.
A perceptual mapping apparatus, the apparatus comprising:
the background map acquisition module is used for acquiring a background map corresponding to the region to be mapped;
the interesting area determining module is used for acquiring road information in the background map and determining an interesting area corresponding to the road information;
the to-be-coded area acquisition module is used for acquiring the area to be coded in the region of interest, wherein the area to be coded comprises at least one of an elevation difference road section area, a traffic light indication area, a pedestrian crossing area and a traffic guidance area;
and the coding processing module is used for coding the area to be coded to obtain the perception map.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a background map corresponding to a region to be mapped;
acquiring road information in a background map, and determining an interested area corresponding to the road information;
acquiring a region to be coded in the region of interest, wherein the region to be coded comprises at least one of an elevation difference road section region, a traffic signal lamp indicating region, a pedestrian crossing region and a diversion region;
and coding the area to be coded to obtain a perception map.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a background map corresponding to a region to be mapped;
acquiring road information in a background map, and determining an interested area corresponding to the road information;
acquiring a region to be coded in the region of interest, wherein the region to be coded comprises at least one of an elevation difference road section region, a traffic signal lamp indicating region, a pedestrian crossing region and a diversion region;
and coding the area to be coded to obtain a perception map.
According to the perception map construction method, apparatus, computer device, and storage medium, the background map corresponding to the area to be mapped is obtained, the region of interest in the background map is determined by detecting the lane lines in the background map, and data of other unnecessary areas are removed. The area to be coded within the region of interest is then obtained and coded, so that a perception map that can be used for perception based on the coding information is obtained, and the elevation difference road section area, the traffic light indication area, the pedestrian crossing area, and the traffic guidance area in the perception map are accurately distinguished. The perception map obtained through this construction process simplifies the map constituent elements and retains only the information of the areas that need to be perceived during automatic driving, which greatly reduces the data volume of the map and facilitates quick reading of the perception map during application.
Drawings
FIG. 1 is a diagram of an application scenario of a perceptual mapping method in one embodiment;
FIG. 2 is a flow diagram of a perceptual map construction method in one embodiment;
FIG. 3 is a flowchart illustrating a step of encoding a road section region with a difference in elevation in the perceptual map construction method according to an embodiment;
FIG. 4 is a flow chart illustrating a perceptual map construction method according to another embodiment;
FIG. 5 is a schematic diagram of a region of interest in a perceptual mapping method in one embodiment;
FIG. 6 is a flow chart illustrating a perceptual map construction method according to yet another embodiment;
FIG. 7 is a flow chart illustrating a perceptual mapping method according to yet another embodiment;
FIG. 8 is a diagram of a perception map obtained by a perception map construction method in one embodiment;
FIG. 9 is a block diagram of a perceptual mapping means in one embodiment;
FIG. 10 is a diagram illustrating the internal components of a computer device, according to one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The perception map construction method provided by the application can be applied to the application environment shown in fig. 1, in which the in-vehicle terminal 102 communicates with the server 104 through a network. The vehicle-mounted inertial navigation positioning system and the lidar of the in-vehicle terminal 102 scan back and forth within the area to be mapped to ensure that all areas that need to be perceived are scanned. The lidar point cloud data obtained by scanning is then sent to the server 104, and the inertial navigation positioning system sends the collected inertial navigation positioning data to the server 104. The server 104 determines the extent of the area to be mapped according to the inertial navigation positioning data and projects the lidar point cloud data to the corresponding positions of the area to be mapped to obtain a background map. The server 104 then acquires the road information in the background map, determines the region of interest corresponding to the road information, and acquires the area to be coded within the region of interest, where the area to be coded comprises at least one of an elevation difference road section area, a traffic light indication area, a pedestrian crossing area, and a traffic guidance area. The area to be coded is coded to obtain the perception map. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a perceptual mapping method is provided, which is exemplified by applying the method to the server in fig. 1, and includes the following steps S210 to S240.
S210, obtaining a background map corresponding to the region to be mapped.
The area to be mapped refers to an area for which a perception map needs to be constructed in advance according to the automatic driving range of the vehicle. The background map is a map constructed based on the collected point cloud data; it may be constructed in advance or in real time. The background map can be constructed using a vehicle-mounted inertial navigation positioning system and a lidar. The vehicle-mounted inertial navigation positioning system may be an inertial navigation positioning system installed on the vehicle; as the vehicle moves, it collects the vehicle's positioning data in the world coordinate system, including pose information with six degrees of freedom: X, Y, Z, roll angle, heading angle, and pitch angle. The point cloud data collected by the lidar can then be projected into the corresponding area to be mapped according to this positioning information. The lidar is a remote sensing device that uses a laser as the transmitting light source and photoelectric detection as the receiving means, and it collects point cloud data that include reflection intensity values. The vehicle-mounted lidar is mainly used to collect road-surface point cloud data such as lane lines and zebra crossings, as well as other point cloud data around the road surface. In one embodiment, the scanning distance of the lidar meets the requirement of the point cloud acquisition range of the road currently traveled by the vehicle; for example, the scanning distance of the lidar is 90 meters. The extent of the area to be mapped can be determined from the inertial navigation positioning data collected by the vehicle-mounted inertial navigation positioning system, and the point cloud data collected by the lidar is projected to the corresponding positions of the area to be mapped using the inertial navigation positioning data, thereby obtaining the background map.
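As a rough illustration of how the point cloud data and the 6-degree-of-freedom positioning information might be combined, the following Python sketch builds a world-frame transform from a pose and applies it to a scan; the rotation convention, function names, and array layout are assumptions made for illustration, not details taken from the application.

```python
# Sketch: transform lidar points from the vehicle frame into the world frame
# using a 6-DoF pose (x, y, z, roll, pitch, heading). Conventions are assumed.
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, heading):
    """Build a 4x4 homogeneous transform from a 6-DoF pose (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # heading-pitch-roll composition is an assumption
    T[:3, 3] = [x, y, z]
    return T

def project_to_world(points_vehicle: np.ndarray, pose) -> np.ndarray:
    """points_vehicle: (N, 4) array of x, y, z, intensity in the vehicle frame."""
    T = pose_to_matrix(*pose)
    xyz1 = np.hstack([points_vehicle[:, :3], np.ones((len(points_vehicle), 1))])
    world_xyz = (T @ xyz1.T).T[:, :3]
    return np.hstack([world_xyz, points_vehicle[:, 3:4]])  # keep the intensity column
```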
S220, obtaining the road information in the background map, and determining the region of interest corresponding to the road information.
The region of interest refers to an area in which obstacle analysis is required during automatic driving of the vehicle; specifically, it may be the lane range of the road surface. The road information may include one or both of lane lines and road edge data. Road edge data refer to the boundary lines on a road surface that separate the carriageway from sidewalks, green belts, medians, and other parts of the road. In an embodiment, the road edge data in the background map can be obtained by manual annotation, which is suitable for areas where lane lines are not obvious or do not exist. A lane line is a boundary line dividing lanes on a road surface and is used to keep the vehicle within its lane during driving so as to avoid collisions with other vehicles caused by crossing lanes. Lane lines in the background map may be identified based on a lane line detection algorithm; specifically, lane line detection may be implemented using algorithms such as SCNN, VPGNet, or LaneNet. In one embodiment, the data processing procedure of lane line detection may include: sequentially performing Gaussian denoising, Canny edge detection, and Hough transform processing on the background map to detect the lane lines in the background map; marking the detected lane lines; determining the edge lane lines among the lane lines according to the positional relationship between the lane lines; and finally determining the region of interest according to the edge lane lines and the road edge data.
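A minimal sketch of the Gaussian denoising, Canny edge detection, and Hough transform sequence mentioned above is given below using OpenCV; the threshold and parameter values are illustrative assumptions, not values taken from the application.

```python
# Sketch of the classical lane line detection chain on the background map image.
import cv2
import numpy as np

def detect_lane_lines(background_map_gray: np.ndarray):
    blurred = cv2.GaussianBlur(background_map_gray, (5, 5), 0)   # Gaussian denoising
    edges = cv2.Canny(blurred, 50, 150)                          # Canny edge detection
    # Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```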
And S230, acquiring a region to be coded in the region of interest, wherein the region to be coded comprises at least one of an elevation difference road section region, a traffic signal lamp indicating region, a pedestrian crossing region and a guiding region.
The elevation difference road section area refers to road sections that appear to intersect in a planar map but do not actually intersect because of the height difference between them. For example, the area formed by the road section on a bridge and the road section under the bridge is an elevation difference road section area. Traffic lights include motor vehicle lights, non-motor vehicle lights, crosswalk lights, direction indicator lights (arrow lights), lane lights, and flashing warning lights; specifically, the traffic light indication area includes the area that needs to respond to the traffic light, for example the stretch from 20 meters before the crosswalk up to the crosswalk. The pedestrian crossing area includes the area formed by zebra crossings on the road. The traffic guidance area refers to the area formed by guide lines, which mainly take the form of one or more white V-shaped or diagonal line areas arranged according to the layout of the intersection. It is understood that the area to be coded may include one or more of the elevation difference road section area, the traffic light indication area, the pedestrian crossing area, and the traffic guidance area, as determined by the identification result. In other embodiments, the area to be coded may also include other areas that require attention during automatic driving, which is not limited herein.
S240, coding the area to be coded to obtain a perception map.
The encoding process refers to a process of numbering different types of regions to be encoded. The encoding processing procedures of the different types of regions to be encoded may be different, for example, the high-level-difference road segment region may be encoded according to the overlapping region and the non-overlapping region, and for example, all pedestrian crossing regions in the region to be encoded may be encoded in a unified manner. By encoding the area to be encoded, the perception map carrying the encoded information can be obtained.
During automatic driving, the vehicle can quickly read the perception map and, based on the map and its coding information, quickly determine the areas in which obstacles need to be perceived. For example, the overlapping area of the on-bridge section and the under-bridge section is coded as 3, the remaining on-bridge section is coded as 2, and the remaining under-bridge section is coded as 1. When the vehicle travels on the bridge, only obstacle information in the areas coded 2 and 3 is perceived and other obstacle information is filtered out; when the vehicle is under the bridge, obstacle information in the areas coded 1 and 3 is perceived.
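The following sketch illustrates, under the bridge example above, how the coding information might be used at runtime to filter obstacle detections; it reuses the illustrative PerceptionMap structure from the earlier sketch, and all names are assumptions rather than the claimed implementation.

```python
# Sketch: keep only obstacles whose map cell code is relevant to the vehicle's
# current level. Codes follow the bridge example: 1 = under-bridge only,
# 2 = on-bridge only, 3 = shared/overlap area.
def relevant_codes(on_bridge: bool):
    return {2, 3} if on_bridge else {1, 3}

def filter_obstacles(obstacles, perception_map, on_bridge: bool):
    """obstacles: iterable of (x, y) world positions; perception_map: PerceptionMap."""
    keep = relevant_codes(on_bridge)
    return [obs for obs in obstacles if perception_map.code_at(*obs) in keep]
```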
According to the perception map construction method, the background map corresponding to the area to be mapped is obtained, the region of interest in the background map is determined by detecting the lane lines in the background map, and data of other unnecessary areas are removed. The area to be coded within the region of interest is then obtained and coded, so that a perception map that can be used for perception based on the coding information is obtained, and the elevation difference road section area, the traffic light indication area, the pedestrian crossing area, and the traffic guidance area in the perception map are accurately distinguished. The perception map obtained through this construction process simplifies the map constituent elements and retains only the information of the areas that need to be perceived during automatic driving, which greatly reduces the data volume of the map and facilitates quick reading of the perception map during application.
In one embodiment, as shown in fig. 3, the area to be encoded includes an elevation difference road segment area composed of a first road segment carrying first elevation information and a second road segment carrying second elevation information. The encoding process of the region to be encoded includes steps S310 to S330.
S310, an overlapping area of the first road segment and the second road segment is identified.
S320, determining a first region to be coded corresponding to the overlapping area, and determining, according to the overlapping area, a second region to be coded corresponding to the part of the first road segment excluding the overlapping area and a third region to be coded corresponding to the part of the second road segment excluding the overlapping area.
S330, respectively encoding the first region to be encoded, the second region to be encoded and the third region to be encoded.
Elevation refers to the distance from a point to the absolute datum along the plumb line. In one embodiment, taking an overpass as an example, the first road segment carrying the first elevation information may be the on-bridge road segment and the second road segment carrying the second elevation information may be the under-bridge road segment. The on-bridge and under-bridge road segments have an overlapping area with the same horizontal position but different elevation data. Because the map is a plan view, when the vehicle is under the bridge the road surface on the bridge above is not in the perception area, yet the on-bridge and under-bridge ranges cannot be distinguished in the map. To solve this problem, the overlapping area in which the first road segment and the second road segment have the same horizontal position information is first determined according to the horizontal position information of the two road segments; then, according to the extent of the overlapping area, the second region to be coded in the first road segment outside the overlapping area and the third region to be coded in the second road segment outside the overlapping area are determined. The second and third regions to be coded may be preset ranges centered on the overlapping area. For example, the 50-meter range in the first road segment extending from the overlapping area along the two extension directions of the first road segment is marked as the second region to be coded. It is understood that the 50-meter range is only used to explain this embodiment; in other embodiments, the extended range may take other values, which is not limited herein. As another example, when the two directions on the bridge are separated into two carriageways and the road under the bridge is a single undivided road, there are two overlapping areas between the two on-bridge carriageways and the single under-bridge road. The third region to be coded under the bridge is then handled along the two extension directions of the second road segment, and the area between the two overlapping areas also needs to be marked as the third region to be coded, so that all road segments within the elevation difference road section area are marked and none is omitted. By coding the elevation difference road section area, the corresponding region of interest can be determined according to the vehicle's current elevation data during application, avoiding data interference from the overlapping area.
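One possible way to realize this three-way labeling is sketched below using rasterized boolean masks for the two road segments; the mask representation and the specific code values are assumptions chosen for illustration.

```python
# Sketch: label an elevation difference area from two boolean grid masks,
# one per road segment. The mask representation is an assumption.
import numpy as np

def encode_elevation_difference(seg1_mask: np.ndarray, seg2_mask: np.ndarray) -> np.ndarray:
    """seg1_mask / seg2_mask: boolean grids covering the first / second road segment."""
    overlap = seg1_mask & seg2_mask               # first region to be coded
    seg1_only = seg1_mask & ~overlap              # second region: first segment minus overlap
    seg2_only = seg2_mask & ~overlap              # third region: second segment minus overlap
    codes = np.zeros(seg1_mask.shape, dtype=np.uint8)
    codes[seg2_only] = 1   # e.g. under-bridge part outside the overlap
    codes[seg1_only] = 2   # e.g. on-bridge part outside the overlap
    codes[overlap] = 3     # shared area, relevant at either elevation
    return codes
```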
In one embodiment, the road information includes lane lines; acquiring road information in a background map, and determining an interested area corresponding to the road information comprises the following steps: steps S410 to S430.
S410, identifying the lane lines in the background map based on a lane line detection algorithm.
And S420, determining the edge lane line in the plurality of lane lines according to the position relation among the lane lines.
And S430, determining the region of interest according to the edge lane line.
The lane line detection algorithm is an algorithm that determines the lane lines in the background map by analyzing the background map; specifically, it may be one of the lane line detection algorithms such as SCNN, VPGNet, or LaneNet. Through the lane line detection algorithm, the lane lines on the same road surface can be identified, since there may be multiple lanes, for example four or six lanes, on the same road surface. For most road sections, the lanes on the same road surface are parallel to one another, so the lane lines at the edge positions, namely the leftmost and rightmost lane lines, can be determined according to the positional relationship between the lanes. The passable range bounded by the edge lane lines is the region of interest.
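As an illustration, the edge lane lines can be picked from a set of roughly parallel detected line segments by their lateral position, as in the sketch below; the mean-x criterion and the (x1, y1, x2, y2) segment format (matching the Hough sketch above) are assumptions made for illustration.

```python
# Sketch: choose the leftmost and rightmost lane lines by mean lateral position.
# Assumes near-parallel lines given as (x1, y1, x2, y2) segments; criterion is illustrative.
def edge_lane_lines(lines):
    def lateral(line):
        x1, _, x2, _ = line
        return (x1 + x2) / 2.0          # mean x as a proxy for lateral offset
    leftmost = min(lines, key=lateral)
    rightmost = max(lines, key=lateral)
    return leftmost, rightmost

# The region of interest is then the band between the two returned lines.
```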
In one embodiment, acquiring the road information in the background map, and determining the region of interest corresponding to the road information includes: acquiring road edge data in a background map; and determining the region of interest according to the passable region formed by the road edge data.
During road construction and maintenance, not all roads are given lane lines. For such roads, the passable range can be determined by acquiring the road edge information in the background map, and the range bounded by the road edge data is the region of interest. In an embodiment, the road edge data may be manually annotated on the basis of the background map, or may be obtained by identifying references such as curbstones.
In one embodiment, the road information includes lane line and road edge data, and acquiring the road information in the background map and determining the region of interest corresponding to the road information includes: identifying the lane lines in the background map based on a lane line detection algorithm; determining the edge lane lines among the lane lines according to the positional relationship between the lane lines; determining road positions without edge lane lines and acquiring the road edge information at those positions; and determining the region of interest according to the edge lane lines and the road edge data.
For areas where edge lane lines are absent or not obvious, road edge data may be used in place of the edge lane lines. After the edge lane lines are determined, the road positions or road sections of the background map that contain no edge lane lines are identified. For these positions or sections, the road edge data can be acquired by manually annotating the road edge or by identifying references such as curbstones; the range bounded by the two identified edge lane lines and the acquired road edge data is then the region of interest. Based on the region of interest, a region-of-interest (ROI) map based on the background map can be constructed. In other embodiments, for areas where the lane lines are blurred or otherwise unclear, the region of interest may also be determined by manually marking information such as the road edge.
In one embodiment, the obvious lane lines are detected in the background map using a lane line detection algorithm, the range between the edge lane lines is taken as the ROI region, and the ROI region is manually annotated where the lane lines are not obvious. The gray value of the pixels corresponding to the ROI region is set to 255, yielding an ROI map based on the background map; as shown in fig. 5, the white region is the ROI region.
In one embodiment, as shown in fig. 6, the obtaining of the background map corresponding to the region to be mapped includes steps S610 to S620.
S610, point cloud data of the area to be mapped and positioning information of the point cloud data are obtained.
And S620, projecting the point cloud data to an area to be mapped according to the positioning information to obtain a background map.
The point cloud data can be obtained by scanning with the vehicle-mounted lidar, and the positioning information can be obtained from the vehicle-mounted inertial navigation positioning system of the same vehicle. The area to be mapped is the area formed by the scanning range of the vehicle-mounted inertial navigation positioning system of the vehicle. First, the maximum and minimum values of the area on the horizontal plane, namely the maximum and minimum values along the X axis and Y axis, X_max, X_min, Y_max, Y_min, are calculated using the inertial navigation positioning information. The point cloud data include a reflection intensity value P_intensity, and the positioning information of the point cloud data includes P_x, P_y, P_z. The point cloud data collected by the lidar are projected into the world coordinate system in combination with the positioning information of the vehicle-mounted inertial navigation positioning system, yielding the data of each point in the world coordinate system, P = (P_x, P_y, P_z, P_intensity). From the point cloud data and the positioning information, the scan data within the area to be mapped can be obtained, and an accurate background map can thus be constructed. In other embodiments, the positioning information includes attitude data for six degrees of freedom: the X axis, Y axis, Z axis, roll angle, heading angle, and pitch angle.
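The following sketch shows, under the same assumptions as the earlier projection sketch, how the horizontal extent X_min, X_max, Y_min, Y_max might be derived from the inertial navigation trajectory and how world-frame points from multiple scans could be gathered; it reuses the illustrative project_to_world function, and all names are assumptions.

```python
# Sketch: derive the horizontal extent of the area to be mapped from the
# inertial navigation trajectory and gather all world-frame points.
import numpy as np

def mapping_extent(trajectory_xy: np.ndarray):
    """trajectory_xy: (M, 2) array of vehicle X, Y positions in the world frame."""
    x_min, y_min = trajectory_xy.min(axis=0)
    x_max, y_max = trajectory_xy.max(axis=0)
    return x_min, x_max, y_min, y_max

def accumulate_world_points(scans, poses):
    """scans: list of (N_i, 4) vehicle-frame point arrays; poses: matching 6-DoF poses."""
    return np.vstack([project_to_world(pts, pose) for pts, pose in zip(scans, poses)])
```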
In an embodiment, as shown in fig. 7, the step S710 to the step S740 are included to project the point cloud data to the area to be mapped according to the positioning information to obtain the background map.
S710, obtaining a grid map corresponding to the region to be mapped.
S720, projecting the point cloud data to a corresponding grid in the grid map according to the positioning information.
And S730, calculating the average reflection intensity value of each grid according to the reflection intensity value corresponding to the point cloud data.
And S740, constructing a background map according to the average reflection intensity value.
In one embodiment, the horizontal extent of the area to be mapped along the X axis and Y axis is determined from the maximum and minimum values X_max, X_min, Y_max, Y_min as D_x = X_max - X_min and D_y = Y_max - Y_min. The horizontal area is rasterized, and the numbers of grids in the X-axis and Y-axis directions are determined from preset grid parameters such as the grid spacing D_s as m = D_x / D_s and n = D_y / D_s. The lidar point cloud data are then projected into the world coordinate system in combination with the positioning information to obtain the data of each point in the world coordinate system, P = (P_x, P_y, P_z, P_intensity), and the grid position of each point in the grid map is determined from the positioning information as i = (P_x - X_min) / D_s and j = (P_y - Y_min) / D_s. The reflection intensity value of the point is recorded for that grid as Intensity(i, j) = P_intensity together with a point count N(i, j) = 1. This projection is performed for all point clouds, the reflection intensity values and point counts of the points falling into the same grid are accumulated, and finally the total reflection intensity is divided by the number of points to obtain the average reflection intensity value of the grid. By rasterizing the area to be mapped and computing the average reflection intensity value of each grid, the number of parameters in the background map is reduced, the amount of computation in subsequent processing of the background map is decreased, and the processing efficiency is improved.
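A compact sketch of this rasterization and averaging step is given below, following D_x = X_max - X_min, i = (P_x - X_min) / D_s, and j = (P_y - Y_min) / D_s; the vectorized accumulation and the clipping at the grid edges are implementation assumptions.

```python
# Sketch: rasterize world-frame points and compute per-grid average reflection intensity.
import numpy as np

def average_intensity_grid(points_world: np.ndarray, x_min, x_max, y_min, y_max, d_s):
    """points_world: (N, 4) array of x, y, z, intensity in the world frame."""
    m = int(np.ceil((x_max - x_min) / d_s))   # number of grids along X
    n = int(np.ceil((y_max - y_min) / d_s))   # number of grids along Y
    total = np.zeros((m, n), dtype=np.float64)
    count = np.zeros((m, n), dtype=np.int64)
    i = np.clip(((points_world[:, 0] - x_min) / d_s).astype(int), 0, m - 1)
    j = np.clip(((points_world[:, 1] - y_min) / d_s).astype(int), 0, n - 1)
    np.add.at(total, (i, j), points_world[:, 3])   # accumulate intensity per grid
    np.add.at(count, (i, j), 1)                    # accumulate point count per grid
    avg = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    return avg, count
```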
In one embodiment, the obtaining of the grid map corresponding to the region to be mapped includes: and acquiring a corresponding horizontal coordinate interval of the region to be mapped in a world coordinate system. And rasterizing the horizontal coordinate interval according to preset raster parameters to obtain a raster image corresponding to the region to be mapped.
The world coordinate system is an absolute coordinate system; before a user coordinate system is established, the coordinates of all points on the screen are determined by the origin of this coordinate system. Based on the world coordinate system, all point cloud data and positioning data can be projected into the same coordinate system. The horizontal coordinate interval of the area to be mapped in the world coordinate system refers to the maximum and minimum values of the scanned area boundary along the X axis and Y axis in the world coordinate system, X_max, X_min, Y_max, Y_min. The preset grid parameter may refer to the grid spacing, such as 10 cm, 20 cm, or another distance, or may refer to the number of grids. The horizontal coordinate interval is rasterized according to the preset grid parameters, dividing the area to be mapped into a number of grids and yielding the grid map corresponding to the area to be mapped. Rasterizing the area to be mapped allows the data to be processed in aggregate, reducing the amount of subsequent computation and improving processing efficiency.
In one embodiment, constructing the background map based on the average reflected intensity value comprises: and determining the gray value of each grid according to the average reflection intensity value, wherein the first value range corresponding to the gray value of each grid is a proper subset of the gray value range. And constructing a background map according to the gray values of the grids. The encoding processing is carried out on the interested region to obtain the perception map, and the method comprises the following steps: and determining the gray value of the region of interest, wherein the intersection of the second value range corresponding to the gray value of the region of interest and the first value range is empty. And carrying out binarization processing on each gray value according to the value range to which each gray value in the background map belongs to obtain an initial perception map, wherein the value range comprises a first value range and a second value range, and coding the interested region in the initial perception map to obtain the perception map.
In this embodiment, taking the first value range as [0, 250] as an example, if the gray value corresponding to the average reflection intensity value of a grid is greater than 250, the gray value of that grid is set to 250, ensuring that the maximum gray value does not exceed 250. The average reflection intensity value of each grid thus corresponds to a gray value, and a background map composed of these gray values is obtained. The first value range and the second value range together form the complete gray value range [0, 255]. If the first value range is [0, 250], the second value range is (250, 255]. Since the gray values of all regions in the constructed background map fall within the first value range, after the region of interest in the background map has been determined, its gray value is set to a value selected from the second value range; for example, if the selected value is 255, the gray value of the region of interest is set to 255, which is displayed as white. Through this gray value processing, an ROI map based on the background map is obtained, in which the white region is the ROI region. A binary map is then obtained from the ROI map. A binary map is an image composed of only two different values, for example a black-and-white ROI map. The gray value of each region in the ROI map is first binarized according to whether it lies in the first or the second value range: for example, gray values greater than 250 are set to 1 and all others to 0. This yields an ROI-based binary map, which is the initial perception map; in it, 1 denotes the region of interest, i.e., the ROI region, and 0 denotes the non-interest region.
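The gray-value clamping, ROI marking, and binarization described above can be sketched as follows; the value choices follow the [0, 250] / 255 example, while the function name and the rounding step are assumptions.

```python
# Sketch: clamp per-grid gray values to [0, 250], mark the ROI with 255,
# then binarize by value range (>250 -> 1, else 0).
import numpy as np

def build_roi_and_binary_maps(avg_intensity: np.ndarray, roi_mask: np.ndarray):
    gray = np.clip(np.rint(avg_intensity), 0, 250).astype(np.uint8)  # first range [0, 250]
    roi_map = gray.copy()
    roi_map[roi_mask] = 255                     # second range (250, 255]: ROI shown as white
    binary = (roi_map > 250).astype(np.uint8)   # initial perception map: 1 = ROI, 0 = non-ROI
    return roi_map, binary
```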
In an application example, the vehicle-mounted inertial navigation positioning system and the lidar scan back and forth within the area to be mapped to ensure that all areas that need to be perceived are scanned, obtaining the positioning information of the vehicle within the area to be mapped and the lidar scan point clouds of that area. Specifically, the maximum and minimum values of the area to be mapped on the horizontal plane are first calculated from the inertial navigation positioning information, which includes the position of the vehicle in the world coordinate system, and the horizontal area formed by the corresponding X-axis and Y-axis ranges is determined. The horizontal area is then rasterized, and the numbers of grids in the X-axis and Y-axis directions are determined according to a preset grid spacing D_s. The lidar point cloud data are projected into the world coordinate system in combination with the positioning information to obtain the data of each point in the world coordinate system, P = (P_x, P_y, P_z, P_intensity). The grid position of each point in the grid map is then determined from the position information, the reflection intensity value corresponding to the point is recorded for that grid, and this processing is performed for all point clouds. Finally, the mean reflection intensity of each grid is calculated; if the mean reflection intensity of a grid is greater than 250, it is set to 250, ensuring that the maximum value does not exceed 250. The background map is then constructed from the gray value corresponding to each grid.
In the background map, the obvious lane lines are first detected using a lane line detection algorithm, and areas where the lane lines are not obvious or absent are handled by annotating road edge data. The edge lane lines are determined according to the positional relationship between the detected lane lines, and the ROI region is determined from the edge lane lines and the road edge data. At this point, the gray values of all regions in the background map lie within [0, 250]; the gray value of the range corresponding to the ROI region is then set to 255, yielding an ROI map based on the background map.
Binarization is then performed based on the ROI map: each gray value is binarized according to whether it is greater than 250 (values greater than 250 become 1, all others 0), yielding an ROI-based binary map. This binary map is the initial perception map, in which 1 denotes the ROI region and 0 the non-interest region. For positions with an elevation difference, such as above and below a bridge where the height under the bridge is h1 and the height on the bridge is h2, the road surface of the bridge above is not in the perception area when the vehicle is under the bridge, yet the ROI of the binary map cannot distinguish the on-bridge and under-bridge ranges. A coding mechanism is therefore introduced: as shown in fig. 8, the on-bridge and under-bridge ROI is divided into three coded areas, numbered 1, 2, and 3, each with a different gray value. The area coded 3 is the common area, i.e., an ROI region that requires attention regardless of the vehicle's height; the area coded 1 is the ROI region that requires attention at height h1, and the area coded 2 is the ROI region that requires attention at height h2. Meanwhile, if areas such as traffic lights need to be marked, the intersections that need to be perceived in the perception map can be coded, with the intersection area assigned the code corresponding to traffic lights and the like; zebra crossings, traffic guidance areas, and so on can be coded correspondingly. Finally, all areas to be coded are coded to obtain the perception map.
It should be understood that, although the steps in the flowcharts of FIGS. 2-4 and 6-7 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-4 and 6-7 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of their execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a perception map construction apparatus including: a background map obtaining module 910, a region of interest determining module 920, a region to be encoded obtaining module 930, and an encoding processing module 940. Wherein:
the background map obtaining module 910 is configured to obtain a background map corresponding to the area to be mapped.
The region of interest determining module 920 is configured to obtain road information in the background map, and determine a region of interest corresponding to the road information.
The to-be-coded area obtaining module 930 is configured to obtain an area to be coded in the area of interest, where the area to be coded includes at least one of an elevation difference road section area, a traffic light indication area, a pedestrian crossing area, and a traffic guiding area.
And the encoding processing module 940 is configured to perform encoding processing on the region to be encoded to obtain the perception map.
In one embodiment, the area to be coded includes an elevation difference road section area composed of a first road segment carrying first elevation information and a second road segment carrying second elevation information. The encoding processing module is further configured to identify the overlapping area of the first road segment and the second road segment; determine a first region to be coded corresponding to the overlapping area; determine, according to the overlapping area, a second region to be coded corresponding to the part of the first road segment excluding the overlapping area and a third region to be coded corresponding to the part of the second road segment excluding the overlapping area; and code the first, second, and third regions to be coded respectively.
In one embodiment, the road information includes lane lines; the interesting region determining module is also used for identifying the lane lines in the background map based on a lane line detection algorithm; determining a marginal lane line in the plurality of lane lines according to the position relationship among the lane lines; and determining the region of interest according to the edge lane line.
In one embodiment, the region of interest determining module is further configured to obtain road edge data in the background map and to determine the region of interest according to the passable region formed by the road edge data.
In one embodiment, the background map obtaining module is further configured to obtain point cloud data of the area to be mapped and location information of the point cloud data. And projecting the point cloud data to an area to be mapped according to the positioning information to obtain a background map.
In an embodiment, the background map obtaining module is further configured to obtain a grid map corresponding to the region to be mapped. And projecting the point cloud data to a corresponding grid in the grid map according to the positioning information. And calculating the average reflection intensity value of each grid according to the reflection intensity value corresponding to the point cloud data. And constructing a background map according to the average reflection intensity value.
In an embodiment, the background map obtaining module is further configured to obtain a corresponding horizontal coordinate interval of the region to be mapped in a world coordinate system. And rasterizing the horizontal coordinate interval according to preset raster parameters to obtain a raster image corresponding to the region to be mapped.
In one embodiment, the background map obtaining module is further configured to construct the background map according to the average reflection intensity value, including: and determining the gray value of each grid according to the average reflection intensity value, wherein the first value range corresponding to the gray value of each grid is a proper subset of the gray value range. And constructing a background map according to the gray values of the grids.
The perception map construction device is further used for determining a gray value of the region of interest, wherein an intersection of a second value range corresponding to the gray value of the region of interest and the first value range is empty. And carrying out binarization processing on each gray value according to the value range to which each gray value in the background map belongs to obtain an initial perception map, wherein the value range comprises a first value range and a second value range, and coding the interested region in the initial perception map to obtain the perception map.
According to the perception map construction device, the background map corresponding to the area to be mapped is obtained, the region of interest in the background map is determined by detecting the lane lines in the background map, and data of other unnecessary areas are removed. The area to be coded within the region of interest is then obtained and coded, so that a perception map that can be used for perception based on the coding information is obtained, and the elevation difference road section area, the traffic light indication area, the pedestrian crossing area, and the traffic guidance area in the perception map are accurately distinguished. The perception map obtained through this construction process simplifies the map constituent elements and retains only the information of the areas that need to be perceived during automatic driving, which greatly reduces the data volume of the map and facilitates quick reading of the perception map during application.
For specific limitations of the perception map construction device, reference may be made to the above limitations of the perception map construction method, which are not described herein again. The various modules in the perceptual mapping means described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is for storing perceptual map building data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a perceptual mapping method.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program:
acquiring a background map corresponding to a region to be mapped;
acquiring road information in a background map, and determining an interested area corresponding to the road information;
acquiring a region to be coded in the region of interest, wherein the region to be coded comprises at least one of an elevation difference road section region, a traffic signal lamp indicating region, a pedestrian crossing region and a diversion region;
and coding the area to be coded to obtain a perception map.
In one embodiment, the area to be encoded includes an elevation difference road segment area composed of a first road segment carrying first elevation information and a second road segment carrying second elevation information. The processor, when executing the computer program, further performs the steps of:
identifying an overlapping area of the first road segment and the second road segment;
determining a first region to be coded corresponding to the overlapping area, and determining, according to the overlapping area, a second region to be coded corresponding to the part of the first road segment excluding the overlapping area and a third region to be coded corresponding to the part of the second road segment excluding the overlapping area;
and respectively carrying out coding processing on the first region to be coded, the second region to be coded and the third region to be coded.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
identifying a lane line in a background map based on a lane line detection algorithm;
determining a marginal lane line in the plurality of lane lines according to the position relationship among the lane lines;
and determining the region of interest according to the edge lane line.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring road edge data in a background map;
and determining the region of interest according to the passable region formed by the road edge data.
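For comparison with the lane line variant above, a similarly minimal sketch is given below, assuming the road edge data have already been extracted as an ordered loop of boundary points enclosing the passable area.

import numpy as np
import cv2

def roi_from_road_edges(edge_points, map_shape):
    """Rasterize the passable area enclosed by ordered road-edge points as the ROI mask."""
    roi = np.zeros(map_shape, dtype=np.uint8)
    cv2.fillPoly(roi, [np.asarray(edge_points, dtype=np.int32)], 255)
    return roi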
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring point cloud data of an area to be mapped and positioning information of the point cloud data;
and projecting the point cloud data to an area to be mapped according to the positioning information to obtain a background map.
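A hedged sketch of the projection step follows, assuming the positioning information of each point cloud frame is available as a rotation matrix and a translation vector of the sensor pose in the world frame; the exact form of the positioning information is not fixed by this embodiment.

import numpy as np

def project_frame_to_world(points_sensor, rotation, translation):
    """Transform one frame of point cloud data from the sensor frame to the world frame.

    points_sensor: (N, 3) array of points in the sensor coordinate system.
    rotation: (3, 3) rotation matrix of the sensor pose.
    translation: (3,) translation vector of the sensor pose.
    """
    return points_sensor @ rotation.T + translation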
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a grid map corresponding to a region to be mapped;
projecting the point cloud data to a corresponding grid in the grid map according to the positioning information;
calculating the average reflection intensity value of each grid according to the reflection intensity value corresponding to the point cloud data;
and constructing a background map according to the average reflection intensity value.
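One possible realization of the per-grid averaging is sketched below, again only as an illustration; it assumes each point has already been assigned to a grid cell (row, col) and carries a reflection intensity value.

import numpy as np

def average_reflection_intensity(grid_shape, cells, intensities):
    """Average the reflection intensity of the points that fall into each grid.

    cells: (N, 2) integer array of (row, col) grid indices, one per point.
    intensities: (N,) reflection intensity values of the points.
    """
    sums = np.zeros(grid_shape, dtype=np.float64)
    counts = np.zeros(grid_shape, dtype=np.int64)
    np.add.at(sums, (cells[:, 0], cells[:, 1]), intensities)
    np.add.at(counts, (cells[:, 0], cells[:, 1]), 1)
    return sums / np.maximum(counts, 1)   # grids with no points keep an average of 0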
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a corresponding horizontal coordinate interval of a region to be mapped in a world coordinate system;
and rasterizing the horizontal coordinate interval according to preset raster parameters to obtain a raster image corresponding to the region to be mapped.
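The rasterization itself reduces to mapping the horizontal coordinate interval onto integer grid indices. The sketch below uses a single cell-size value as the preset raster parameter; other parameterizations are of course possible.

import math

def grid_size(x_min, x_max, y_min, y_max, cell_size):
    """Number of rows and columns needed to cover the horizontal coordinate interval."""
    rows = math.ceil((y_max - y_min) / cell_size)
    cols = math.ceil((x_max - x_min) / cell_size)
    return rows, cols

def world_to_cell(x, y, x_min, y_min, cell_size):
    """Map a world coordinate inside the interval to its (row, col) grid index."""
    return int((y - y_min) // cell_size), int((x - x_min) // cell_size)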
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the gray value of each grid according to the average reflection intensity value, wherein the first value range corresponding to the gray value of each grid is a proper subset of the gray value range;
constructing a background map according to the gray value of each grid;
determining a gray value of the region of interest, wherein the intersection of a second value range corresponding to the gray value of the region of interest and the first value range is empty;
performing binarization processing on each gray value according to the value range to which each gray value in the background map belongs, so as to obtain an initial perception map, wherein the value range comprises the first value range and the second value range;
and coding the region of interest in the initial perception map to obtain the perception map.
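Tying the last three steps together, the following sketch assumes that grid gray values are scaled into [0, 200] (the first value range) while the region of interest is marked with 255 (the second value range, disjoint from the first); both ranges and the ROI code are illustrative placeholders.

import numpy as np

GRID_RANGE_MAX = 200   # first value range: [0, 200], a proper subset of [0, 255]
ROI_GRAY = 255         # second value range: {255}, disjoint from the first range

def build_perception_map(mean_intensity, roi_mask, roi_code=1):
    """Binarize the background map by value range and code the region of interest.

    mean_intensity: per-grid average reflection intensity.
    roi_mask: boolean raster marking the region of interest.
    """
    # Gray value of each grid, scaled into the first value range.
    peak = max(float(mean_intensity.max()), 1e-6)
    background = (mean_intensity / peak * GRID_RANGE_MAX).astype(np.uint8)
    background[roi_mask] = ROI_GRAY

    # Binarization by value range: first range -> 0, second range -> roi_code.
    return np.where(background == ROI_GRAY, roi_code, 0).astype(np.uint8)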
The computer device implementing the above perception map construction method acquires the background map corresponding to the region to be mapped, determines the region of interest in the background map by detecting the lane lines in the background map and discards the data of other unnecessary regions, acquires the region to be coded within the region of interest, and codes the region to be coded, thereby obtaining a perception map that can be used for perception on the basis of the coded information and in which the elevation difference road segment region, the traffic signal light indication region, the pedestrian crossing region and the diversion region are accurately distinguished. On the one hand, the perception map obtained through this construction process simplifies the constituent elements of the map: only the information of the areas that need to be perceived during automatic driving is retained, which greatly reduces the data volume of the map and facilitates fast reading of the perception map in application.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the following steps:
acquiring a background map corresponding to a region to be mapped;
acquiring road information in the background map, and determining a region of interest corresponding to the road information;
acquiring a region to be coded in the region of interest, wherein the region to be coded comprises at least one of an elevation difference road section region, a traffic signal lamp indicating region, a pedestrian crossing region and a diversion region;
and coding the area to be coded to obtain a perception map.
In one embodiment, the area to be encoded includes an elevation difference road segment area composed of a first road segment carrying first elevation information and a second road segment carrying second elevation information. The computer program, when executed by the processor, further implements the following steps:
identifying an overlapping area of the first road segment and the second road segment;
determining a first region to be coded corresponding to the overlapping region, and determining, according to the overlapping region, a second region to be coded corresponding to the first road segment with the overlapping region removed and a third region to be coded corresponding to the second road segment with the overlapping region removed;
and respectively carrying out coding processing on the first region to be coded, the second region to be coded and the third region to be coded.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring road edge data in a background map;
and determining the region of interest according to the passable region formed by the road edge data.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring point cloud data of an area to be mapped and positioning information of the point cloud data;
and projecting the point cloud data to an area to be mapped according to the positioning information to obtain a background map.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a grid map corresponding to a region to be mapped;
projecting the point cloud data to a corresponding grid in the grid map according to the positioning information;
calculating the average reflection intensity value of each grid according to the reflection intensity value corresponding to the point cloud data;
and constructing a background map according to the average reflection intensity value.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a corresponding horizontal coordinate interval of a region to be mapped in a world coordinate system;
and rasterizing the horizontal coordinate interval according to preset raster parameters to obtain a raster image corresponding to the region to be mapped.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the gray value of each grid according to the average reflection intensity value, wherein the first value range corresponding to the gray value of each grid is a proper subset of the gray value range;
constructing a background map according to the gray value of each grid;
determining a gray value of the region of interest, wherein the intersection of a second value range corresponding to the gray value of the region of interest and the first value range is empty;
performing binarization processing on each gray value according to the value range to which each gray value in the background map belongs, so as to obtain an initial perception map, wherein the value range comprises the first value range and the second value range;
and coding the region of interest in the initial perception map to obtain the perception map.
The computer-readable storage medium implementing the above perception map construction method acquires the background map corresponding to the region to be mapped, determines the region of interest in the background map by detecting the lane lines in the background map and discards the data of other unnecessary regions, acquires the region to be coded within the region of interest, and codes the region to be coded, thereby obtaining a perception map that can be used for perception on the basis of the coded information and in which the elevation difference road segment region, the traffic signal light indication region, the pedestrian crossing region and the diversion region are accurately distinguished. On the one hand, the perception map obtained through this construction process simplifies the constituent elements of the map: only the information of the areas that need to be perceived during automatic driving is retained, which greatly reduces the data volume of the map and facilitates fast reading of the perception map in application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above examples only express several embodiments of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A perceptual map construction method, the method comprising:
acquiring a background map corresponding to a region to be mapped;
acquiring road information in the background map, and determining a region of interest corresponding to the road information;
acquiring a region to be coded in the region of interest, wherein the region to be coded comprises at least one of an elevation difference road section region, a traffic signal lamp indicating region, a pedestrian crossing region and a diversion region;
and coding the area to be coded to obtain a perception map.
2. The method according to claim 1, wherein the area to be encoded comprises an elevation difference road segment area consisting of a first road segment carrying first elevation information and a second road segment carrying second elevation information;
the encoding processing of the region to be encoded comprises:
identifying an overlap region of the first road segment and the second road segment;
determining a first region to be coded corresponding to the overlapping region, and determining, according to the overlapping region, a second region to be coded corresponding to the first road segment with the overlapping region removed and a third region to be coded corresponding to the second road segment with the overlapping region removed;
and respectively carrying out coding processing on the first region to be coded, the second region to be coded and the third region to be coded.
3. The method of claim 1, wherein the road information comprises a lane line; the obtaining of the road information in the background map and the determining of the region of interest corresponding to the road information include:
identifying lane lines in the background map based on a lane line detection algorithm;
determining the edge lane lines among the plurality of lane lines according to the positional relationship among the lane lines;
and determining a region of interest according to the edge lane lines.
4. The method according to claim 1, wherein the obtaining of the road information in the background map and the determining of the region of interest corresponding to the road information comprise:
acquiring road edge data in the background map;
and determining a region of interest according to the passable area formed by the road edge data.
5. The method according to claim 1, wherein the obtaining of the background map corresponding to the region to be mapped comprises:
acquiring point cloud data of an area to be mapped and positioning information of the point cloud data;
and projecting the point cloud data to the area to be mapped according to the positioning information to obtain a background map.
6. The method of claim 5, wherein the projecting the point cloud data to the area to be mapped according to the positioning information to obtain a background map comprises:
acquiring a grid map corresponding to the region to be mapped;
projecting the point cloud data to a corresponding grid in the grid map according to the positioning information;
calculating the average reflection intensity value of each grid according to the reflection intensity value corresponding to the point cloud data;
and constructing a background map according to the average reflection intensity value.
7. The method according to claim 6, wherein the obtaining of the grid map corresponding to the region to be mapped comprises:
acquiring a corresponding horizontal coordinate interval of the region to be mapped in a world coordinate system;
and rasterizing the horizontal coordinate interval according to preset raster parameters to obtain a raster image corresponding to the region to be mapped.
8. The method of claim 6, wherein constructing a background map from the average reflected intensity values comprises:
determining the gray value of each grid according to the average reflection intensity value, wherein a first value range corresponding to the gray value of each grid is a proper subset of the gray value range;
constructing a background map according to the gray value of each grid;
the encoding processing of the region of interest to obtain the perception map includes:
determining a gray value of the region of interest, wherein the intersection of a second value range corresponding to the gray value of the region of interest and the first value range is empty;
performing binarization processing on each gray value according to the value range to which each gray value in the background map belongs, so as to obtain an initial perception map, wherein the value range comprises the first value range and the second value range;
and coding the region of interest in the initial perception map to obtain the perception map.
9. A perceptual map building apparatus, the apparatus comprising:
the background map acquisition module is used for acquiring a background map corresponding to the region to be mapped;
the region-of-interest determining module is used for acquiring lane line and road edge data in the background map, and determining a region of interest according to the lane line and road edge data;
the to-be-coded area acquiring module is used for acquiring an area to be coded in the region of interest, wherein the area to be coded comprises at least one of an elevation difference road section area, a traffic signal lamp indicating area, a pedestrian crossing area and a diversion area;
and the coding processing module is used for coding the area to be coded to obtain the perception map.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN201911291831.8A 2019-12-16 2019-12-16 Perception map construction method and device, computer equipment and storage medium Pending CN112988922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911291831.8A CN112988922A (en) 2019-12-16 2019-12-16 Perception map construction method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911291831.8A CN112988922A (en) 2019-12-16 2019-12-16 Perception map construction method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112988922A true CN112988922A (en) 2021-06-18

Family

ID=76343093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911291831.8A Pending CN112988922A (en) 2019-12-16 2019-12-16 Perception map construction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112988922A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008115175A1 (en) * 2007-03-19 2008-09-25 Richard Alan Altes Beam design for synthetic aperture position/velocity estimation
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
CN108981726A (en) * 2018-06-09 2018-12-11 安徽宇锋智能科技有限公司 Unmanned vehicle semanteme Map building and building application method based on perceptual positioning monitoring
CN108985230A (en) * 2018-07-17 2018-12-11 深圳市易成自动驾驶技术有限公司 Method for detecting lane lines, device and computer readable storage medium
CN110502973A (en) * 2019-07-05 2019-11-26 同济大学 A kind of roadmarking automation extraction and recognition methods based on vehicle-mounted laser point cloud
CN110470311A (en) * 2019-07-08 2019-11-19 浙江吉利汽车研究院有限公司 A kind of ground drawing generating method, device and computer storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113587937A (en) * 2021-06-29 2021-11-02 阿波罗智联(北京)科技有限公司 Vehicle positioning method and device, electronic equipment and storage medium
WO2023040437A1 (en) * 2021-09-18 2023-03-23 北京京东乾石科技有限公司 Curbstone determination method and apparatus, and device and storage medium
CN114228714A (en) * 2022-02-28 2022-03-25 北京清研宏达信息科技有限公司 Bus longitudinal automatic driving control method and control system for BRT
CN114228714B (en) * 2022-02-28 2022-05-27 北京清研宏达信息科技有限公司 Bus longitudinal automatic driving control method and control system for BRT

Similar Documents

Publication Publication Date Title
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN109470254B (en) Map lane line generation method, device, system and storage medium
CN109791052B (en) Method and system for classifying data points of point cloud by using digital map
CN111291676B (en) Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
US9652980B2 (en) Enhanced clear path detection in the presence of traffic infrastructure indicator
CN110210280B (en) Beyond-visual-range sensing method, beyond-visual-range sensing system, terminal and storage medium
US8611585B2 (en) Clear path detection using patch approach
US8699754B2 (en) Clear path detection through road modeling
RU2571871C2 (en) Method of determining road boundaries, shape and position of objects on road, and device therefor
CN112988922A (en) Perception map construction method and device, computer equipment and storage medium
JPWO2007083494A1 (en) Graphic recognition apparatus, graphic recognition method, and graphic recognition program
US10836356B2 (en) Sensor dirtiness detection
CN112740225B (en) Method and device for determining road surface elements
CN111091037A (en) Method and device for determining driving information
US8520952B2 (en) System and method for defining a search window
JPWO2018180081A1 (en) Degraded feature identifying apparatus, degraded feature identifying method, degraded feature identifying program, and computer-readable recording medium recording the degraded feature identifying program
CN117315024A (en) Remote target positioning method and device and electronic equipment
KR102316818B1 (en) Method and apparatus of updating road network
US11555928B2 (en) Three-dimensional object detection with ground removal intelligence
Tarel et al. 3d road environment modeling applied to visibility mapping: an experimental comparison
Eckelmann et al. Empirical Evaluation of a Novel Lane Marking Type for Camera and LiDAR Lane Detection.
KR102373733B1 (en) Positioning system and method for operating a positioning system for a mobile unit
US20230266469A1 (en) System and method for detecting road intersection on point cloud height map
JP2022121579A (en) Data structure for map data
CN116935344A (en) Road boundary polygonal contour construction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618