CN115661394A - Method for constructing lane line map, computer device and storage medium
Publication number: CN115661394A (application CN202211670682.8A); legal status: Pending
Abstract
The invention relates to the technical field of automatic driving, and in particular to a method for constructing a lane line map, a computer device and a storage medium, aiming to improve the accuracy of lane line maps. The method acquires point cloud frames and image frames respectively collected by a laser radar and a camera on a vehicle, colors the point cloud frames according to the color information of the image frames, and stitches the colored point cloud frames to obtain a global point cloud map; lane line vector identification is then performed by a lane line vector identification model according to the color information, laser reflection intensity and height of the point cloud on the global point cloud map to obtain lane line vector information; finally, a lane line map is established according to the lane line vector information. In this way, the accuracy of lane line vector identification can be improved, overcoming the drawback in the prior art that accurate lane line vector information cannot be obtained using a laser radar or a camera alone.
Description
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method for constructing a lane line map, computer equipment and a storage medium.
Background
In order to ensure the safety and reliability of automatic driving, a vehicle generally needs to perform automatic driving control using a lane line map labeled with lane line vector information, and the accuracy of such a lane line map can only be guaranteed if accurate lane line vector information is detected when the map is established.
When a camera is used to detect lane lines, detection relies mainly on color information such as RGB. Where the lane line patterns are complex and largely occluded (for example, in environments requiring low-speed driving such as parking lots and highway service areas), a large amount of noise is present, the detection accuracy of the lane lines drops, and accurate lane line vector information cannot be obtained. When a laser radar is used to detect lane lines, detection relies mainly on the laser reflection intensity of the surrounding environment. Where the laser reflection intensity of the lane line differs little from that of the surrounding ground (for example, when the lane line is worn or the ground is cement), the lane line cannot be detected accurately from the reflection intensity, and accurate lane line vector information likewise cannot be obtained.
Accordingly, there is a need in the art for a new solution to the above-mentioned problems.
Disclosure of Invention
The present invention has been made to overcome the above drawbacks, namely to provide a method of constructing a lane line map, a computer device, and a storage medium that solve, or at least partially solve, the technical problem of improving the accuracy of the lane line map.
In a first aspect, a method for constructing a lane line map is provided, the method comprising:
acquiring a point cloud frame and an image frame which are respectively acquired by a laser radar and a camera on a vehicle;
coloring the point cloud frames according to the color information of the image frames;
stitching the colored point cloud frames to obtain a global point cloud map;
carrying out lane line vector identification through a lane line vector identification model according to the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map to obtain lane line vector information;
and establishing a lane line map according to the lane line vector information.
In a technical solution of the above method for constructing a lane line map, "identifying a lane line vector according to a lane line vector identification model and color information, laser reflection intensity, and height of a point cloud on the global point cloud map to obtain lane line vector information" specifically includes:
respectively generating a color BEV image, an intensity BEV image and a height BEV image according to the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map;
and carrying out lane line vector identification on the color BEV image, the intensity BEV image and the height BEV image through a lane line vector identification model to obtain lane line vector information.
In a technical solution of the above method for constructing a lane line map, "generating a color BEV image, an intensity BEV image, and a height BEV image according to color information, laser reflection intensity, and height of a point cloud on the global point cloud map" specifically includes:
segmenting the global point cloud map into a plurality of continuous map blocks;
and respectively generating a color BEV image, an intensity BEV image and a height BEV image of each map block according to the color information, the laser reflection intensity and the height of the point cloud on each map block.
In one technical solution of the above method for constructing a lane line map, the step of "dividing the global point cloud map into a plurality of continuous map blocks" specifically includes:
acquiring a running track of a vehicle on the global point cloud map;
and cutting the global point cloud map into a plurality of continuous map blocks along the driving track.
In one technical solution of the above method for constructing a lane line map, the step of "dividing the global point cloud map into a plurality of continuous map blocks along the driving track" specifically includes:
acquiring a plurality of track points from the driving track according to a preset distance;
taking each track point as a center point, respectively defining a rectangular segmentation region aligned parallel to the tangential direction of the driving track at that track point;
and segmenting the global point cloud map according to each of the defined rectangular segmentation regions to obtain a plurality of continuous map blocks.
In one technical solution of the above method for constructing a lane line map, before the step of "obtaining a plurality of track points from the driving track according to a preset distance", the method further includes determining the distance value of the preset distance and the size of the rectangular segmentation region by:
acquiring constraint conditions of preset map blocks;
respectively determining the distance value of the preset distance and the size of the rectangular segmentation region according to the constraint condition of the preset map block;
wherein the preset map block constraint conditions include:
the method comprises the steps that an overlapping area exists between adjacent map blocks, the area range of the overlapping area is larger than a preset range threshold value, and metric distances corresponding to pixels on a color BEV image, an intensity BEV image and a height BEV image generated according to the map blocks are smaller than a preset distance threshold value.
In a technical solution of the above method for constructing a lane line map, "performing lane line vector recognition on the color BEV image, the intensity BEV image, and the height BEV image by using a lane line vector recognition model to obtain lane line vector information" specifically includes:
acquiring a color BEV image, an intensity BEV image and a height BEV image of each map block;
and respectively carrying out lane line vector identification on the color BEV image, the intensity BEV image and the height BEV image of each map block through a lane line vector identification model to obtain lane line vector information of each map block.
In one technical solution of the above method for constructing a lane line map, "establishing a lane line map according to the lane line vector information" specifically includes:
respectively converting the lane line vector information of each map block into a global coordinate system;
fusing lane line vector information of each map block in a global coordinate system to obtain final lane line vector information;
and establishing a lane line map according to the final lane line vector information.
In one technical solution of the above method for constructing a lane line map, "performing, by using a lane line vector identification model, lane line vector identification on the color BEV image, the intensity BEV image, and the height BEV image to obtain lane line vector information" further includes:
respectively extracting first image features of the intensity BEV image and the height BEV image through a first image feature extraction network in a lane line vector identification model;
extracting a second image feature of the color BEV image through a second image feature extraction network in the lane line vector identification model;
performing feature fusion on the first image features and the second image features through a feature fusion network in a lane line vector recognition model to obtain fusion features;
and carrying out lane line vector identification on the fusion features through an identification network in the lane line vector identification model to obtain lane line vector information.
In a technical solution of the above method for constructing a lane line map, the step of "coloring the point cloud frames according to the color information of the image frames" includes coloring each point cloud frame respectively in the following manner:
acquiring a nearest neighbor image frame closest to the current point cloud frame in time distance according to the time stamp of each image frame and the time stamp of the current point cloud frame;
projecting the point clouds on the current point cloud frame to the nearest neighbor image frame to obtain a projection point of each point cloud on the current point cloud frame on the nearest neighbor image frame;
and coloring the point clouds corresponding to each projection point on the current point cloud frame according to the color information of the projection points.
In a technical solution of the above method for constructing a lane line map, the step of "projecting the point clouds on the current point cloud frame onto the nearest neighbor image frame to obtain the projection point, on the nearest neighbor image frame, of each point cloud on the current point cloud frame" specifically includes:
according to the timestamp of the nearest neighbor image frame, acquiring the exposure time of the nearest neighbor image frame and respectively acquiring two target point cloud frames which are before and after the exposure time and have the closest time distance with the nearest neighbor image frame;
acquiring the laser radar pose at the exposure time according to the laser radar poses of the two target point cloud frames;
acquiring a camera pose at the exposure time based on external parameters between the laser radar and the camera and according to the laser radar pose at the exposure time;
and projecting the point clouds on the current point cloud frame to the nearest neighbor image frame by using the camera pose so as to acquire a projection point of each point cloud on the current point cloud frame on the nearest neighbor image frame.
In one technical solution of the above method for constructing a lane line map, the step of "obtaining the lidar poses at the exposure time according to the lidar poses of the two target point cloud frames" includes:
respectively acquiring the time distance between the time stamp of each target point cloud frame and the exposure time;
respectively determining the pose weight of each target point cloud frame according to the time distance;
according to the pose weight of each target point cloud frame, weighting and calculating the laser radar poses of the two target point cloud frames;
and acquiring the laser radar pose at the exposure time according to the weighted sum calculation result.
In a technical solution of the method for constructing a lane line map, the step of "determining the pose weight of each target point cloud frame according to the time distance" includes:
calculating an inverse proportional value of the time distance;
and determining the pose weight according to the inverse proportion value.
In one technical solution of the above method for constructing a lane line map, before the step of "projecting the point clouds on the current point cloud frame onto the nearest neighbor image frame using the camera pose to obtain the projection point, on the nearest neighbor image frame, of each point cloud on the current point cloud frame", the method further includes:
and carrying out distortion removal processing on the point cloud on the current point cloud frame.
In one technical solution of the method for constructing a lane line map, the step of "stitching the colored point cloud frames to obtain a global point cloud map" specifically includes:
stitching the colored point cloud frames to obtain an initial global point cloud map;
and removing the point clouds belonging to the dynamic objects and the point clouds belonging to the non-ground elements on the initial global point cloud map to obtain a final global point cloud map.
In a second aspect, there is provided a computer device comprising a processor and a storage means adapted to store a plurality of program codes, the program codes being adapted to be loaded and run by the processor to perform the method of constructing a lane line map according to any one of the above-described aspects of the method of constructing a lane line map.
In a third aspect, there is provided a computer-readable storage medium having stored therein a plurality of program codes adapted to be loaded and run by a processor to execute the method of constructing a lane line map according to any one of the above-described aspects.
One or more technical solutions of the invention have at least one or more of the following beneficial effects:
according to the technical scheme, the point cloud frame and the image frame which are respectively acquired by the laser radar and the camera on the vehicle can be acquired, the point cloud frame is dyed according to the color information of the image frame, and the point cloud frames after dyeing are spliced after dyeing so as to acquire the global point cloud map; then, carrying out lane line vector identification through a lane line vector identification model according to the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map to obtain lane line vector information; and finally, establishing a lane line map according to the lane line vector information.
Through the coloring process, the color features of the image frames can be fused into the point cloud frames, so that a point cloud frame simultaneously carries its own point cloud features and the color features of the image frame, and the lane line vector information on the global point cloud map can be identified automatically by the lane line vector identification model. In addition, the color information, the laser reflection intensity and the height of the point cloud respectively represent the color feature, the laser reflection intensity feature and the geometric structure feature of the point cloud; using these three kinds of feature information for lane line vector identification can significantly improve the accuracy of the identification and overcomes the defect in the prior art that lane line vector information cannot be accurately detected using a laser radar or a camera alone.
Further, in the technical scheme of the method provided by the invention, when the lane line vector identification is carried out through the lane line vector identification model, a color BEV image, an intensity BEV image and a height BEV image can be respectively generated according to the color information, the laser reflection intensity and the height of point cloud on the global point cloud map, and then the lane line vector identification is carried out on the color BEV image, the intensity BEV image and the height BEV image to obtain the lane line vector information.
The map coordinate system of the global point cloud map is a three-dimensional coordinate system, the image coordinate system of each BEV image is a two-dimensional coordinate system, and the lane line vector recognition of the three-dimensional global point cloud map is converted into the lane line vector recognition of the two-dimensional BEV image, so that the recognition efficiency can be improved; in addition, based on the embodiment, the lane line vector recognition model can be built by directly utilizing the image recognition model in the technical field of image recognition, and the building difficulty and cost of the lane line vector recognition model are reduced.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As is readily understood by those skilled in the art: these drawings are for illustrative purposes only and are not intended to constitute a limitation on the scope of the present invention. Wherein:
fig. 1 is a flow chart illustrating main steps of a method for constructing a lane line map according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the main steps of a method for coloring a point cloud frame according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating the main steps of a method for obtaining a global point cloud map according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating the main steps of a method for obtaining lane line vector information according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating main steps of a method for acquiring lane line vector information according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of a rectangular sliced region according to one embodiment of the present invention;
FIG. 7 is a flow chart illustrating the main steps of a method for creating a lane line map based on lane line vector information according to an embodiment of the present invention;
fig. 8 is a main configuration diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "processor" may include hardware, software, or a combination of both. The processor may be a central processing unit, microprocessor, image processor, digital signal processor, or any other suitable processor. The processor has data and/or signal processing functionality. The processor may be implemented in software, hardware, or a combination thereof. Computer readable storage media include any suitable medium that can store program code such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random-access memory, and the like.
An embodiment of a method for constructing a lane line map is explained below.
Referring to fig. 1, fig. 1 is a flow chart illustrating main steps of a method for constructing a lane line map according to an embodiment of the present invention. As shown in fig. 1, the method for constructing a lane line map in the embodiment of the present invention mainly includes the following steps S101 to S105.
Step S101: and acquiring point cloud frames and image frames respectively acquired by a laser radar and a camera on the vehicle. Specifically, in the process that the vehicle runs along the map collecting path, the image frame is collected by a camera on the vehicle, and meanwhile, the point cloud frame is collected by a laser radar on the vehicle.
Step S102: coloring the point cloud frames according to the color information of the image frames.
Color information includes, but is not limited to, RGB information.
By coloring the point cloud frame, the color information of the image frame can be fused into the point cloud frame, so that the points on the point cloud frame contain not only their three-dimensional coordinates in the laser radar coordinate system and their laser reflection intensity but also color information; that is, the point cloud frame simultaneously possesses the point cloud characteristics of the laser radar point cloud and the image characteristics of the image frame.
Step S103: stitching the colored point cloud frames to obtain a global point cloud map. Specifically, the pose of each point cloud frame may be obtained, the relative pose (pose variation) between every two adjacent point cloud frames may be obtained from these poses, and the two adjacent point cloud frames may be stitched together according to their relative pose. In this way, all the point cloud frames can be stitched to obtain the global point cloud map. The map coordinate system of the global point cloud map is a global coordinate system, for example an east-north-up coordinate system.
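As an illustration of the stitching in step S103, the following is a minimal sketch (not the patented implementation): it accumulates colored point cloud frames into one global map given a per-frame laser radar pose. The 4x4 pose matrices, the [x, y, z, intensity, r, g, b] point layout and the function name are assumptions made for this example; in practice the poses would come from a laser radar odometry or SLAM back end.

```python
import numpy as np

def stitch_frames(frames, poses):
    """Stitch colored point cloud frames into a global point cloud map.

    frames: list of (N_i, 7) arrays [x, y, z, intensity, r, g, b], each
            expressed in its own laser radar coordinate system.
    poses:  list of 4x4 matrices mapping that frame's laser radar
            coordinates into the global coordinate system.
    """
    global_points = []
    for points, T_W_L in zip(frames, poses):
        xyz = points[:, :3]
        # Transform xyz into the global frame; intensity and color are
        # carried over unchanged.
        xyz_w = (T_W_L[:3, :3] @ xyz.T).T + T_W_L[:3, 3]
        global_points.append(np.hstack([xyz_w, points[:, 3:]]))
    return np.vstack(global_points)
```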
Step S104: and carrying out lane line vector identification through a lane line vector identification model according to the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map to obtain lane line vector information.
The color information of the point cloud is the color information obtained by the coloring process in the foregoing step.
The laser reflection intensity of the point cloud is the intensity of an echo signal that is reflected back to the lidar by an environmental point at the point cloud location after receiving the electromagnetic wave sent by the lidar.
The height of the point cloud is the Z-axis coordinate of the point cloud in the lidar coordinate system.
The lane line vector recognition model is a model which is trained in advance by adopting a machine learning algorithm such as a deep learning algorithm and has the capability of recognizing the lane line vector information from the color information, the laser reflection intensity and the height of the point cloud. When the lane line vector information needs to be obtained, the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map can be input into a lane line vector identification model, and the lane line vector information can be obtained by identifying the information through the model. It should be noted that, in the embodiment of the present invention, a conventional model training method may be adopted to train to obtain the lane line vector identification model, as long as the trained lane line vector identification model has the capability of identifying the lane line vector information, which is not repeated in detail in the embodiment of the present invention.
Step S105: and establishing a lane line map according to the lane line vector information.
After the lane line vector information is obtained, the lane line vector information may be marked on the global point cloud map, thereby obtaining the lane line map.
In the method described in the above steps S101 to S105, the color information, the laser reflection intensity, and the height of the point cloud may respectively represent the color feature, the laser reflection intensity feature, and the geometric structure feature of the point cloud, and the lane line vector identification is performed using these three feature information, which can significantly improve the accuracy of the lane line vector identification. In addition, by the method, the lane line vector information on the global point cloud map can be automatically identified through the lane line vector identification model, and then the lane line map can be automatically established.
The following further describes steps S102 to S105.
1. Step S102 will be explained.
Because the data acquisition frequencies of the laser radar and the camera are different, the timestamps of the point cloud frames and the timestamps of the image frames cannot be guaranteed to coincide exactly. Therefore, in order to improve the accuracy of point cloud frame coloring, the nearest neighbor image frame closest in time to the point cloud frame can be selected according to the timestamps for the coloring. Specifically, referring to fig. 2, in some embodiments of the above step S102, each point cloud frame may be colored through the following steps S1021 to S1023.
Step S1021: and acquiring the nearest neighbor image frame closest to the current point cloud frame according to the time stamp of each image frame and the time stamp of the current point cloud frame.
The time distance refers to the difference in time between two timestamps.
Step S1022: projecting the point clouds on the current point cloud frame onto the nearest neighbor image frame to obtain the projection point, on the nearest neighbor image frame, of each point cloud on the current point cloud frame.
Step S1023: coloring the point clouds corresponding to each projection point on the current point cloud frame according to the color information of the projection points. Specifically, the color information of the projection point is stored, as the color information of the point cloud, in the information contained in the point cloud.
Based on the method described in steps S1021 to S1023, the accuracy of the coloring process can be kept as high as possible even though the data acquisition frequencies of the laser radar and the camera are different.
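As a rough illustration of steps S1021 to S1023, the following sketch colors one point cloud frame from its nearest-neighbor image frame. The frame layout, the float timestamps and the project_to_image helper (standing in for the projection of steps 11 to 14 described below) are assumptions for the example, not elements taken from the patent.

```python
import numpy as np

def color_point_cloud_frame(cloud, cloud_stamp, images, image_stamps,
                            project_to_image):
    """Color one point cloud frame from its nearest-neighbor image frame.

    cloud:        (N, 4) array [x, y, z, intensity] in the laser radar frame.
    cloud_stamp:  timestamp of the point cloud frame (seconds).
    images:       list of HxWx3 RGB images.
    image_stamps: timestamps of the image frames (seconds).
    project_to_image: assumed helper mapping (N, 3) points to (N, 2) pixel
                      coordinates on the selected image.
    """
    # Step S1021: select the image frame closest in time to the cloud.
    idx = int(np.argmin(np.abs(np.asarray(image_stamps) - cloud_stamp)))
    image = images[idx]
    h, w, _ = image.shape

    # Step S1022: project every point onto that image.
    uv = project_to_image(cloud[:, :3], idx)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Step S1023: copy the color of each valid projection point onto the
    # corresponding point cloud point.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((cloud.shape[0], 3))
    colors[valid] = image[v[valid], u[valid]]
    return np.hstack([cloud, colors])
```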
The following further describes step S1022 and step S1023.
Step S1022 will be explained.
In order to reduce errors caused by point cloud projection due to different data acquisition frequencies of a laser radar and a camera as much as possible, the camera pose of an image frame at the exposure moment can be obtained, and the point cloud is projected by the camera pose. Specifically, in some embodiments of the above step S1022, the point cloud on the point cloud frame may be projected onto the nearest neighbor image frame through the following steps 11 to 14.
Step 11: and acquiring the exposure time of the nearest neighbor image frame according to the time stamp of the nearest neighbor image frame, and respectively acquiring two target point cloud frames which are before and after the exposure time and have the closest time distance with the nearest neighbor image frame.
The duration d0 of the image frame from exposure to output by the camera is usually relatively fixed and can be directly obtained by the attribute parameters of the camera. The time stamp of the image frame represents the time when the image frame is output by the camera, and the exposure time of the image frame can be obtained by subtracting the time length d0 from the time stamp.
Step 12: and acquiring the laser radar pose at the exposure time according to the laser radar poses of the two target point cloud frames.
The target point cloud frames are two point cloud frames closest to the exposure time, and the laser radar poses of the two point cloud frames are closer to the laser radar pose at the exposure time, so that the laser radar pose at the exposure time can be calculated by using the two point cloud frames.
In some embodiments, an interpolation calculation method may be adopted to perform interpolation calculation on the laser radar pose at the exposure time according to the laser radar pose and the timestamp of the target point cloud frame, and the interpolation calculation result is the laser radar pose at the exposure time.
In other embodiments, a weighting and calculating method may be adopted to perform weighting and calculation on the lidar poses of the two target point cloud frames, and the result of the weighting and calculation is taken as the lidar pose at the exposure time. Specifically, in the present embodiment, the laser radar pose at the time of exposure can be acquired through the following steps 121 to 124.
Step 121: and respectively acquiring the time distance between the timestamp of each target point cloud frame and the exposure moment. The time distance is a time difference between the time stamp and the exposure time.
Step 122: and respectively determining the pose weight of each target point cloud frame according to the time distance. The smaller the time distance is, the closer the laser radar pose of the target point cloud frame is to the laser radar pose at the exposure time, so that a larger pose weight can be set, and a smaller pose weight can be set otherwise. To facilitate setting the pose weights, in some preferred embodiments an inverse proportion of the temporal distance may be calculated, from which the pose weights are determined. For example, the inverse proportion value is directly used as the pose weight, or the inverse proportion value is normalized, and the result of the normalization processing is used as the pose weight.
Step 123: and weighting and calculating the laser radar poses of the two target point cloud frames according to the pose weight of each target point cloud frame.
Specifically, the weighted sum of the laser radar poses of the two target point cloud frames may be calculated as shown in the following formula (1):

T_L^W(t_c) = w_1 · T_L^W(t_1) + w_2 · T_L^W(t_2)    (1)

The meaning of each parameter in formula (1) is as follows:

t_c denotes the exposure time; T_L^W(t_c) denotes the laser radar pose at the exposure time, where L denotes the laser radar coordinate system and W denotes the global coordinate system, i.e. the laser radar pose is the pose transforming from the laser radar coordinate system to the global coordinate system; t_1 denotes the timestamp of the target point cloud frame that is closest in time to the nearest neighbor image frame before the exposure time, T_L^W(t_1) denotes the laser radar pose of that target point cloud frame, and w_1 denotes its pose weight; t_2 denotes the timestamp of the target point cloud frame that is closest in time to the nearest neighbor image frame after the exposure time, T_L^W(t_2) denotes the laser radar pose of that target point cloud frame, and w_2 denotes its pose weight.
Step 124: and acquiring the laser radar pose at the exposure time according to the weighted sum calculation result.
Based on the above steps 121 to 124, the laser radar pose at the exposure time can be obtained by weighting and calculating the laser radar poses of the two target point cloud frames.
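A minimal sketch of this weighting, under the assumption that poses are given as 4x4 matrices: translations are combined with the normalized inverse-distance weights of steps 121 to 122, while rotations are blended with spherical linear interpolation (treating rotation and translation separately is a common practical choice, not something the text prescribes).

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def lidar_pose_at_exposure(T1, t1, T2, t2, t_exp):
    """Estimate the laser radar pose at the exposure time (steps 121-124).

    T1, T2: 4x4 laser radar poses of the target point cloud frames before
            and after the exposure time; t1, t2: their timestamps.
    """
    # Steps 121-122: inverse-proportional weights, normalized to sum to 1.
    d1, d2 = abs(t_exp - t1), abs(t_exp - t2)
    w1, w2 = 1.0 / max(d1, 1e-9), 1.0 / max(d2, 1e-9)
    w1, w2 = w1 / (w1 + w2), w2 / (w1 + w2)

    # Steps 123-124: weighted combination of the two poses.
    t = w1 * T1[:3, 3] + w2 * T2[:3, 3]
    rots = Rotation.from_matrix(np.stack([T1[:3, :3], T2[:3, :3]]))
    R = Slerp([t1, t2], rots)(t_exp).as_matrix()

    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```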
The above is the description of step 12, and the description of steps 13 and 14 is continued below.
Step 13: and acquiring the camera pose at the exposure time based on external parameters between the laser radar and the camera and according to the laser radar pose at the exposure time.
Specifically, the camera pose at the exposure time may be calculated from the extrinsic parameters and the laser radar pose as shown in the following formula (2):

T_C^W(t_c) = T_L^W(t_c) · T_C^L    (2)

The meaning of each parameter in formula (2) is as follows:

T_C^W(t_c) denotes the camera pose at the exposure time, T_L^W(t_c) denotes the laser radar pose at the exposure time, T_C^L denotes the extrinsic parameters between the laser radar and the camera, L denotes the laser radar coordinate system, and C denotes the camera coordinate system.
Step 14: and projecting the point clouds on the current point cloud frame to the nearest neighbor image frame by using the camera pose so as to acquire the projection point of each point cloud on the current point cloud frame on the nearest neighbor image frame.
Assume that the timestamp of the current point cloud frame is t_k and the coordinates of a point cloud on the current point cloud frame in the laser radar coordinate system are p^L. After the camera pose T_C^W(t_c) is obtained, the point cloud is first projected (converted) from the laser radar coordinate system to the global coordinate system by the following formula (3), then projected (converted) from the global coordinate system to the camera coordinate system at the exposure time by the following formula (4), and finally projected (converted) from the camera coordinate system to the pixel coordinate system; the projection point in the pixel coordinate system is taken as the projection point of the point cloud on the nearest neighbor image frame.

p^W = T_L^W(t_k) · p^L    (3)

p^C = (T_C^W(t_c))^(-1) · p^W    (4)

The meaning of each parameter in formulas (3) and (4) is as follows:

p^W denotes the coordinates of the point cloud after it is projected from the laser radar coordinate system to the global coordinate system, and p^C denotes the coordinates of the point cloud after it is projected from the global coordinate system to the camera coordinate system.
Based on the method in the steps 11 to 14, a more accurate camera pose at the image frame exposure time can be obtained, so that errors caused by point cloud projection due to different data acquisition frequencies of the laser radar and the camera are reduced.
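A sketch of this projection chain: formula (3) maps the point from the laser radar frame to the global frame, formula (4) maps it into the camera frame at the exposure time, and a pinhole intrinsic matrix K (an assumption of this example; the patent does not specify the camera model) maps it to pixel coordinates.

```python
import numpy as np

def project_points(points_L, T_W_L_tk, T_W_C_tc, K):
    """Project points of the current point cloud frame onto the image.

    points_L:  (N, 3) coordinates in the laser radar frame at time t_k.
    T_W_L_tk:  4x4 laser radar pose of the current frame (formula (3)).
    T_W_C_tc:  4x4 camera pose at the exposure time (formula (4)).
    K:         3x3 pinhole intrinsic matrix (assumed camera model).
    """
    pts_h = np.hstack([points_L, np.ones((len(points_L), 1))])
    p_W = (T_W_L_tk @ pts_h.T).T                 # formula (3): lidar -> global
    p_C = (np.linalg.inv(T_W_C_tc) @ p_W.T).T    # formula (4): global -> camera
    in_front = p_C[:, 2] > 1e-6                  # keep points in front of camera
    z = np.where(in_front, p_C[:, 2], 1.0)       # avoid division by zero
    uvw = (K @ p_C[:, :3].T).T                   # camera -> pixel coordinates
    uv = uvw[:, :2] / z[:, None]
    return uv, in_front
```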
Step S1023 will be explained.
In order to further improve the accuracy of the coloring process, in some embodiments of step S1023, the point cloud on the current point cloud frame may first be subjected to distortion removal, and the undistorted point cloud may then be projected onto the nearest neighbor image frame using the camera pose, so as to obtain more accurate projection points and thereby further improve the accuracy of the coloring process.
Specifically, in this embodiment, each data acquisition time of the IMU (Inertial Measurement Unit), i.e. each IMU acquisition time, may be obtained according to the data acquisition frequency of the IMU, the IMU pose at each IMU acquisition time may be obtained, and the IMU pose may be converted into the laser radar coordinate system according to the extrinsic parameters between the IMU and the laser radar, so that the laser radar pose at each IMU acquisition time is obtained. The acquisition time of each point cloud on the current point cloud frame is then obtained, and the laser radar pose at the acquisition time of each point cloud is obtained by time interpolation using the laser radar poses at the IMU acquisition times. Finally, all point clouds are converted into the laser radar coordinate system at the start time of the current point cloud frame according to the laser radar pose at the acquisition time of each point cloud, so as to obtain a point cloud frame free of motion distortion.
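A compact sketch of this de-distortion, assuming the per-point acquisition times and the laser radar poses at the IMU acquisition times are already available and lie within the pose time range; interpolation between IMU poses is done here with slerp for rotation and linear interpolation for translation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_frame(points, point_times, pose_times, poses, t_start):
    """Remove motion distortion from one point cloud frame.

    points:      (N, 3) raw points, each in the laser radar frame at the
                 instant it was measured.
    point_times: (N,) acquisition time of every point.
    pose_times:  (M,) IMU acquisition times (strictly increasing).
    poses:       list of M 4x4 laser radar poses at those times.
    t_start:     start time of the frame; output points are expressed in
                 the laser radar frame at this instant.
    """
    slerp = Slerp(pose_times,
                  Rotation.from_matrix(np.stack([T[:3, :3] for T in poses])))
    trans = np.stack([T[:3, 3] for T in poses])

    def pose_at(t):
        T = np.eye(4)
        T[:3, :3] = slerp(t).as_matrix()
        T[:3, 3] = [np.interp(t, pose_times, trans[:, k]) for k in range(3)]
        return T

    T_start_inv = np.linalg.inv(pose_at(t_start))
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, point_times)):
        # World position of the point using the pose at its own time,
        # then back into the laser radar frame at the frame start time.
        out[i] = (T_start_inv @ pose_at(t) @ np.append(p, 1.0))[:3]
    return out
```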
2. Step S103 will be explained.
The method for constructing the lane line map mainly acquires the vector information of the lane lines on the global point cloud map, so that objects and map elements irrelevant to the lane lines can be removed in order to improve the map construction efficiency. Specifically, in some embodiments of the above step S103, the global point cloud map may be acquired through the following steps S1031 to S1032 shown in fig. 3.
Step S1031: stitching the colored point cloud frames to obtain an initial global point cloud map.
Step S1032: and removing the point clouds belonging to the dynamic objects and the point clouds belonging to the non-ground elements on the initial global point cloud map to obtain a final global point cloud map.
Specifically, a conventional dynamic object identification method in the field of point cloud technology may be adopted to identify point clouds belonging to dynamic objects on a global point cloud map. In addition, a conventional ground point cloud obtaining method in the technical field of point cloud can be adopted to obtain ground point cloud on the global point cloud map, a ground plane of the global point cloud map can be obtained through fitting according to the ground point cloud, and point cloud belonging to non-ground elements (such as buildings) can be determined according to the ground plane.
Based on the method described in the above steps S1031 to S1032, only the point cloud related to the lane line mapping task may be retained on the global point cloud map, thereby improving the mapping efficiency.
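Dynamic-object removal needs a dedicated detector and is not sketched here; the ground-related part of step S1032 can be illustrated with a simple RANSAC plane fit (one conventional choice, not mandated by the text) that keeps only points close to the fitted ground plane.

```python
import numpy as np

def keep_near_ground(points, dist_thresh=0.3, iters=200, seed=0):
    """Keep points close to a RANSAC-fitted ground plane.

    points: (N, >=3) map points; columns 0..2 are x, y, z. Points far from
            the plane (e.g. buildings) are dropped as non-ground elements.
    """
    rng = np.random.default_rng(seed)
    xyz = points[:, :3]
    best = None
    for _ in range(iters):
        sample = xyz[rng.choice(len(xyz), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(normal) < 1e-9:
            continue                       # degenerate (collinear) sample
        normal = normal / np.linalg.norm(normal)
        dist = np.abs((xyz - sample[0]) @ normal)
        inliers = dist < dist_thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return points[best]
```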
3. Step S104 will be explained.
In the embodiment of the invention, based on an image recognition method, the lane line vector recognition model is used for performing lane line vector recognition to obtain the lane line vector information. Specifically, referring to fig. 4, the lane line vector information may be acquired through the following steps S1041 to S1042.
Step S1041: and respectively generating a color BEV (Bird's Eye View) image, an intensity BEV image and a height BEV image according to the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map.
Specifically, the point cloud on the global point cloud map is projected onto a plane perpendicular to the height direction of the point cloud (i.e. a horizontal plane) to obtain a planar image, and the planar image is divided into a plurality of grids according to a preset BEV image resolution; then, the average color information, the average laser reflection intensity and the average height of the point clouds projected into each grid are counted respectively. Finally, a color BEV image is generated based on the planar image and the average color information of each grid; an intensity BEV image is generated based on the planar image and the average laser reflection intensity of each grid; and a height BEV image is generated based on the planar image and the average height of each grid.
It should be noted that the color information can be quantified numerically, for example as RGB values, so the average color information of the point clouds in each grid can be calculated from the color information of those point clouds.
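A sketch of this rasterization, assuming the colored map (or map block) is given as an array with columns [x, y, z, intensity, r, g, b] and a fixed metric resolution per BEV pixel; empty grid cells are simply left at zero.

```python
import numpy as np

def make_bev_images(points, resolution=0.1):
    """Average color, intensity and height of the points in each BEV cell.

    points:     (N, 7) array [x, y, z, intensity, r, g, b].
    resolution: metric size of one BEV pixel in meters (assumed value).
    """
    cols = ((points[:, 0] - points[:, 0].min()) / resolution).astype(int)
    rows = ((points[:, 1] - points[:, 1].min()) / resolution).astype(int)
    h, w = rows.max() + 1, cols.max() + 1

    count = np.zeros((h, w))
    color = np.zeros((h, w, 3))
    intensity = np.zeros((h, w))
    height = np.zeros((h, w))
    # Accumulate per-cell sums, then divide by the per-cell point count.
    np.add.at(count, (rows, cols), 1)
    np.add.at(color, (rows, cols), points[:, 4:7])
    np.add.at(intensity, (rows, cols), points[:, 3])
    np.add.at(height, (rows, cols), points[:, 2])
    nz = np.maximum(count, 1)
    return color / nz[..., None], intensity / nz, height / nz
```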
Step S1042: and carrying out lane line vector identification on the color BEV image, the intensity BEV image and the height BEV image through a lane line vector identification model to obtain lane line vector information.
The lane line vector recognition model may be an image recognition model pre-trained using a machine learning algorithm, such as a deep learning algorithm, that has the ability to recognize lane line vector information from the color BEV image, the intensity BEV image, and the height BEV image. When the lane line vector information needs to be obtained, the color BEV image, the intensity BEV image and the height BEV image can be input into a lane line vector recognition model, and the lane line vector information can be obtained by recognizing the above information through the model.
In the embodiment of the invention, the lane line vector identification model can comprise an image feature extraction network and an identification network, wherein the image feature extraction network can be used for extracting the image features of the color BEV image, the intensity BEV image and the height BEV image, and the identification network can be used for performing lane line vector identification on the image features to obtain lane line vector information. Due to the fact that data acquisition frequencies of the laser radar and the cameras are different, image features of the color BEV images cannot be completely aligned with image features of the other two BEV images, and if image feature extraction is carried out on the three BEV images through the same feature extraction network, accuracy of the image features may be reduced.
In some preferred embodiments, two image feature extraction networks (a first image feature extraction network and a second image feature extraction network) and one feature fusion network may be provided, where the first image feature extraction network may be configured to extract image features of the intensity BEV image and the height BEV image, the second image feature extraction network may be configured to extract image features of the color BEV image, and the feature fusion network may be configured to fuse the image features obtained by the two image feature extraction networks and input the fused features into the recognition network for recognition. Further, referring to fig. 5, in the present embodiment, the lane line vector recognition may be performed on the color BEV image, the intensity BEV image, and the height BEV image by using the lane line vector recognition model through the following steps S201 to S204.
Step S201: and respectively extracting first image features of the intensity BEV image and the height BEV image through a first image feature extraction network in the lane line vector identification model.
Step S202: and extracting a second image feature of the color BEV image through a second image feature extraction network in the lane line vector identification model.
Step S203: and performing feature fusion on the first image features and the second image features through a feature fusion network in the lane line vector recognition model to obtain fusion features.
Step S204: carrying out lane line vector identification on the fusion features through the identification network in the lane line vector identification model to obtain lane line vector information.
Based on the method described in the above steps S201 to S204, the problem that the image features of the color BEV image, the intensity BEV image, and the height BEV image cannot be accurately extracted due to different data acquisition frequencies of the laser radar and the camera can be avoided.
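The patent does not disclose the internal structure of the networks beyond this two-branch topology, so the following PyTorch sketch only illustrates steps S201 to S204: the convolutional blocks are arbitrary small stand-ins, and a per-pixel lane score replaces the actual recognition network.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class LaneVectorNet(nn.Module):
    """Two feature-extraction branches, a fusion network and a head."""

    def __init__(self):
        super().__init__()
        self.branch_ih = conv_block(2, 32)    # intensity BEV + height BEV
        self.branch_rgb = conv_block(3, 32)   # color BEV
        self.fusion = conv_block(64, 64)      # feature fusion network
        self.head = nn.Conv2d(64, 1, 1)       # stand-in recognition head

    def forward(self, intensity_bev, height_bev, color_bev):
        f1 = self.branch_ih(torch.cat([intensity_bev, height_bev], dim=1))  # S201
        f2 = self.branch_rgb(color_bev)                                     # S202
        fused = self.fusion(torch.cat([f1, f2], dim=1))                     # S203
        return self.head(fused)                                             # S204
```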
Based on the method described in the above steps S1041 to S1042, the lane line vector information can be obtained by identifying the two-dimensional BEV image without identifying the three-dimensional global point cloud map, and compared with the three-dimensional global point cloud map, identifying the two-dimensional BEV image can reduce the consumption of computing resources and improve the identification efficiency. In addition, the image recognition model in the technical field of image recognition can be directly utilized to build the lane line vector recognition model, and the building difficulty and cost of the lane line vector recognition model are reduced.
Step S1041 and step S1042 are further described below.
Step S1041 will be explained.
Because the coverage area of the global point cloud map is usually large, the lane line is long, if the global point cloud map is converted into a color, intensity and height BEV image as a whole, and the BEV image is input into the lane line vector identification model for identification, the performance requirement on the lane line vector identification model is high, and the identification efficiency may be influenced. In this regard, in some embodiments, color, intensity, and height BEV images may be acquired through the following steps 21 through 22.
Step 21: and cutting the global point cloud map into a plurality of continuous map blocks.
Step 22: and respectively generating a color BEV image, an intensity BEV image and a height BEV image of each map block according to the color information, the laser reflection intensity and the height of the point cloud on each map block.
The method for generating the BEV images of a map block is similar to the method for generating BEV images described in the foregoing step S1041; the difference is mainly that in the foregoing step S1041 each BEV image is generated according to information such as the color of the entire global point cloud map, whereas in the present embodiment each BEV image belonging to a map block is generated according to information such as the color of that map block.
Based on the method described in the above steps 21 to 22, the color, intensity and height BEV images of the plurality of map blocks can be obtained, and compared with the color, intensity and height BEV images converted from the global point cloud map as a whole, the size of each BEV image of the map block is much smaller, and the included features are also much smaller, so that the performance requirement on the lane line vector recognition model can be reduced when the recognition is performed through the lane line vector recognition model, and the recognition efficiency can be improved.
Step 21 is further described below.
The driving track generated when the vehicle drives on the lane is generally consistent or similar to the lane track, so that the driving track of the vehicle on the global point cloud map can be obtained, and the global point cloud map is divided into a plurality of continuous map blocks along the driving track, so that each map block can be ensured to contain lane lines. In some preferred embodiments, the global point cloud map may be segmented to form a plurality of continuous map segments through the following steps 211 to 213.
Step 211: and acquiring a plurality of track points from the driving track according to a preset distance.
The number of the track points is consistent with that of the map blocks.
Step 212: and in the tangential direction parallel to the driving track, each track point is taken as a central point to respectively define a rectangular segmentation area.
As shown in fig. 6, 5 track points tp1, tp2, tp3, tp4 and tp5 are obtained from the driving track at the preset distance. For each track point, the tangential direction of the driving track at the position of that track point is determined, and a rectangular segmentation region is then defined with the track point as its center point and aligned parallel to that tangential direction. Two sides of the rectangular segmentation region are parallel to the tangential direction, and the other two sides are perpendicular to it.
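A sketch of the region definition for one track point, estimating the tangential direction by finite differences along the driving track; the rectangle dimensions are placeholder values that would in practice follow from the map block constraints described next.

```python
import numpy as np

def rectangle_for_track_point(trajectory, i, length=60.0, width=30.0):
    """Corner points of the rectangular segmentation region at track point i.

    trajectory: (M, 2) ordered x/y positions of the driving track.
    length:     side parallel to the tangential direction (meters).
    width:      side perpendicular to it (meters); both values are
                illustrative assumptions.
    """
    center = trajectory[i]
    tangent = (trajectory[min(i + 1, len(trajectory) - 1)]
               - trajectory[max(i - 1, 0)])
    tangent = tangent / np.linalg.norm(tangent)
    normal = np.array([-tangent[1], tangent[0]])   # perpendicular direction
    hl, hw = length / 2.0, width / 2.0
    return np.array([
        center + hl * tangent + hw * normal,
        center + hl * tangent - hw * normal,
        center - hl * tangent - hw * normal,
        center - hl * tangent + hw * normal,
    ])
```

The points of the global point cloud map falling inside each such rectangle then form the corresponding map block (step 213).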
In order to improve the accuracy of lane line vector identification of the map block, in some embodiments, the distance value of the preset distance and the size of the rectangular segmentation region may be determined in the following manner.
Firstly, obtaining a constraint condition of a preset map block, wherein the constraint condition comprises that an overlapping area exists between adjacent map blocks, the area range of the overlapping area is larger than a preset range threshold value, and metric distances corresponding to pixels on a color BEV image, an intensity BEV image and a height BEV image generated according to the map blocks are smaller than a preset distance threshold value. And then, respectively determining the distance value of the preset distance and the size of the rectangular segmentation region according to the constraint conditions.
The following describes a method for obtaining metric distances corresponding to pixels on a color, intensity, and height BEV image.
As can be seen from the foregoing step S1041, when the color, intensity, and height BEV image is generated, the point cloud on the global point cloud map is projected onto a plane perpendicular to the point cloud height to obtain a plane image, and then the plane image is divided into a plurality of grids according to the preset resolution of the BEV image, where each grid is a pixel of each BEV image. After the resolution of each BEV image is known, the metric distance corresponding to each pixel can be calculated by using the size of each BEV image.
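As a worked illustration with purely hypothetical numbers: a 60 m x 30 m map block rasterized into a 1200 x 600 pixel BEV image corresponds to 0.05 m per pixel, which would satisfy a preset distance threshold of, say, 0.1 m per pixel.

```python
# Hypothetical example values, not taken from the patent.
block_length_m, block_width_m = 60.0, 30.0   # physical extent of a map block
bev_cols, bev_rows = 1200, 600               # BEV image resolution in pixels
metric_per_pixel = max(block_length_m / bev_cols, block_width_m / bev_rows)
assert metric_per_pixel < 0.1                # preset distance threshold check
```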
Through the overlapping-region constraint, adjacent map blocks are guaranteed to have a sufficiently large overlapping region, which facilitates the fusion of the lane line vector information of the map blocks, improves the fusion accuracy, and yields accurate lane line vector information for the whole global point cloud map.
Through the metric-distance constraint, a higher precision of the lane line vector information recognized from each map block can be ensured. Thus, a highly accurate lane line map can be obtained when the lane line map is established based on this lane line vector information.
It should be noted that, a person skilled in the art may flexibly set the preset range threshold and the preset distance threshold according to actual requirements, for example, the preset distance threshold may be a metric distance corresponding to each map point on the high-precision map. The high-precision map is a map with the precision of map elements larger than a set value, and the high-precision map can at least provide a lane-level navigation route. The accuracy of map elements in high-precision maps may be on the centimeter level and the accuracy of map elements in non-high-precision maps may be on the meter level. The automatic driving control is carried out on the vehicle based on the high-precision map, the accuracy of navigation path planning can be obviously improved, and meanwhile, the safety and the reliability of the vehicle in the driving process can also be obviously improved.
Step 213: and respectively segmenting the global point cloud map according to each rectangular segmentation area obtained by demarcation to obtain a plurality of continuous map blocks.
Because the rectangular segmentation areas are continuous and one rectangular segmentation area is a map block, the global point cloud map is segmented according to the rectangular segmentation areas, and a plurality of continuous map blocks can be obtained.
Based on the methods described in the above steps 211 to 213, continuous map blocks that accurately cover the lane lines can be obtained, which is beneficial to improving the accuracy of performing the lane line vector recognition on the map blocks.
Step S1042 will be explained.
As can be seen from the above embodiment of step S1041, a plurality of BEV images of different tiles may be acquired, and in this case, lane line vector information may be obtained by performing lane line vector recognition on each BEV image through the following steps 31 to 32.
Step 31: acquiring the color BEV image, the intensity BEV image and the height BEV image of each map block.
Step 32: and respectively carrying out lane line vector identification on the color BEV image, the intensity BEV image and the height BEV image of each map block through a lane line vector identification model to obtain lane line vector information of each map block.
Based on the method described in the above steps 31 to 32, lane line vector recognition can be performed on each map block, and lane line vector information of each map block can be accurately obtained.
4. Step S105 will be explained.
As can be seen from the above embodiment of step S1042, the lane line vector information of each map block can be obtained through the lane line vector identification model, in which case, the lane line map can be created through steps S1051 to S1053 shown in fig. 7.
Step S1051: and respectively converting the lane line vector information of each map block into a global coordinate system.
The lane line vector information is obtained by identifying each BEV image of the map block, the lane line vector information is actually the lane line vector information in an image coordinate system, and the map coordinate system of the global point cloud map is the global coordinate system, so the lane line vector information needs to be converted into the global coordinate system firstly, and then the lane line map is established by utilizing the global point cloud map and the lane line vector information.
Specifically, the height BEV image includes height information of the point cloud, that is, coordinates of the point cloud on the Z axis in the laser radar coordinate system, the coordinates of the Z axis may be used as coordinates of the lane line vector information on the Z axis in the local three-dimensional coordinate system of the map block, and coordinates of the lane line vector information on the X axis and the Y axis on any one BEV image may be used as coordinates of the lane line vector information on the X axis and the Y axis in the local three-dimensional coordinate system, respectively, so that three-dimensional coordinates of the lane line vector information on the local three-dimensional coordinate system may be obtained. After the three-dimensional coordinate is obtained, the three-dimensional coordinate is converted into a laser radar coordinate system, and then the laser radar coordinate system is converted into a global coordinate system.
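A sketch of this chain for a single detected lane line point, assuming each map block stores the metric origin and resolution of its BEV images and a single transform from the block's local frame to the global coordinate system; the argument names are illustrative only.

```python
import numpy as np

def bev_point_to_global(u, v, height_bev, origin_xy, resolution, T_W_local):
    """Convert one lane line point from BEV pixel coordinates to the
    global coordinate system.

    u, v:       pixel column/row of the detected lane line point.
    height_bev: height BEV image of the same map block (supplies Z).
    origin_xy:  metric x/y of pixel (0, 0) in the block's local frame.
    resolution: meters per BEV pixel.
    T_W_local:  4x4 transform from the block's local frame to the global
                coordinate system (assumed to be known per block).
    """
    # Pixel coordinates -> local 3D coordinates of the map block.
    p_local = np.array([origin_xy[0] + u * resolution,
                        origin_xy[1] + v * resolution,
                        height_bev[v, u],
                        1.0])
    # Local coordinates -> global coordinate system.
    return (T_W_local @ p_local)[:3]
```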
Step S1052: and fusing the lane line vector information of each map block in the global coordinate system to obtain the final lane line vector information.
When the lane line vector information is fused, the lane line vector information of the multiple map blocks may be directly merged to form the final lane line vector information; alternatively, the lane line vector information of each map block may first be optimized, for example by factor graph optimization or Kalman filtering, to obtain more accurate lane line vector information, which is then merged to form the final lane line vector information.
The method of factor graph optimization and kalman filtering is briefly described below.
1. Factor graph optimization
In the embodiment of the invention, a factor graph can be established according to the map blocks, the factor nodes on the factor graph are respectively in one-to-one correspondence with each map block, a constraint item is also set on the factor graph, and the constraint item is used for constraining the lane line vector information of two adjacent map blocks or constraining the lane line vector information of two map blocks forming a loop relation, and the like. Those skilled in the art can flexibly set the specific content of the constraint item according to actual requirements, and the embodiment of the present invention is not particularly limited thereto. In addition, in the embodiment of the present invention, a conventional method for optimizing data based on a factor graph in the technical field of automatic driving may be adopted, and the lane line vector information of each map block is optimized based on the factor graph, which is not specifically limited in the embodiment of the present invention.
2. Kalman filtering
In the embodiment of the invention, a lane line vector information estimation model may be established based on Kalman filtering theory, the lane line vector information of each map block obtained by the lane line vector identification model is used as the observation, and the optimal lane line vector information is estimated based on the lane line vector information estimation model and the observation. A person skilled in the art may use a conventional Kalman filtering method in the field of automatic driving technology to establish the lane line vector information estimation model and to estimate the optimal lane line vector information based on that model and the observations, which is not specifically limited in the embodiment of the present invention.
Step S1053: and establishing a lane line map according to the final lane line vector information.
After the final lane line vector information is obtained, the lane line vector information can be marked on the global point cloud map, so that the lane line map is obtained.
Based on the method of steps S1051 to S1053, the lane line vector information of the map blocks can be fused to obtain accurate lane line vector information, which is beneficial to obtaining a more accurate lane line map.
It should be noted that, although the foregoing embodiments describe the steps in a specific sequence, those skilled in the art will understand that, in order to achieve the effect of the present invention, the steps do not have to be executed in that sequence; they may be executed simultaneously (in parallel) or in other sequences. The solutions obtained after such adjustment are equivalent to the solutions described in the present invention and therefore also fall within the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the flow of the method according to the above-described embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and used to implement the steps of the above-described method embodiments when the computer program is executed by a processor. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, or the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable storage media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
Furthermore, the invention also provides computer equipment.
Referring to fig. 8, fig. 8 is a schematic diagram of the main structure of an embodiment of a computer device according to the present invention. As shown in fig. 8, the computer device in the embodiment of the present invention mainly includes a storage device and a processor. The storage device may be configured to store a program for executing the method for constructing a lane line map in the above-described method embodiment, and the processor may be configured to execute the program in the storage device, the program including, but not limited to, the program for executing the method for constructing a lane line map in the above-described method embodiment. For convenience of explanation, only the parts related to the embodiments of the present invention are shown, and specific technical details are not disclosed.
The computer device in the embodiment of the present invention may be a control apparatus formed by various electronic devices. In some possible implementations, the computer device may include multiple storage devices and multiple processors. The program executing the method for constructing a lane line map according to the above method embodiment may be divided into multiple sub-programs, and each sub-program may be loaded and executed by a processor to perform different steps of the method. Specifically, each sub-program may be stored in a different storage device, and each processor may be configured to execute the programs in one or more storage devices, so that the processors jointly implement the method for constructing a lane line map of the above method embodiment, that is, each processor executes different steps of the method to implement it together.
The multiple processors may be processors disposed on the same device, for example, the computer device may be a high-performance device composed of multiple processors, and the multiple processors may be processors configured on the high-performance device. In addition, the multiple processors may also be processors disposed on different devices, for example, the computer device may be a server cluster, and the multiple processors may be processors on different servers in the server cluster.
Further, the invention also provides a computer readable storage medium.
In an embodiment of a computer-readable storage medium according to the present invention, the computer-readable storage medium may be configured to store a program that executes the method of constructing a lane line map of the above-described method embodiment, and the program may be loaded and executed by a processor to implement that method. For convenience of explanation, only the parts related to the embodiments of the present invention are shown, and specific technical details are not disclosed. The computer-readable storage medium may be a storage device formed by various electronic devices; optionally, the computer-readable storage medium in the embodiment of the present invention is a non-transitory computer-readable storage medium.
So far, the technical solution of the present invention has been described in conjunction with one embodiment shown in the accompanying drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of the related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical solutions after such changes or substitutions will also fall within the protection scope of the invention.
Claims (17)
1. A method for constructing a lane line map, the method comprising:
acquiring a point cloud frame and an image frame which are respectively acquired by a laser radar and a camera on a vehicle;
dyeing the point cloud frame according to the color information of the image frame;
splicing the point cloud frames after dyeing to obtain a global point cloud map;
performing lane line vector identification through a lane line vector identification model according to the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map to obtain lane line vector information;
and establishing a lane line map according to the lane line vector information.
2. The method according to claim 1, wherein the step of obtaining lane line vector information by performing lane line vector identification through a lane line vector identification model according to color information, laser reflection intensity and height of point clouds on the global point cloud map specifically comprises:
respectively generating a color BEV image, an intensity BEV image and a height BEV image according to the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map;
and carrying out lane line vector identification on the color BEV image, the intensity BEV image and the height BEV image through a lane line vector identification model to obtain lane line vector information.
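For illustration only, the following sketch shows one possible way of generating the color BEV image, intensity BEV image and height BEV image from the dyed point cloud as described above: points are binned onto a ground-plane grid, and each pixel keeps the color, laser reflection intensity and height of the highest point falling into it. The 0.05 m grid resolution, the array layouts and the highest-point rule are assumptions, not part of the claims.

```python
import numpy as np

def point_cloud_to_bev(xyz, rgb, intensity, x_range, y_range, res=0.05):
    """xyz: (N, 3) dyed point cloud; rgb: (N, 3) uint8 colors; intensity: (N,)."""
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    color_bev = np.zeros((h, w, 3), dtype=np.uint8)
    intensity_bev = np.zeros((h, w), dtype=np.float32)
    height_bev = np.full((h, w), -np.inf, dtype=np.float32)
    u = ((xyz[:, 0] - x_range[0]) / res).astype(int)
    v = ((xyz[:, 1] - y_range[0]) / res).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, p, c, i in zip(u[ok], v[ok], xyz[ok], rgb[ok], intensity[ok]):
        if p[2] > height_bev[vi, ui]:              # keep the highest point per cell
            height_bev[vi, ui] = p[2]
            color_bev[vi, ui] = c
            intensity_bev[vi, ui] = i
    height_bev[np.isinf(height_bev)] = 0.0         # cells with no points
    return color_bev, intensity_bev, height_bev
```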
3. The method according to claim 2, wherein the step of generating a color BEV image, an intensity BEV image and a height BEV image from the color information, the laser reflection intensity and the height of the point cloud on the global point cloud map respectively specifically comprises:
cutting the global point cloud map into a plurality of continuous map blocks;
and respectively generating a color BEV image, an intensity BEV image and a height BEV image of each map block according to the color information, the laser reflection intensity and the height of the point cloud on each map block.
4. The method of claim 3, wherein the step of "cutting the global point cloud map into a plurality of continuous map blocks" comprises:
acquiring a running track of a vehicle on the global point cloud map;
and cutting the global point cloud map into a plurality of continuous map blocks along the driving track.
5. The method according to claim 4, wherein the step of "cutting the global point cloud map into a plurality of continuous map blocks along the driving track" comprises:
acquiring a plurality of track points from the driving track according to a preset distance;
in the tangential direction parallel to the driving track, each track point is taken as a central point, and a rectangular segmentation area is respectively defined;
and segmenting the global point cloud map respectively according to each rectangular segmentation area obtained by demarcation to obtain a plurality of continuous map blocks.
6. The method according to claim 5, wherein before the step of "acquiring a plurality of track points from the driving track according to a preset distance", the method further comprises determining a distance value of the preset distance and a size of the rectangular segmentation area by:
acquiring constraint conditions of preset map blocks;
respectively determining the distance value of the preset distance and the size of the rectangular segmentation region according to the constraint condition of the preset map block;
wherein the preset map block constraint conditions include:
the method comprises the steps that an overlapping area exists between adjacent map blocks, the area range of the overlapping area is larger than a preset range threshold value, and metric distances corresponding to pixels on a color BEV image, an intensity BEV image and a height BEV image generated according to the map blocks are smaller than a preset distance threshold value.
7. The method according to claim 3, wherein the step of performing lane line vector recognition on the color BEV image, the intensity BEV image, and the height BEV image by a lane line vector recognition model to obtain lane line vector information specifically comprises:
acquiring a color BEV image, an intensity BEV image and a height BEV image of each map block;
and respectively carrying out lane line vector identification on the color BEV image, the intensity BEV image and the height BEV image of each map block through a lane line vector identification model to obtain lane line vector information of each map block.
8. The method according to claim 7, wherein the step of building a lane line map according to the lane line vector information specifically comprises:
respectively converting the lane line vector information of each map block into a global coordinate system;
fusing lane line vector information of each map block in a global coordinate system to obtain final lane line vector information;
and establishing a lane line map according to the final lane line vector information.
9. The method according to any one of claims 2 to 7, wherein the step of performing lane line vector recognition on the color BEV image, the intensity BEV image, and the height BEV image by a lane line vector recognition model to obtain lane line vector information further comprises:
respectively extracting first image features of the intensity BEV image and the height BEV image through a first image feature extraction network in a lane line vector identification model;
extracting a second image feature of the color BEV image through a second image feature extraction network in the lane line vector identification model;
performing feature fusion on the first image feature and the second image feature through a feature fusion network in the lane line vector identification model to obtain a fusion feature;
and carrying out lane line vector identification on the fusion feature through an identification network in the lane line vector identification model to obtain lane line vector information.
10. The method according to claim 1, wherein the step of "dyeing the point cloud frame according to the color information of the image frame" specifically comprises performing dyeing processing on each point cloud frame by:
acquiring a nearest neighbor image frame closest to the current point cloud frame in time distance according to the time stamp of each image frame and the time stamp of the current point cloud frame;
projecting the point clouds on the current point cloud frame to the nearest neighbor image frame to obtain a projection point of each point cloud on the current point cloud frame on the nearest neighbor image frame;
and respectively dyeing the point clouds corresponding to each projection point on the current point cloud frame according to the color information of the projection points.
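For illustration only, a minimal sketch of the projection and dyeing operation described above, assuming a pinhole camera model with intrinsic matrix K and a lidar-to-camera transform T_cam_lidar; these names, the array layouts and the depth threshold are assumptions, not part of the claims.

```python
import numpy as np

def dye_point_cloud(points_lidar, image_rgb, K, T_cam_lidar):
    """points_lidar: (N, 3) points of the current point cloud frame in the lidar frame;
    image_rgb: (H, W, 3) nearest neighbor image frame; K: (3, 3) camera intrinsics;
    T_cam_lidar: (4, 4) lidar -> camera transform.
    Returns (N, 3) colors; points projecting outside the image keep zeros."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    colors = np.zeros((n, 3), dtype=np.uint8)
    in_front = pts_cam[:, 2] > 0.1                       # only points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)          # projection points (u, v)
    h, w = image_rgb.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[ok]
    colors[idx] = image_rgb[uv[ok, 1], uv[ok, 0]]        # copy pixel color onto the point
    return colors
```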
11. The method of claim 10, wherein the step of "projecting the point clouds on the current point cloud frame to the nearest neighbor image frame to obtain a projection point of each point cloud on the current point cloud frame on the nearest neighbor image frame" specifically comprises:
according to the timestamp of the nearest neighbor image frame, acquiring the exposure time of the nearest neighbor image frame and respectively acquiring two target point cloud frames which are before and after the exposure time and have the closest time distance with the nearest neighbor image frame;
acquiring the laser radar pose at the exposure time according to the laser radar poses of the two target point cloud frames;
acquiring a camera pose at the exposure time based on external parameters between the laser radar and the camera and according to the laser radar pose at the exposure time;
and projecting the point clouds on the current point cloud frame to the nearest neighbor image frame by using the camera pose so as to acquire a projection point of each point cloud on the current point cloud frame on the nearest neighbor image frame.
12. The method according to claim 11, wherein the step of obtaining the lidar pose at the exposure time from the lidar poses of the two target point cloud frames specifically comprises:
respectively acquiring the time distance between the time stamp of each target point cloud frame and the exposure time;
respectively determining the pose weight of each target point cloud frame according to the time distance;
according to the pose weight of each target point cloud frame, weighting and calculating the laser radar poses of the two target point cloud frames;
and acquiring the laser radar pose at the exposure time according to the weighted sum calculation result.
13. The method according to claim 12, wherein the step of determining the pose weight of each target point cloud frame according to the time distance comprises:
calculating an inverse proportional value of the time distance;
and determining the pose weight according to the inverse proportion value.
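For illustration only, a minimal sketch of the pose weighting described above: the lidar pose at the exposure time is obtained as a weighted combination of the poses of the two target point cloud frames, with weights inversely proportional to each frame's time distance from the exposure time. The linear blending of the two 4x4 poses, the SVD re-orthonormalisation of the rotation and the epsilon term are assumptions; a practical system might instead interpolate rotations with quaternion slerp.

```python
import numpy as np

def interpolate_lidar_pose(T_a, t_a, T_b, t_b, t_exposure, eps=1e-9):
    """T_a, T_b: 4x4 lidar poses of the two target point cloud frames stamped t_a, t_b."""
    w_a = 1.0 / (abs(t_exposure - t_a) + eps)           # inverse-proportional weights
    w_b = 1.0 / (abs(t_exposure - t_b) + eps)
    w_a, w_b = w_a / (w_a + w_b), w_b / (w_a + w_b)
    T = w_a * T_a + w_b * T_b                           # weighted sum of the two poses
    U, _, Vt = np.linalg.svd(T[:3, :3])                 # project back onto a valid rotation
    T[:3, :3] = U @ Vt
    T[3, :] = [0.0, 0.0, 0.0, 1.0]
    return T
```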
14. The method of claim 11, wherein before the step of "projecting the point clouds on the current point cloud frame to the nearest neighbor image frame by using the camera pose so as to acquire a projection point of each point cloud on the current point cloud frame on the nearest neighbor image frame", the method further comprises:
and carrying out distortion removal processing on the point cloud on the current point cloud frame.
15. The method according to claim 1, wherein the step of "splicing the point cloud frames after dyeing to obtain a global point cloud map" specifically comprises:
splicing the point cloud frames after dyeing to obtain an initial global point cloud map;
and removing the point clouds belonging to the dynamic objects and the point clouds belonging to the non-ground elements on the initial global point cloud map to obtain a final global point cloud map.
16. A computer device comprising a processor and a storage means adapted to store a plurality of program codes, characterized in that said program codes are adapted to be loaded and run by said processor to perform the method of constructing a lane line map according to any one of claims 1 to 15.
17. A computer-readable storage medium in which a plurality of program codes are stored, characterized in that the program codes are adapted to be loaded and executed by a processor to perform the method of constructing a lane line map according to any one of claims 1 to 15.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211670682.8A CN115661394A (en) | 2022-12-26 | 2022-12-26 | Method for constructing lane line map, computer device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115661394A (en) | 2023-01-31 |
Family
ID=85022225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211670682.8A Pending CN115661394A (en) | 2022-12-26 | 2022-12-26 | Method for constructing lane line map, computer device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115661394A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180181817A1 (en) * | 2015-09-10 | 2018-06-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Vehicular lane line data processing method, apparatus, storage medium, and device |
WO2021129788A1 (en) * | 2019-12-26 | 2021-07-01 | 广州文远知行科技有限公司 | Map rendering method and apparatus, computer device, and storage medium |
CN113593026A (en) * | 2021-07-30 | 2021-11-02 | 深圳元戎启行科技有限公司 | Lane line marking auxiliary map generation method and device and computer equipment |
CN114076956A (en) * | 2021-11-12 | 2022-02-22 | 北京斯年智驾科技有限公司 | Lane line calibration method based on laser radar point cloud assistance |
CN114445593A (en) * | 2022-01-30 | 2022-05-06 | 重庆长安汽车股份有限公司 | Aerial view semantic segmentation label generation method based on multi-frame semantic point cloud splicing |
CN115372987A (en) * | 2022-08-04 | 2022-11-22 | 广州高新兴机器人有限公司 | Lane line extraction method, device, medium and equipment based on laser radar |
CN115407364A (en) * | 2022-09-06 | 2022-11-29 | 安徽蔚来智驾科技有限公司 | Point cloud map processing method, lane marking data acquisition method, equipment and medium |
CN115451977A (en) * | 2022-09-06 | 2022-12-09 | 安徽蔚来智驾科技有限公司 | Method for acquiring lane marking data, computer device and storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115965756A (en) * | 2023-03-13 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | Map construction method, map construction apparatus, driving apparatus, and medium |
CN115965756B (en) * | 2023-03-13 | 2023-06-06 | 安徽蔚来智驾科技有限公司 | Map construction method, device, driving device and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2858008B1 (en) | Target detecting method and system | |
CN109919944B (en) | Combined superpixel graph-cut optimization method for complex scene building change detection | |
US9465976B1 (en) | Feature reduction based on local densities for bundle adjustment of images | |
CN109635816B (en) | Lane line generation method, apparatus, device, and storage medium | |
CN103814306A (en) | Depth measurement quality enhancement | |
CN103017655B (en) | Method and system for extracting floor area of multi-floor building | |
CN111611853A (en) | Sensing information fusion method and device and storage medium | |
CN109816780B (en) | Power transmission line three-dimensional point cloud generation method and device of binocular sequence image | |
KR20140109790A (en) | Device and method for image processing | |
CN115638787B (en) | Digital map generation method, computer readable storage medium and electronic device | |
Józsa et al. | Towards 4D virtual city reconstruction from Lidar point cloud sequences | |
KR102550233B1 (en) | Meghod and apparatus for generating digital building and terrain model, computer-readable storage medium and computer program | |
CN112541938A (en) | Pedestrian speed measuring method, system, medium and computing device | |
KR101092250B1 (en) | Apparatus and method for object segmentation from range image | |
EP2677462B1 (en) | Method and apparatus for segmenting object area | |
CN115661394A (en) | Method for constructing lane line map, computer device and storage medium | |
CN112149471B (en) | Loop detection method and device based on semantic point cloud | |
CN118244281A (en) | Vision and radar fusion target positioning method and device | |
CN116052100A (en) | Image sensing method, computer device, computer-readable storage medium, and vehicle | |
CN107808160B (en) | Three-dimensional building extraction method and device | |
CN114782496A (en) | Object tracking method and device, storage medium and electronic device | |
Miyama | Fast stereo matching with super-pixels using one-way check and score filter | |
CN115439484B (en) | Detection method and device based on 4D point cloud, storage medium and processor | |
JP7417466B2 (en) | Information processing device, information processing method, and information processing program | |
Kühner et al. | Automatic generation of training data for image classification of road scenes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |