CN115407364A - Point cloud map processing method, lane marking data acquisition method, equipment and medium - Google Patents

Point cloud map processing method, lane marking data acquisition method, equipment and medium

Info

Publication number
CN115407364A
CN115407364A (Application No. CN202211099091.XA)
Authority
CN
China
Prior art keywords
dimensional
point cloud
laser radar
map
dimensional laser
Prior art date
Legal status
Pending
Application number
CN202211099091.XA
Other languages
Chinese (zh)
Inventor
何雷
Current Assignee
Anhui Weilai Zhijia Technology Co Ltd
Original Assignee
Anhui Weilai Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Weilai Zhijia Technology Co Ltd filed Critical Anhui Weilai Zhijia Technology Co Ltd
Priority to CN202211099091.XA
Publication of CN115407364A


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/53 Determining attitude
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to the technical field of automatic driving, and in particular provides a point cloud map processing method, a lane marking data acquisition method, equipment and a medium, aiming to improve the accuracy of lane marking data. The point cloud map processing method comprises: establishing a three-dimensional point cloud map of a driving environment according to three-dimensional laser radar point clouds of the driving environment collected while the vehicle is driving; determining unclear three-dimensional laser radar point clouds according to the laser reflection intensity of each three-dimensional laser radar point cloud in the three-dimensional point cloud map; and coloring the unclear three-dimensional laser radar point clouds for visual display according to the color information of two-dimensional images of the driving environment acquired by an image acquisition device on the vehicle. With this method, unclear three-dimensional laser radar point clouds in the driving environment can be visually displayed and the visualization degree of the three-dimensional point cloud map is improved, so that the efficiency and accuracy of marking lane elements can be improved.

Description

Point cloud map processing method, lane marking data acquisition method, equipment and medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to a point cloud map processing method, a lane marking data acquisition method, equipment and a medium.
Background
Automatic driving technology mainly comprises three key technologies: perception, planning and control. Perception technology is mainly used to determine a real-time local map of the driving environment in which the vehicle is located and obstacle information; planning technology is mainly used to plan the driving trajectory of the vehicle according to the real-time local map and the obstacle information; and control technology is mainly used to control the vehicle to drive along the planned trajectory.
In order to improve the accuracy of the real-time local map and the obstacle information and ensure that the vehicle can drive safely, the current practice is mainly to train a multi-sensor perception model based on the BEV (bird's eye view) perspective with lane marking data of lane scenes, and then use the trained perception model to determine the real-time local map and the obstacle information of the driving environment of the vehicle. The accuracy of the lane marking data greatly influences the accuracy of the perception model, and in turn the accuracy of the real-time local map and the obstacle information. Therefore, in order to improve the accuracy of the lane marking data, improve the accuracy of the real-time local map and the obstacle information, and further ensure that the vehicle can drive safely, the lane marking data of lane scenes must be acquired accurately.
Accordingly, there is a need in the art for a new solution to the above problems.
Disclosure of Invention
In order to overcome the above drawbacks, the present invention provides a point cloud map processing method, a lane marking data acquisition method, equipment and a medium, which solve or at least partially solve the technical problem of how to improve the accuracy of lane marking data, so as to improve the accuracy of the real-time local map and the obstacle information, and further ensure that the vehicle can drive safely.
In a first aspect, there is provided a method of processing a three-dimensional point cloud map, the method comprising:
acquiring three-dimensional laser radar point cloud of a driving environment acquired by a laser radar on a vehicle in the driving process of the vehicle; establishing a three-dimensional point cloud map of the driving environment according to the three-dimensional laser radar point cloud; determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds according to the laser reflection intensity of each three-dimensional laser radar point cloud in the three-dimensional point cloud map; and according to the color information of the two-dimensional image of the driving environment, which is acquired by the image acquisition device on the vehicle, coloring the unclear three-dimensional laser radar point cloud so as to perform visual display.
In a technical solution of the above processing method for a three-dimensional point cloud map, "color processing is performed on the unclear three-dimensional lidar point cloud according to color information of a two-dimensional image of the driving environment acquired by an image acquisition device on the vehicle, so as to perform visual display", the method specifically includes: respectively determining nearest neighbor two-dimensional images respectively corresponding to each three-dimensional laser radar point cloud according to the time stamp respectively corresponding to each three-dimensional laser radar point cloud and the time stamp respectively corresponding to each two-dimensional image acquired by the image acquisition device; and respectively coloring unclear three-dimensional laser radar point clouds in each frame of three-dimensional laser radar point clouds according to the nearest neighbor two-dimensional image.
In one technical solution of the above processing method for the three-dimensional point cloud map, the number of the image acquisition devices on the vehicle is plural, and the step of "respectively performing coloring processing on unclear three-dimensional lidar point clouds in each frame of three-dimensional lidar point clouds according to the nearest neighbor two-dimensional image" specifically includes: for each frame of three-dimensional laser radar point cloud, if the nearest neighbor two-dimensional image corresponding to the current frame of three-dimensional laser radar point cloud comprises two-dimensional images acquired by a plurality of image acquisition devices, selecting the two-dimensional image acquired by the last image acquisition device as the final nearest neighbor two-dimensional image according to the preset arrangement sequence of the image acquisition devices; and according to the final nearest neighbor two-dimensional image, performing coloring treatment on the unclear three-dimensional laser radar point cloud in the current frame three-dimensional laser radar point cloud.
In one embodiment of the above processing method for a three-dimensional point cloud map, "coloring the unclear three-dimensional lidar point cloud according to color information of a two-dimensional image of the driving environment acquired by an image acquisition device on the vehicle, so as to perform visual display" further includes: extracting a preset driving environment region of interest (ROI) in the two-dimensional image; and coloring the unclear three-dimensional laser radar point cloud according to the color information of the preset driving environment region of interest.
In one technical scheme of the processing method of the three-dimensional point cloud map, the step of determining a clear three-dimensional lidar point cloud and an unclear three-dimensional lidar point cloud according to the laser reflection intensity of each three-dimensional lidar point cloud in the three-dimensional point cloud map specifically comprises the following steps: respectively determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds in each frame of three-dimensional laser radar point clouds of the three-dimensional point cloud map in a parallel processing mode; correspondingly, the step of coloring the unclear three-dimensional lidar point cloud according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle for visual display further comprises: respectively coloring unclear three-dimensional laser radar point clouds in each frame of three-dimensional laser radar point clouds according to color information of the two-dimensional image of the driving environment, which is acquired by the image acquisition device on the vehicle, in a parallel processing mode;
and/or before the step of determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds according to the laser reflection intensity of each three-dimensional laser radar point cloud in the three-dimensional point cloud map or the step of coloring the unclear three-dimensional laser radar point clouds according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle so as to perform visual display, the method further comprises the following steps: performing ground fitting on the three-dimensional laser radar point cloud in the three-dimensional point cloud map to determine a ground plane of the three-dimensional point cloud map; and removing the three-dimensional laser radar point clouds belonging to the dynamic object in the three-dimensional point cloud map according to the ground plane.
In a second aspect, there is provided a method of obtaining lane marking data, the method comprising:
acquiring three-dimensional laser radar point clouds of a driving environment collected by a laser radar on a vehicle while the vehicle is driving; establishing a three-dimensional point cloud map of the driving environment according to the three-dimensional laser radar point clouds; coloring unclear three-dimensional laser radar point clouds in the three-dimensional point cloud map by adopting the above processing method of the three-dimensional point cloud map; respectively generating a bird's eye view (BEV) and a height map of the driving environment according to the colored three-dimensional point cloud map; and marking the lane elements of the driving environment during the driving of the vehicle according to the bird's eye view and the height map to form four-dimensional lane element marking data containing one-dimensional time information and three-dimensional spatial position information.
In one technical solution of the method for acquiring lane marking data, the step of generating a bird's eye view and a height map of the driving environment according to the colored three-dimensional point cloud map includes: generating a first bird's eye view according to the clear three-dimensional laser radar point clouds in the colored three-dimensional point cloud map, generating a second bird's eye view according to the unclear three-dimensional laser radar point clouds in the colored three-dimensional point cloud map, and generating the height map according to the colored three-dimensional point cloud map.
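As an illustration of generating such views from a colored point cloud, the following is a minimal Python sketch that rasterizes a colored three-dimensional point cloud into a BEV color image and a height map by binning points into a ground grid; the grid resolution, the array layouts and the "keep the highest point per cell" rule are assumptions made for the example and are not prescribed by this disclosure.

```python
import numpy as np

def rasterize_bev_and_height(points_xyz, colors_rgb, resolution=0.1):
    """Rasterize a colored point cloud into a BEV color image and a height map.

    points_xyz : (N, 3) map-frame coordinates.
    colors_rgb : (N, 3) uint8 colors assigned during the coloring step.
    resolution : grid cell size in meters (illustrative value).
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    x_min, y_min = x.min(), y.min()
    cols = np.floor((x - x_min) / resolution).astype(int)
    rows = np.floor((y - y_min) / resolution).astype(int)
    h, w = rows.max() + 1, cols.max() + 1

    bev = np.zeros((h, w, 3), dtype=np.uint8)
    height_map = np.full((h, w), -np.inf, dtype=np.float32)

    # Keep, per cell, the color and height of the highest point
    # (a simple "top view" rule; other aggregation rules are possible).
    order = np.argsort(z)          # ascending, so later writes correspond to higher points
    for i in order:
        r, c = rows[i], cols[i]
        bev[r, c] = colors_rgb[i]
        height_map[r, c] = z[i]

    height_map[height_map == -np.inf] = 0.0   # empty cells
    return bev, height_map
```

A first and a second bird's eye view could be produced by calling the same routine on the clear and on the unclear subsets of the colored point cloud, respectively.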
In one embodiment of the above lane marking data acquisition method, the step of marking the lane elements of the driving environment during the driving of the vehicle according to the bird's eye view and the height map to form the four-dimensional lane element marking data containing one-dimensional time information and three-dimensional spatial position information specifically includes: in response to receiving a marking start instruction, loading and displaying the first bird's eye view and the height map; and in the process of loading the first bird's eye view and the height map, if a label switching instruction is received, loading and displaying the second bird's eye view and the height map.
In a third aspect, a computer device is provided, comprising a processor and a storage means, the storage means being adapted to store a plurality of program codes, the program codes being adapted to be loaded and run by the processor to perform the method for processing a three-dimensional point cloud map or the method for acquiring lane marking data according to any one of the above technical solutions.
In a fourth aspect, a computer-readable storage medium is provided, in which a plurality of program codes are stored, the program codes being adapted to be loaded and run by a processor to perform the method for processing a three-dimensional point cloud map or the method for acquiring lane marking data according to any one of the above technical solutions.
One or more technical schemes of the invention at least have one or more of the following beneficial effects:
according to the technical scheme, the three-dimensional laser radar point cloud of the driving environment acquired by the laser radar on the vehicle in the driving process of the vehicle can be acquired, and then the three-dimensional point cloud map of the driving environment is established according to the three-dimensional laser radar point cloud; determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds according to the laser reflection intensity of each three-dimensional laser radar point cloud in the three-dimensional point cloud map; and finally, according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle, the unclear three-dimensional laser radar point cloud is colored for visual display. Through the embodiment, the unclear three-dimensional laser radar point cloud in the driving environment can be visually displayed, the visualization degree of the three-dimensional point cloud map is improved, and therefore the marking efficiency of marking lane elements can be improved.
In another technical scheme implementing the invention, the three-dimensional laser radar point clouds of the driving environment collected by the laser radar on the vehicle while the vehicle is driving can be acquired, and a three-dimensional point cloud map of the driving environment is established according to the three-dimensional laser radar point clouds; unclear three-dimensional laser radar point clouds in the three-dimensional point cloud map are then colored by adopting the above processing method of the three-dimensional point cloud map, a bird's eye view and a height map of the driving environment are respectively generated according to the colored three-dimensional point cloud map, and finally the lane elements of the driving environment during the driving of the vehicle are marked according to the bird's eye view and the height map to form four-dimensional lane element marking data containing one-dimensional time information and three-dimensional spatial position information.
By the embodiment, the marking data of the lane elements in the four-dimensional space (the four-dimensional space formed by time and the three-dimensional space) can be accurately obtained, and after the marking data is used for carrying out model training on the perception model of the vehicle, the perception capability of the perception model can be greatly improved, so that the real-time local map and the obstacle information of the driving environment can be accurately determined in the driving process of the vehicle, and the vehicle can be ensured to be safely driven.
Drawings
The disclosure of the present invention will become more readily understood with reference to the accompanying drawings. As is readily understood by those skilled in the art: these drawings are for illustrative purposes only and are not intended to constitute a limitation on the scope of the present invention. Wherein:
FIG. 1 is a flow chart illustrating the main steps of a method for processing a three-dimensional point cloud map according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the main steps of a method for creating a three-dimensional point cloud map when a positioning signal of a global positioning system is normal according to an embodiment of the present invention;
FIG. 3 is a flow diagram illustrating the main steps of a method for removing three-dimensional lidar point clouds pertaining to dynamic objects according to one embodiment of the present invention;
FIG. 4 is a flow chart illustrating the main steps of a method for determining a ground plane of a three-dimensional point cloud map according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a polar grid map of a three-dimensional point cloud map according to one embodiment of the invention;
FIG. 6 is a flow chart illustrating the main steps of a method for acquiring lane marking data according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart illustrating the main steps of a method for calibrating pose parameters for coordinate system transformation between a lidar and an image acquisition device according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating main steps of a method for calibrating pose parameters for coordinate system transformation between a laser radar and an image acquisition device according to another embodiment of the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "processor" may include hardware, software, or a combination of both. The processor may be a central processing unit, microprocessor, image processor, digital signal processor, or any other suitable processor. The processor has data and/or signal processing functionality. The processor may be implemented in software, hardware, or a combination thereof. The computer-readable storage medium includes any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random-access memory, and the like. The term "A and/or B" denotes all possible combinations of A and B, such as A alone, B alone, or A and B.
First, an embodiment of a processing method of a three-dimensional point cloud map will be described.
1. First embodiment of processing method of three-dimensional point cloud map
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating main steps of a processing method of a three-dimensional point cloud map according to an embodiment of the invention. As shown in fig. 1, the processing method of the three-dimensional point cloud map in the embodiment of the present invention mainly includes the following steps S101 to S104.
Step S101: and acquiring three-dimensional laser radar point cloud of a driving environment acquired by a laser radar on the vehicle in the driving process of the vehicle.
The three-dimensional lidar point cloud is three-dimensional data determined from the echo signal reflected back to the laser radar after an environmental point in the driving environment receives the electromagnetic wave emitted by the laser radar on the vehicle, wherein the three-dimensional data includes the three-dimensional coordinates of the environmental point in the point cloud coordinate system.
Step S102: and establishing a three-dimensional point cloud map of the driving environment according to the three-dimensional laser radar point cloud.
The laser radar can continuously scan the driving environment in the driving process of the vehicle so as to collect three-dimensional laser radar point clouds, and each frame of three-dimensional laser radar point clouds collected by the laser radar are spliced according to the collection sequence, so that a three-dimensional point cloud map can be obtained.
Step S103: and determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds according to the laser reflection intensity of each three-dimensional laser radar point cloud in the three-dimensional point cloud map.
Specifically, the laser reflection intensity may be compared with a preset intensity threshold; if the laser reflection intensity is smaller than a preset intensity threshold value, the three-dimensional laser radar point cloud is unclear; and if the laser reflection intensity is greater than or equal to the preset intensity threshold value, the three-dimensional laser radar point cloud is clear. Those skilled in the art can flexibly set the specific value of the preset intensity threshold according to actual requirements, and this embodiment is not particularly limited thereto.
In some preferred embodiments, a clear three-dimensional lidar point cloud and an unclear three-dimensional lidar point cloud in each frame of three-dimensional lidar point cloud of the three-dimensional point cloud map can be determined in a parallel processing manner, so as to improve the processing efficiency.
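As a minimal sketch of this intensity-based split, the following Python snippet partitions one frame of points by a preset intensity threshold and shows how the per-frame split could be run in parallel; the threshold value, the array layouts and the use of a process pool are illustrative assumptions rather than part of the disclosed method.

```python
import numpy as np
from multiprocessing import Pool

def split_by_intensity(points, intensity, threshold=0.2):
    """Split one frame of lidar points into 'clear' and 'unclear' subsets.

    points    : (N, 3) xyz coordinates of one frame.
    intensity : (N,) laser reflection intensity per point.
    threshold : preset intensity threshold (illustrative value; the concrete
                value is left to the implementer).
    """
    clear_mask = intensity >= threshold
    return points[clear_mask], points[~clear_mask]

def split_frames_parallel(frames):
    """frames: list of (points, intensity) tuples, one per lidar frame."""
    with Pool() as pool:
        return pool.starmap(split_by_intensity, frames)
```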
Step S104: and according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle, the unclear three-dimensional laser radar point cloud is colored for visual display.
Specifically, the color information of the two-dimensional image may be added to the point cloud information of the unclear three-dimensional lidar point clouds to realize the coloring of the three-dimensional point cloud map. Similar to step S103, in some preferred embodiments, the unclear three-dimensional lidar point clouds in each frame of three-dimensional lidar point clouds may be colored according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle in a parallel processing manner, so as to improve the processing efficiency.
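A common way to attach image color to lidar points is to project each point into the nearest-neighbor camera frame through the calibrated extrinsic and intrinsic parameters and sample the pixel color; the sketch below assumes a pinhole camera model, a 4x4 lidar-to-camera extrinsic matrix and a 3x3 intrinsic matrix, which are illustrative assumptions rather than details fixed by this embodiment.

```python
import numpy as np

def colorize_points(points_lidar, image, T_cam_lidar, K):
    """Assign image colors to (unclear) lidar points by pinhole projection.

    points_lidar : (N, 3) points in the lidar frame.
    image        : (H, W, 3) uint8 camera image (nearest-neighbor frame in time).
    T_cam_lidar  : (4, 4) extrinsic transform from lidar to camera coordinates.
    K            : (3, 3) camera intrinsic matrix.
    Returns (N, 3) colors; points that do not project into the image keep color 0.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])      # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]               # into the camera frame
    colors = np.zeros((n, 3), dtype=np.uint8)

    in_front = pts_cam[:, 2] > 0.1                           # keep points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[valid]
    colors[idx] = image[v[valid], u[valid]]                  # sample pixel colors
    return colors
```

Points that fall outside the image or behind the camera simply keep a default color and remain uncolored.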
Based on the methods in steps S101 to S104, the unclear three-dimensional laser radar point clouds in the driving environment can be visually displayed and the visualization degree of the three-dimensional point cloud map is improved, so that the efficiency of marking lane elements can be improved.
The following further describes the above step S102 and step S104.
In order to ensure the accuracy of the three-dimensional point cloud map, in some embodiments of step S102, for each frame of three-dimensional lidar point cloud when the three-dimensional point cloud map is established, a relative pose (ego-motion) of the current frame of three-dimensional lidar point cloud with respect to the three-dimensional point cloud map established according to the previous frames of three-dimensional lidar point cloud may be determined, and the three-dimensional point cloud map may be updated according to the relative pose and the current frame of three-dimensional lidar point cloud to establish the three-dimensional point cloud map of the driving environment. For the sake of brevity, the "three-dimensional point cloud map established from the previous multi-frame three-dimensional lidar point cloud" is simply referred to as "prior map".
For the first frame of three-dimensional lidar point cloud, no prior map exists, and the first frame of three-dimensional lidar point cloud can be used as a prior map of the second frame of three-dimensional lidar point cloud.
For each non-first-frame three-dimensional laser radar point cloud, after the relative pose between the current frame three-dimensional laser radar point cloud and the prior map is determined, point cloud registration can be performed between the three-dimensional laser radar points of the current frame and the three-dimensional laser radar points in the prior map according to the relative pose, so as to determine which three-dimensional laser radar points in the prior map each point of the current frame matches or corresponds to. Finally, the three-dimensional laser radar points of the current frame are added into the prior map according to the point cloud registration result; that is, the current frame three-dimensional laser radar point cloud and the prior map are spliced to form a new map, and the new map becomes the prior map for the next frame of three-dimensional laser radar point cloud.
Specifically, the relative pose of the current frame three-dimensional lidar point cloud and the prior map can be determined in the present embodiment through the following steps 11 to 16.
Step 11: and screening out a first edge characteristic point and a first plane characteristic point from all the three-dimensional laser radar point clouds contained in the current frame three-dimensional laser radar point cloud. Specifically, the roughness of each three-dimensional lidar point cloud can be calculated, the three-dimensional lidar point cloud with the roughness smaller than a preset roughness threshold value is used as a first plane feature point, and the three-dimensional lidar point cloud with the roughness larger than or equal to the preset roughness threshold value is used as a first edge feature point.
It should be noted that, in this embodiment, a conventional method for calculating the roughness of the point cloud of the three-dimensional lidar in the technical field of three-dimensional point cloud processing may be adopted to calculate the roughness of each three-dimensional lidar point cloud, and this embodiment does not specifically limit the above calculation method. In addition, a person skilled in the art may also flexibly set a specific value of the preset roughness threshold according to actual requirements, and this embodiment is not limited in this way.
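For illustration, the sketch below computes a LOAM-style roughness for each point of an ordered scan and splits the scan into planar and edge feature points with a preset threshold; the specific roughness formula, the neighborhood size and the threshold value are assumptions, since this embodiment only requires that a conventional roughness calculation be used.

```python
import numpy as np

def split_edge_and_planar(scan_points, k=5, roughness_threshold=0.1):
    """Split one ordered lidar scan into planar and edge feature points.

    scan_points         : (N, 3) points ordered along the scan direction.
    k                   : number of neighbors on each side used for the roughness term.
    roughness_threshold : preset roughness threshold (illustrative value).
    """
    n = scan_points.shape[0]
    roughness = np.zeros(n)
    for i in range(k, n - k):
        neighbors = np.vstack([scan_points[i - k:i], scan_points[i + 1:i + 1 + k]])
        diff = neighbors - scan_points[i]
        # Smooth (planar) regions have offsets that cancel out; sharp edges do not.
        roughness[i] = np.linalg.norm(diff.sum(axis=0)) / (
            2 * k * np.linalg.norm(scan_points[i]) + 1e-9)

    valid = slice(k, n - k)
    planar_mask = np.zeros(n, dtype=bool)
    planar_mask[valid] = roughness[valid] < roughness_threshold
    edge_mask = np.zeros(n, dtype=bool)
    edge_mask[valid] = roughness[valid] >= roughness_threshold
    return scan_points[planar_mask], scan_points[edge_mask]
```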
Step 12: and screening a second edge characteristic point and a second plane characteristic point from the three-dimensional laser radar point cloud contained in the prior map, and establishing a KD-Tree (K-dimensional search Tree) of the three-dimensional laser radar point cloud according to the screened second edge characteristic point and the second plane characteristic point.
The method for screening out the second edge feature point and the second plane feature point from the three-dimensional lidar point cloud contained in the prior map is the same as the method for screening out the first edge feature point and the first plane feature point in the step 11, and the description is omitted here. In addition, a conventional KD-Tree establishing method can be adopted in the embodiment, and the KD-Tree of the three-dimensional laser radar point cloud is established according to the second edge characteristic point and the second plane characteristic point which are screened out.
Step 13: and searching a second edge characteristic point nearest to the first edge characteristic point and a second plane characteristic point nearest to the first plane characteristic point according to KD-Tree of the three-dimensional laser radar point cloud.
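A minimal sketch of steps 12 and 13 using SciPy's KD-tree is given below; it assumes the prior-map feature points and the current-frame feature points are plain (N, 3) NumPy arrays, which is an illustrative choice rather than a requirement of this embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_prior_features(prior_edge_pts, prior_plane_pts, cur_edge_pts, cur_plane_pts):
    """Find, for each first edge/plane feature point, its nearest second feature point."""
    edge_tree = cKDTree(prior_edge_pts)       # KD-Tree over second edge feature points
    plane_tree = cKDTree(prior_plane_pts)     # KD-Tree over second plane feature points

    _, edge_idx = edge_tree.query(cur_edge_pts)     # nearest second edge point per first edge point
    _, plane_idx = plane_tree.query(cur_plane_pts)  # nearest second plane point per first plane point
    return prior_edge_pts[edge_idx], prior_plane_pts[plane_idx]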
Step 14: And respectively determining, on the prior map and according to the relative pose, the three-dimensional laser radar point clouds matched with the first edge feature points and the three-dimensional laser radar point clouds matched with the first plane feature points, wherein the three-dimensional laser radar point clouds matched with the first edge feature points are referred to as the prior matching edge point clouds for short, and the three-dimensional laser radar point clouds matched with the first plane feature points are referred to as the prior matching plane point clouds for short.
The prior matching edge point cloud can be represented as f1 (A1, O), where A1 represents the pose of the first edge feature point, and O represents the relative pose of the current frame three-dimensional lidar point cloud and the prior map.
The prior matching plane point cloud can be represented as f2 (A2, O), wherein A2 represents the pose of the first plane feature point, and O represents the relative pose of the current frame three-dimensional laser radar point cloud and the prior map.
Step 15: and establishing an edge characteristic point loss function according to the prior matching edge point cloud and the second edge characteristic point, and establishing a plane characteristic point loss function according to the prior matching plane point cloud and the second plane characteristic point.
The edge feature point loss function is a distance error equation between the prior matching edge point cloud and the line segment formed by the second edge feature points, and the loss value of the edge feature point loss function is the distance between the prior matching edge point cloud and the line segment formed by the second edge feature points;
the plane feature point loss function is a distance error equation between the prior matching plane point cloud and the plane formed by the second plane feature points, and the loss value of the plane feature point loss function is the distance between the prior matching plane point cloud and the plane formed by the second plane feature points.
Step 16: And iteratively optimizing the relative pose O with the goal of making the loss values of the edge feature point loss function and the plane feature point loss function both smaller than a preset value, and taking the relative pose O for which the loss values of the edge feature point loss function and the plane feature point loss function are smaller than the preset value as the final relative pose.
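The following Python sketch shows one way such point-to-line and point-to-plane losses can be minimized over the relative pose; it assumes fixed feature correspondences (two prior-map points per matched line, three per matched plane) and a single nonlinear least-squares solve, whereas the method described above re-associates matches and iterates until the losses fall below the preset value, so this is only a simplified illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def transform(pose6, pts):
    """Apply a 6-dof pose (rotation vector + translation) to (N, 3) points."""
    return Rotation.from_rotvec(pose6[:3]).apply(pts) + pose6[3:]

def point_to_line_dist(p, a, b):
    """Distance from points p to the lines through prior-map edge points a, b."""
    ab = b - a
    cross = np.cross(p - a, ab)
    return np.linalg.norm(cross, axis=1) / (np.linalg.norm(ab, axis=1) + 1e-9)

def point_to_plane_dist(p, a, b, c):
    """Distance from points p to the planes through prior-map plane points a, b, c."""
    n = np.cross(b - a, c - a)
    n = n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-9)
    return np.abs(np.sum((p - a) * n, axis=1))

def refine_relative_pose(pose_init, edge_pts, edge_a, edge_b,
                         plane_pts, plane_a, plane_b, plane_c):
    """Iteratively optimize the relative pose so that the edge and plane losses shrink.

    edge_pts / plane_pts : first edge / plane feature points of the current frame.
    edge_a, edge_b       : prior-map edge points defining the matched line per edge point.
    plane_a, plane_b, plane_c : prior-map plane points defining the matched plane per plane point.
    """
    def residuals(pose6):
        e = point_to_line_dist(transform(pose6, edge_pts), edge_a, edge_b)
        s = point_to_plane_dist(transform(pose6, plane_pts), plane_a, plane_b, plane_c)
        return np.concatenate([e, s])

    return least_squares(residuals, pose_init).x
```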
The above is a specific description of one embodiment of the step S102, and another embodiment of the step S102 is described below.
As the map is continuously updated, the accumulated error also increases, which reduces the accuracy of the three-dimensional point cloud map. In order to overcome the above problem, in another embodiment of step S102, a global satellite positioning device installed on the vehicle may also be used: when the positioning signal of the global satellite positioning device is normal, the three-dimensional point cloud map is established with the help of the global satellite positioning device, so as to improve the accuracy of the three-dimensional point cloud map; and when the positioning signal of the global satellite positioning device is abnormal, the map is updated according to the relative pose between the current frame three-dimensional lidar point cloud and the prior map, so as to establish the three-dimensional point cloud map.
The global satellite positioning device is a device that performs positioning using satellite navigation positioning technology. In the present embodiment, the global satellite positioning device may be a device that performs positioning based on a Global Navigation Satellite System (GNSS), a device that performs positioning based on the Global Positioning System (GPS), a device that performs positioning based on the BeiDou Navigation Satellite System (BDS), a device that performs positioning based on RTK (Real Time Kinematic) positioning technology, or the like. Those skilled in the art can flexibly select different types of global satellite positioning devices according to actual requirements, which is not limited in this embodiment.
The following describes a method for establishing a three-dimensional point cloud map when a positioning signal of a global satellite positioning device is normal and abnormal, respectively.
(I) Method for establishing a three-dimensional point cloud map when the positioning signal is normal.
As shown in fig. 2, the method for building a three-dimensional point cloud map in the present embodiment mainly includes the following steps S1021 to S1023.
Step S1021: determining a pose parameter T for coordinate system transformation between a global satellite positioning device and a laser radar on a vehicle go . In practical application, after the global satellite positioning device and the laser radar are installed on a vehicle, the position and orientation parameter T for converting the coordinate system between the global satellite positioning device and the laser radar is needed go And (6) calibrating. In the embodiment, the position and attitude parameters T of the global satellite positioning device can be directly obtained go Obtaining the position and attitude parameter T of the global satellite positioning device and the laser radar on the vehicle for converting the coordinate system from the calibration result go
Step S1022: based on pose parameter T go And respectively determining radar positioning poses corresponding to each frame of three-dimensional laser radar point clouds of the driving environment acquired by the laser radar according to the global positioning poses acquired by the global satellite positioning device. In the running process of a vehicle, the error ratio of the radar positioning pose actually output by the laser radar is larger due to vehicle shake, and if a three-dimensional point cloud map of a running environment is established by utilizing the radar positioning pose actually output by the laser radar, the accuracy of the three-dimensional point cloud map is influenced.
Based on the above pose parameter T go The determined radar positioning pose is not the radar positioning pose actually output by the laser radar, but is determined through a coordinate systemAnd converting the obtained radar positioning pose matched with the global positioning pose. Because the positioning precision of the global satellite positioning device is higher, the precision of the radar positioning pose matched by the pose parameter is also higher, and therefore, the three-dimensional point cloud map of the driving environment is established by the radar positioning pose matched by the pose parameter, and the accuracy of the three-dimensional point cloud map is greatly improved.
Step S1023: and splicing each frame of three-dimensional laser radar point cloud according to the radar positioning pose so as to establish a three-dimensional point cloud map of the driving environment. Specifically, the relative positions and postures of two adjacent frames of three-dimensional laser radar point clouds can be respectively determined according to the radar positioning positions and postures respectively corresponding to each frame of three-dimensional laser radar point clouds, the two frames of three-dimensional laser radar point clouds are subjected to point cloud matching according to the relative positions and postures, and the two frames of three-dimensional laser radar point clouds are spliced together according to the point cloud matching result.
In some embodiments, in order to improve the accuracy of the three-dimensional point cloud map, each frame of three-dimensional lidar point cloud may be subjected to point cloud distortion removal, and then the three-dimensional lidar point clouds subjected to the point cloud distortion removal may be spliced to establish the three-dimensional point cloud map of the driving environment. It should be noted that, in this embodiment, a conventional point cloud distortion removal method in the technical field of three-dimensional lidar point clouds may be adopted to respectively perform point cloud distortion removal processing on each frame of three-dimensional lidar point cloud, which is not described in detail in this embodiment.
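For illustration, the sketch below only shows the pose-based part of the splicing: each (distortion-removed) frame is transformed into the map frame by its radar positioning pose and the frames are concatenated; the adjacent-frame point cloud matching described in step S1023 is omitted, and the 4x4 pose-matrix representation is an assumption made for the example.

```python
import numpy as np

def stitch_frames(frames, poses):
    """Stitch per-frame lidar point clouds into one map using their radar positioning poses.

    frames : list of (N_i, 3) arrays, each a (distortion-removed) lidar frame in the sensor frame.
    poses  : list of (4, 4) radar positioning poses mapping each frame into the map/world frame.
    """
    map_points = []
    for pts, T in zip(frames, poses):
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
        map_points.append((T @ pts_h.T).T[:, :3])   # transform the frame into the map frame
    return np.vstack(map_points)
```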
Based on the method described in the above steps S1021 to S1023, accurate radar positioning poses can be obtained with the help of the global satellite positioning device, and the three-dimensional laser radar point clouds are then spliced according to the radar positioning poses to establish the three-dimensional point cloud map of the driving environment, which significantly improves the accuracy of the three-dimensional point cloud map.
The following further describes step S1021 and step S1022, respectively.
First, the above step S1021 will be described.
In order to improve the accuracy of the radar positioning pose obtained by performing coordinate system transformation based on the pose parameter, in some embodiments of the above step S1021, the pose parameter T_go for coordinate system transformation between the global satellite positioning device and the laser radar on the vehicle may be determined through the following steps 21 to 23.
Step 21: And determining, according to the pose parameter (the initial pose parameter for coordinate system transformation between the laser radar and the global satellite positioning device on the vehicle) and the global positioning poses obtained by the global satellite positioning device while the vehicle is turning, the radar positioning pose corresponding to each frame of three-dimensional laser radar point cloud collected by the laser radar while the vehicle is turning. The radar positioning pose T_o can be expressed as T_o = T_go * T_g, where T_go represents the pose parameter for coordinate system transformation between the global satellite positioning device and the laser radar on the vehicle, and T_g represents the global positioning pose obtained by the global satellite positioning device.
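Expressed with homogeneous transformation matrices, this composition is simply a matrix product; the sketch below assumes 4x4 pose matrices, which is an illustrative representation rather than one mandated by this embodiment.

```python
import numpy as np

def radar_pose_from_global(T_go, T_g):
    """Compute the radar positioning pose T_o = T_go * T_g from the global pose.

    T_go : (4, 4) pose parameter for coordinate system transformation between the
           global satellite positioning device and the laser radar.
    T_g  : (4, 4) global positioning pose obtained by the global satellite positioning device.
    """
    return T_go @ T_g

# Example: radar poses for a sequence of global poses collected while the vehicle turns.
# radar_poses = [radar_pose_from_global(T_go, T_g) for T_g in global_poses_while_turning]
```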
Compared with driving in a straight line, the position and attitude (pose) of the vehicle change more during turning, so the pose parameter T_go for coordinate system transformation between the global satellite positioning device and the laser radar on the vehicle can be determined more accurately by using the data collected while the vehicle is turning. Similar to step S1022, the radar positioning pose determined based on the above pose parameter T_go is not the radar positioning pose actually output by the laser radar, but the radar positioning pose matched with the global positioning pose obtained through coordinate system conversion.
Step 22: And optimizing the pose parameter T_go by adopting a point cloud registration algorithm based on the ICP (Iterative Closest Point) algorithm, according to the radar positioning poses respectively corresponding to each frame of three-dimensional laser radar point cloud collected by the laser radar while the vehicle is turning.
Specifically, a KD-Tree (K-dimensional search tree) of the three-dimensional laser radar point clouds is established; three-dimensional laser radar points with a nearest-neighbor matching relationship in two adjacent frames of three-dimensional laser radar point clouds are then searched based on the KD-Tree; and finally, a point cloud registration algorithm based on the ICP (Iterative Closest Point) algorithm is adopted to optimize, according to the three-dimensional laser radar points with the nearest-neighbor matching relationship, the pose parameter T_go for coordinate system transformation between the global satellite positioning device and the laser radar on the vehicle. In some preferred embodiments, the point cloud registration algorithm based on the ICP algorithm may be a point cloud registration algorithm based on the point-to-plane method of the ICP algorithm. For simplicity of description, the point-to-plane method is not described in detail in the embodiments of the present invention.
Step 23: And taking the optimized pose parameter T_go as the final pose parameter T_go, i.e., the pose parameter T_go used in step S1022. Based on the methods in steps 21 to 23, the calibration accuracy of the pose parameter T_go can be significantly improved.
The above is a further description of step S1021.
The following proceeds to further description of step S1022.
In order to accurately obtain the respective radar positioning poses corresponding to each frame of three-dimensional lidar point cloud, in some embodiments of the above step S1022, the radar positioning poses may be obtained through the following steps 31 to 32.
Step 31: And respectively performing interpolation on the global positioning poses obtained by the global satellite positioning device according to the timestamp corresponding to each frame of three-dimensional laser radar point cloud, so as to obtain the global positioning pose at each timestamp. The acquisition times of the global satellite positioning device and of the laser radar may not be synchronized, so that at the moment the laser radar collects a new frame of three-dimensional laser radar point cloud, the global satellite positioning device may only acquire a new positioning signal after a short delay; that is, the three-dimensional laser radar point clouds actually collected by the laser radar and the global positioning poses of the positioning signals acquired by the global satellite positioning device are not in one-to-one correspondence. By performing timestamp interpolation on the global positioning poses, the global positioning pose corresponding to each frame of three-dimensional laser radar point cloud actually collected by the laser radar can be obtained accurately and in one-to-one correspondence.
Step 32: And based on the pose parameter T_go for coordinate system transformation between the global satellite positioning device and the laser radar, respectively determining the radar positioning pose corresponding to each frame of three-dimensional laser radar point cloud according to the global positioning pose at each timestamp. Based on the methods in steps 31 to 32, the radar positioning pose corresponding to each frame of three-dimensional laser radar point cloud actually collected by the laser radar can be obtained accurately.
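A minimal sketch of the timestamp interpolation in step 31 is given below, using linear interpolation for the position and spherical linear interpolation (slerp) for the orientation; the quaternion representation and the SciPy-based implementation are assumptions made for illustration, not choices fixed by this embodiment.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_global_poses(gnss_stamps, gnss_positions, gnss_quats, lidar_stamps):
    """Interpolate the global positioning poses at each lidar frame timestamp.

    gnss_stamps    : (M,) increasing timestamps of the global satellite positioning device.
    gnss_positions : (M, 3) positions at those timestamps.
    gnss_quats     : (M, 4) orientations as quaternions (x, y, z, w).
    lidar_stamps   : (K,) lidar frame timestamps, assumed to lie within the GNSS time range.
    Returns (K, 3) interpolated positions and a Rotation object holding K orientations.
    """
    # Linear interpolation for the translation component.
    positions = np.column_stack([
        np.interp(lidar_stamps, gnss_stamps, gnss_positions[:, i]) for i in range(3)
    ])
    # Spherical linear interpolation for the rotation component.
    slerp = Slerp(gnss_stamps, Rotation.from_quat(gnss_quats))
    rotations = slerp(lidar_stamps)
    return positions, rotations
```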
The above is a further description of step S1022.
(II) Method for establishing a three-dimensional point cloud map when the positioning signal is abnormal.
Similar to the foregoing embodiment of step S102, when the positioning signal of the global satellite positioning device is abnormal, the three-dimensional point cloud map of the driving environment may be established in the following manner: for each frame of three-dimensional laser radar point cloud, the relative pose of the current frame three-dimensional laser radar point cloud with respect to the three-dimensional point cloud map established according to the previous multiple frames of three-dimensional laser radar point clouds is determined, and the three-dimensional point cloud map is updated according to the relative pose and the current frame three-dimensional laser radar point cloud, so as to establish the three-dimensional point cloud map of the driving environment. For simplicity of description, the "three-dimensional point cloud map established according to the previous multiple frames of three-dimensional laser radar point clouds" is simply referred to as the "prior map".
As can be seen from step 15 in the foregoing embodiment, when determining the relative pose of the current frame three-dimensional lidar point cloud with respect to the prior map, the relative pose is determined mainly by registering geometric structural features of the point cloud (the distance between the prior matching edge point cloud and the line segment formed by the second edge feature points, and the distance between the prior matching plane point cloud and the plane formed by the second plane feature points). However, when the vehicle drives in a scene with highly repetitive geometric structural features (such as a highway, a tunnel, or a sea-crossing bridge), determining the relative pose by registering the geometric structural features of the point cloud may cause mismatching. In order to overcome the above problem and improve the robustness of the method for determining the relative pose, in the present embodiment the relative pose of the current frame three-dimensional lidar point cloud with respect to the prior map may be determined through the following steps 41 to 42.
Step 41: and predicting the relative pose of the current frame three-dimensional laser radar point cloud relative to a three-dimensional point cloud map established according to the previous multi-frame three-dimensional laser radar point clouds according to the angular speed and the linear speed of the vehicle, and taking the predicted relative pose as a prior relative pose.
Angular velocity refers to the arc of rotation per unit time (e.g., 1 second), and linear velocity refers to the distance traveled per unit time (e.g., 1 second). When the positioning signal of the global satellite positioning device is abnormal, the angular velocity and the linear velocity can be received from the wheel odometer of the vehicle through a CAN (Controller Area Network) bus; when the positioning signal of the global satellite positioning device is normal, the angular velocity and the linear velocity can be obtained by the global satellite positioning device.
The angular velocity and the linear velocity of the vehicle usually do not change suddenly within one or a few scanning periods (a short time) of the laser radar. Therefore, the attitude variation of the current frame three-dimensional laser radar point cloud can be predicted according to the angular velocity of the vehicle in the scanning period(s) corresponding to the previous frame or frames of three-dimensional laser radar point clouds, and the position variation of the current frame three-dimensional laser radar point cloud can likewise be predicted according to the linear velocity of the vehicle in those scanning periods; the prediction of the relative pose of the current frame three-dimensional laser radar point cloud with respect to the prior map can then be obtained from the attitude variation and the position variation.
Step 42: And determining, based on the prior relative pose, the relative pose of the current frame three-dimensional laser radar point cloud with respect to the three-dimensional point cloud map established according to the previous multiple frames of three-dimensional laser radar point clouds. Specifically, the method described in steps 11 to 16 of the foregoing embodiment is first adopted to determine the relative pose of the current frame three-dimensional lidar point cloud with respect to the prior map; for simplicity of description, this relative pose is simply referred to as the initial relative pose. Then, the initial relative pose is compared with the prior relative pose; if the difference between the two is small, the initial relative pose is taken as the final relative pose; and if the difference between the two is large, the prior relative pose is taken as the final relative pose. Based on the methods in steps 41 to 42, the relative pose of the current frame three-dimensional laser radar point cloud with respect to the prior map can be obtained accurately even if the vehicle drives in a scene with highly repetitive geometric structural features.
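A minimal sketch of steps 41 and 42 is given below: the relative pose is predicted from the angular and linear velocity over one scan period, and the registration result is kept only if it stays close to the prediction; the rotation-vector representation and the tolerance values are assumptions made for illustration, not values specified by this embodiment.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def predict_prior_relative_pose(angular_velocity, linear_velocity, dt):
    """Predict the prior relative pose of the current frame over one scan period dt."""
    R = Rotation.from_rotvec(np.asarray(angular_velocity) * dt)   # attitude variation
    t = np.asarray(linear_velocity) * dt                          # position variation
    return R, t

def choose_relative_pose(R_init, t_init, R_prior, t_prior, rot_tol=0.05, trans_tol=0.5):
    """Gate the registration result (initial relative pose) against the prediction.

    rot_tol / trans_tol : tolerances (rad, m) deciding whether the registration result
                          differs little from the prediction; illustrative values only.
    """
    rot_diff = (R_init.inv() * R_prior).magnitude()
    trans_diff = np.linalg.norm(np.asarray(t_init) - np.asarray(t_prior))
    if rot_diff < rot_tol and trans_diff < trans_tol:
        return R_init, t_init       # registration result is consistent with the prediction
    return R_prior, t_prior         # otherwise fall back to the prior (predicted) relative pose
```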
In addition, in other embodiments, in order to further improve the robustness of the method for determining the relative pose, before determining the relative pose of the current frame three-dimensional lidar point cloud with respect to the prior map, dynamic object detection may first be performed on the current frame three-dimensional lidar point cloud, the three-dimensional lidar points belonging to dynamic objects in the current frame are removed according to the detection result, and the relative pose is finally determined from the current frame three-dimensional lidar point cloud after the points belonging to dynamic objects have been removed.
In the embodiment, a three-dimensional laser radar point cloud sample can be used for carrying out model training on a dynamic object detection model established based on a deep convolution neural network, then, the trained dynamic object detection model is adopted for carrying out dynamic object detection on each frame of three-dimensional laser radar point cloud, the dynamic object detection model can output a detection frame of each dynamic object in each frame of three-dimensional laser radar point cloud, the three-dimensional laser radar point cloud in the detection frame is the three-dimensional laser radar point cloud belonging to the dynamic object, and the three-dimensional laser radar point clouds can be removed.
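The following sketch illustrates the removal step under the simplifying assumption that the detector outputs axis-aligned bounding boxes in the lidar frame; real detection frames are typically oriented boxes, so the box format here is an assumption made for illustration.

```python
import numpy as np

def remove_dynamic_points(points, boxes):
    """Remove lidar points that fall inside the detection boxes of dynamic objects.

    points : (N, 3) points of the current frame.
    boxes  : list of axis-aligned boxes (x_min, y_min, z_min, x_max, y_max, z_max)
             output by the dynamic object detection model (simplified representation).
    """
    keep = np.ones(points.shape[0], dtype=bool)
    for x0, y0, z0, x1, y1, z1 in boxes:
        inside = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
                  (points[:, 1] >= y0) & (points[:, 1] <= y1) &
                  (points[:, 2] >= z0) & (points[:, 2] <= z1))
        keep &= ~inside
    return points[keep]
```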
Further, in this embodiment, in order to avoid drift of the radar positioning pose corresponding to the current frame of three-dimensional laser radar point cloud actually output by the laser radar and/or drift of the three-dimensional point cloud map, after the three-dimensional point cloud map is updated according to the current frame of three-dimensional laser radar point cloud and its relative pose with respect to the previous map so as to establish the three-dimensional point cloud map of the driving environment, the radar positioning pose corresponding to the current frame of three-dimensional laser radar point cloud may be optimized through the following step 51.
Step 51: optimizing the radar positioning Pose corresponding to the current frame three-dimensional laser radar point cloud by using a factor graph (Pose Graph) based Pose optimization method, according to preset factor graph (Pose Graph) constraint conditions. It should be noted that Pose optimization based on a factor graph (Pose Graph) is a conventional Pose optimization method in the field of positioning and mapping; this embodiment does not describe the specific principle of the method and only describes the constraint conditions it uses. The preset factor graph (Pose Graph) constraint conditions may include a radar positioning pose constraint condition and a global positioning pose constraint condition, which are explained below.
(I) Radar positioning pose constraint condition.
In the present embodiment, the radar positioning pose constraint condition is as shown in the following expression (1).
$$ r_{odom} = \left( \left( T^{w}_{b_{i-1}} \right)^{-1} \cdot T^{w}_{b_{i}} \right) \ominus T^{b_{i-1}}_{b_{i}} \tag{1} $$

The meaning of each parameter in formula (1) is as follows: $r_{odom}$ represents the constraint error of the radar positioning pose constraint condition; $T^{w}_{b_{i-1}}$ represents the absolute Pose of the (i-1)-th keyframe in the factor graph (Pose Graph); $T^{w}_{b_{i}}$ represents the absolute Pose of the i-th keyframe in the factor graph (Pose Graph); $\left( T^{w}_{b_{i-1}} \right)^{-1}$ represents the inverse of $T^{w}_{b_{i-1}}$; $T^{b_{i-1}}_{b_{i}}$ represents the relative pose of the i-th keyframe with respect to the (i-1)-th keyframe; $w$ denotes the world coordinate system and $b$ denotes the vehicle coordinate system. Absolute pose refers to the pose relative to the world coordinate system.
A keyframe is a frame of three-dimensional lidar point cloud whose radar positioning Pose variation with respect to the previous frame of three-dimensional lidar point cloud is larger than a preset threshold value; that is, when the factor graph (Pose Graph) is built, the frames of three-dimensional lidar point cloud with a large radar positioning Pose variation are kept, and the frames with a small variation are discarded. Because the frames with a small radar positioning Pose variation have little influence on vehicle positioning and map updating while being numerous, discarding them can significantly improve the efficiency of Pose optimization based on the factor graph (Pose Graph).
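A minimal sketch of this keyframe selection, assuming planar (x, y, yaw) radar positioning poses and illustrative threshold values:

```python
import numpy as np

def is_keyframe(curr_pose, prev_pose, trans_thresh=1.0, yaw_thresh=0.1):
    """Keep a frame only when its radar positioning pose has changed enough
    with respect to the previous frame (thresholds are assumptions)."""
    trans_change = np.linalg.norm(curr_pose[:2] - prev_pose[:2])
    yaw_change = abs(curr_pose[2] - prev_pose[2])
    return trans_change > trans_thresh or yaw_change > yaw_thresh
```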
(II) Global positioning pose constraint condition.
In the present embodiment, the global positioning pose constraint condition is as shown in the following expression (2).
$$ r_{rtk} = \left( T^{w}_{b_{j}} \right)^{-1} \cdot T^{w}_{r_{j}} \cdot T_{rb} \tag{2} $$

The meaning of each parameter in formula (2) is as follows: $r_{rtk}$ represents the constraint error of the global positioning pose constraint condition; $T^{w}_{r_{j}}$ represents the global positioning Pose at the timestamp corresponding to the j-th keyframe in the factor graph (Pose Graph); $T_{rb}$ represents the pose parameters of the transformation between the vehicle coordinate system and the global satellite positioning device; $T^{w}_{b_{j}}$ represents the absolute Pose of the j-th keyframe in the factor graph (Pose Graph); $\left( T^{w}_{b_{j}} \right)^{-1}$ represents the inverse of $T^{w}_{b_{j}}$; $r$ denotes the coordinate system of the global satellite positioning device; and $j \in \Omega$, where $\Omega$ denotes the set of keyframes among the three-dimensional lidar point clouds acquired by the laser radar while the positioning signal of the global satellite positioning device is normal. The constraint target of the preset factor graph (Pose Graph) constraint conditions is to minimize the sum of the constraint errors $r_{odom}$ and $r_{rtk}$.
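The following Python sketch evaluates residuals of the shape described by formulas (1) and (2) under a planar (SE(2)) simplification; the actual pose-graph solver (e.g., GTSAM or g2o) is not shown, and the SE(2) simplification, function names and exact residual forms are assumptions made for illustration only.

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 2D transform for a pose (x, y, yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def log_se2(T):
    """Map a relative transform back to an (x, y, yaw) error vector."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def r_odom(T_w_b_prev, T_w_b_curr, T_rel_meas):
    """Radar positioning pose residual, cf. formula (1): compare the relative
    pose implied by the absolute keyframe poses with the measured relative pose."""
    T_rel_est = np.linalg.inv(T_w_b_prev) @ T_w_b_curr
    return log_se2(np.linalg.inv(T_rel_meas) @ T_rel_est)

def r_rtk(T_w_r, T_rb, T_w_b):
    """Global positioning pose residual, cf. formula (2): compare the pose
    derived from the global satellite positioning device with the keyframe's
    absolute pose."""
    return log_se2(np.linalg.inv(T_w_b) @ (T_w_r @ T_rb))
```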
The above is a specific description of step S102 of the embodiment of the method shown in fig. 1.
The following continues with the further description of step S104 of the embodiment of the method shown in fig. 1.
In some embodiments of the above step S104, the unclear three-dimensional lidar point cloud may be colored by the following steps S1041 to S1042.
Step S1041: and respectively determining the nearest neighbor two-dimensional image respectively corresponding to each three-dimensional laser radar point cloud according to the time stamp respectively corresponding to each three-dimensional laser radar point cloud and the time stamp respectively corresponding to each two-dimensional image acquired by the image acquisition device.
Step S1042: and respectively coloring unclear three-dimensional laser radar point clouds in each frame of three-dimensional laser radar point clouds according to the nearest neighbor two-dimensional image. Specifically, firstly, according to a timestamp corresponding to the nearest two-dimensional image, interpolation calculation is carried out on a radar positioning pose obtained by a laser radar, and a radar positioning pose under the timestamp is obtained; then, determining an image positioning pose corresponding to the nearest two-dimensional image based on pose parameters of coordinate system conversion of the laser radar and the image acquisition device and according to the radar positioning pose under the timestamp; determining a projection image point after the unclear three-dimensional laser radar point cloud is projected to the nearest two-dimensional image according to the image positioning pose and the position information of the unclear three-dimensional laser radar point cloud in a radar coordinate system; and finally, performing coloring treatment on the unclear three-dimensional laser radar point cloud according to the color information of the projection image point.
Further, under the condition that the number of the image acquisition devices on the vehicle is multiple and the nearest neighbor two-dimensional image corresponding to the current frame three-dimensional laser radar point cloud comprises two-dimensional images acquired by the image acquisition devices, the two-dimensional image acquired by the last image acquisition device can be selected as a final nearest neighbor two-dimensional image according to a preset arrangement sequence of the image acquisition devices, and then the unclear three-dimensional laser radar point cloud in the current frame three-dimensional laser radar point cloud is colored according to the final nearest neighbor two-dimensional image. Suppose that 6 image capturing devices A, B, C, D, E and F are provided on a vehicle, and the arrangement order of these 6 image capturing devices is ABCDEF. For example, the nearest neighbor two-dimensional image corresponding to the current frame three-dimensional lidar point cloud includes two-dimensional images a, b, c, d, e, and F acquired by six image acquisition apparatuses A, B, C, D, E and F, and since the image acquisition apparatus arranged at the last is F, the two-dimensional image F is taken as the final nearest neighbor two-dimensional image.
For another example, the nearest neighbor two-dimensional image corresponding to the current frame three-dimensional lidar point cloud includes two-dimensional images a, c, and E acquired by three image acquisition apparatuses A, C and E, and since the image acquisition apparatus arranged at the last is E, the two-dimensional image E is taken as the final nearest neighbor two-dimensional image. Based on the methods described in the above steps S1041 to S1042, even when a plurality of image capturing devices are installed on a vehicle, the unclear three-dimensional lidar point cloud can be accurately and reliably colored.
In other embodiments of the above step S104, the unclear three-dimensional lidar point cloud may be colored through the following steps S1043 to S1044.
Step S1043: and extracting a Region of Interest (Region of Interest) of the driving environment preset in the two-dimensional image. The preset Region of Interest (Region of Interest) of the driving environment at least comprises a lane Region in the driving environment, and does not comprise a Region which can interfere with the coloring of the point cloud. The areas where the point cloud coloring may be disturbed may include at least the image area of the vehicle itself and the disturbed area caused by the image exposure. The disturbance area generated by the image exposure may be a disturbance area generated due to a rolling shutter (rolling shutter) effect.
Step S1044: and coloring unclear three-dimensional laser radar point clouds according to the color information of a preset Region of Interest (Region of Interest) of the driving environment. The implementation of this step is similar to the method described in the foregoing step S1042, and is not described herein again.
Now, the first embodiment of the processing method of the three-dimensional point cloud map has been described with reference to the drawings, and the second embodiment of the processing method of the three-dimensional point cloud map is described below.
2. Second embodiment of the method for processing three-dimensional point cloud map
In the method for processing a three-dimensional point cloud map according to the second embodiment of the present invention, the method for processing a three-dimensional point cloud map may include steps S101 to S104 of the method embodiment shown in fig. 1, and the main difference from the method embodiment shown in fig. 1 is that after step S103 is performed and before step S104 is performed, a step of removing a three-dimensional lidar point cloud belonging to a dynamic object is further included. Specifically, referring to fig. 3, fig. 3 illustrates a main flow of a method for removing a three-dimensional lidar point cloud belonging to a dynamic object in an embodiment of the present invention. As shown in fig. 3, the three-dimensional lidar point cloud belonging to the dynamic object may be removed in the embodiment of the present invention by the following steps S201 to S202.
Step S201: and performing ground fitting on the three-dimensional laser radar point cloud in the three-dimensional point cloud map to determine the ground plane of the three-dimensional point cloud map.
Step S202: and removing the three-dimensional laser radar point clouds belonging to the dynamic object in the three-dimensional point cloud map according to the ground plane. Specifically, a three-dimensional lidar point cloud located above the ground plane may be taken as a three-dimensional lidar point cloud belonging to a dynamic object. In some embodiments, after the ground plane is determined, whether the three-dimensional lidar point cloud is a ground point cloud or a non-ground point cloud can be determined according to the ground height of the position corresponding to each three-dimensional lidar point cloud and further according to the point cloud height and the ground height corresponding to each three-dimensional lidar point cloud, and finally all the non-ground point clouds are removed. If the point cloud height is less than or equal to the ground height, the three-dimensional laser radar point cloud is the ground point cloud; otherwise, the three-dimensional lidar point cloud is a non-ground point cloud. Based on the method described in the above steps S201 to S202, the three-dimensional lidar point cloud belonging to the dynamic object in the three-dimensional point cloud map can be removed, so that the interference of the three-dimensional lidar point cloud belonging to the dynamic object on the lane element can be eliminated when the lane element labeling is performed according to the three-dimensional point cloud map, thereby improving the labeling efficiency of the lane element.
The above step S201 will be further explained.
Referring to fig. 4, in some embodiments of the above step S201, the ground plane of the three-dimensional point cloud map may be determined through the following steps S2011 to S2014.
Step S2011: a polar coordinate grid representation method based on a Concentric circle Model (Concentric Zone Model) is adopted, and a polar coordinate grid map of the three-dimensional point cloud map is established by taking a central point of the three-dimensional point cloud map as a pole. A method of representing a polar coordinate grid by a Concentric circle Model (Concentric Zone Model) will be briefly described with reference to a polar coordinate grid diagram shown in fig. 5 as an example.
Specifically, with the central point of the three-dimensional point cloud map as the pole, a plurality of concentric circle regions are formed in order of increasing pole diameter, and the pole diameter range corresponding to each concentric circle region may be the same or different. As shown in fig. 5, the four concentric circle regions Z1, Z2, Z3 and Z4 are formed in order of increasing pole diameter and have different pole diameter ranges. After the concentric circle regions are determined, each concentric circle region is further divided into grids. The polar angle corresponding to each grid (the range of polar angles covered by the grid) is the same within one concentric circle region, and may differ between concentric circle regions. As shown in fig. 5, the grids in the concentric zones Z2 and Z3 have the same and the smallest polar angle, the grids in the concentric zone Z1 have a larger polar angle, and the grids in the concentric zone Z4 have the largest polar angle.
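A minimal sketch of step S2011, assigning map points to the cells of a concentric-zone polar grid; the zone radii and per-zone angular bin counts are illustrative assumptions and do not correspond to fig. 5.

```python
import numpy as np

def polar_grid_cells(points, zone_radii=(10.0, 25.0, 45.0, 70.0),
                     bins_per_zone=(16, 32, 32, 8)):
    """Assign each point (x, y, z) to a (zone, sector) cell of a polar grid
    centered at the map center; returns a dict mapping cell -> point indices."""
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.mod(np.arctan2(points[:, 1], points[:, 0]), 2 * np.pi)
    zone = np.searchsorted(np.asarray(zone_radii), r)
    cells = {}
    for i, (zi, ti) in enumerate(zip(zone, theta)):
        if zi >= len(bins_per_zone):          # beyond the outermost zone
            continue
        sector = int(ti / (2 * np.pi / bins_per_zone[zi]))
        cells.setdefault((int(zi), sector), []).append(i)
    return cells
```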
Step S2012: and performing plane fitting on the three-dimensional laser radar point cloud in the grid aiming at each grid in the polar coordinate grid graph to obtain a plurality of planes in the grid. In this embodiment, a conventional plane fitting method in the field of plane fitting technology may be adopted to perform plane fitting on the three-dimensional lidar point clouds in each grid. For example, the plane fitting may be performed using the RANSAC (RANdom Sample Consensus) algorithm.
Step S2013: and determining whether the grid is a ground grid or not according to the plane normal vector of each plane and the height of each three-dimensional laser radar point cloud in the grid and by taking the constraint condition that the vertical angle deviation of each plane normal vector is smaller than a preset angle deviation threshold value and the height difference of the adjacent three-dimensional laser radar point cloud in the grid is smaller than a preset height difference threshold value.
The vertical angle deviation refers to the angle between a plane normal vector and the vertical direction of the three-dimensional laser radar point cloud; the vertical direction can be represented by the vector [0, 0, 1] (vertically upward) or the vector [0, 0, -1] (vertically downward). After the plane normal vector is determined, the vector angle between the plane normal vector and the vector [0, 0, 1] (or the vector [0, 0, -1]) can be calculated by a vector angle calculation method, and this angle is taken as the vertical angle deviation.
If the vertical angle deviation is smaller than the preset angle deviation threshold value, the plane normal vector is close to vertical, and the plane corresponding to this normal vector is likely to be a ground plane; if the height difference of adjacent three-dimensional laser radar points in the grid is smaller than the preset height difference threshold value, the three-dimensional laser radar points in the grid are relatively flat, and the grid may be a ground grid. Only when both conditions are met simultaneously, that is, the vertical angle deviation is smaller than the preset angle deviation threshold value and the height difference of adjacent three-dimensional laser radar points in the grid is smaller than the preset height difference threshold value, can the grid be judged to be a ground grid.
Step S2014: and performing ground fitting on the three-dimensional laser radar point cloud in the ground grid to determine the ground plane of the three-dimensional point cloud map. In this embodiment, a conventional plane fitting method in the field of plane fitting technology may be used to perform ground fitting on the three-dimensional lidar point cloud in the ground grid to determine the ground plane of the three-dimensional point cloud map, which is not described in detail in this embodiment. Based on the method described in the above steps S2011 to S2014, the ground plane of the three-dimensional point cloud map can be accurately obtained, so as to improve the accuracy of ground point cloud and non-ground point cloud, and finally improve the accuracy of removing the three-dimensional laser radar point cloud belonging to the dynamic object.
Now, a second embodiment of the processing method of the three-dimensional point cloud map has been described with reference to the drawings, and the following description continues to describe an embodiment of the method for acquiring lane marking data.
1. First embodiment of a method for acquiring lane marking data
Referring to fig. 6, fig. 6 is a flow chart illustrating the main steps of the lane marking data acquiring method according to the embodiment of the invention. As shown in fig. 6, the method for acquiring lane marking data in the embodiment of the present invention mainly includes the following steps S301 to S305.
Step S301: and acquiring three-dimensional laser radar point cloud of a driving environment acquired by a laser radar on the vehicle in the driving process of the vehicle. This step is the same as the method described in step S101 in the embodiment of the method shown in fig. 1, and is not repeated here.
Step S302: and establishing a three-dimensional point cloud map of the driving environment according to the three-dimensional laser radar point cloud. This step is the same as the method described in step S102 in the embodiment of the method shown in fig. 1, and is not repeated here.
Step S303: and coloring unclear three-dimensional laser radar point clouds in the three-dimensional point cloud map. In this step, the method of the embodiment of the method shown in fig. 1 is adopted to perform coloring treatment on unclear three-dimensional laser radar point clouds in the three-dimensional point cloud map, and details are not repeated here.
Step S304: and respectively generating a Bird Eye View (Bird Eye View) and a Height Map (Height Map) of the driving environment according to the colored three-dimensional point cloud Map.
The aerial view is an image obtained by projecting three-dimensional laser radar point clouds in a three-dimensional point cloud map to a plane perpendicular to the height direction of the point clouds, and each image point in the image corresponds to each three-dimensional laser radar point cloud in the three-dimensional point cloud map one by one.
Each image point in the height map is respectively in one-to-one correspondence with each three-dimensional laser radar point cloud in the three-dimensional point cloud map, and each image point respectively stores the point cloud height of the corresponding three-dimensional laser radar point cloud.
It should be noted that, a person skilled in the art may use a conventional bird's-eye view map generation method in the technical field of three-dimensional laser radar point cloud processing to generate a bird's-eye view map of a driving environment according to three-dimensional laser radar point cloud in a three-dimensional point cloud map, or may use a conventional height map generation method to generate a height map of a driving environment according to three-dimensional laser radar point cloud in a three-dimensional point cloud map, and the embodiment of the present invention is not limited to the above method.
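As one possible (and deliberately simple) realization of step S304, the following sketch rasterizes the colored point cloud into a bird's-eye-view color image and a height map; the grid resolution, map extent and the choice of keeping the highest point per cell are assumptions made for illustration only.

```python
import numpy as np

def make_bev_and_height_map(points, colors, resolution=0.1,
                            x_range=(-50.0, 50.0), y_range=(-50.0, 50.0)):
    """Rasterize (N, 3) points with (N, 3) uint8 colors into a BEV color image
    and a height map storing the point cloud height of each cell."""
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    bev = np.zeros((h, w, 3), dtype=np.uint8)
    height_map = np.full((h, w), -np.inf, dtype=np.float32)
    cols = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    rows = ((points[:, 1] - y_range[0]) / resolution).astype(int)
    valid = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    for r, c, z, rgb in zip(rows[valid], cols[valid],
                            points[valid, 2], colors[valid]):
        if z > height_map[r, c]:              # keep the highest point per cell
            height_map[r, c] = z
            bev[r, c] = rgb
    return bev, height_map
```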
Step S305: and marking the lane elements of the driving environment in the driving process of the vehicle according to the aerial view and the height map so as to form four-dimensional lane element marking data containing one-dimensional time information and three-dimensional space position information.
The three-dimensional point cloud map is established from the three-dimensional laser radar point clouds acquired during vehicle driving, so it is a dynamic map containing time information; further, the bird's eye view and the height map determined from the three-dimensional point cloud map are dynamic images containing time information. The lane elements at least include traffic signs and/or other objects that can serve as markers on a lane, where the traffic signs at least include lane lines, stop lines, road surface markings (such as left-turn arrows), traffic lights, traffic signboards and the like, and the other objects that can serve as markers at least include pole-like objects.
When lane element labeling is performed based on the bird's-eye view and the height map, lane element labeling is actually performed based on a bird's-eye view sequence and a height map sequence arranged in time order, and therefore four-dimensional lane element labeling data including one-dimensional time information and three-dimensional spatial position information can be formed. The one-dimensional time information can be determined by time information of a bird's-eye view or a height map, the two-dimensional plane position in the three-dimensional space position information can be determined according to the bird's-eye view, and the height information in the three-dimensional space position information can be determined according to the height map.
Based on the method described in the above steps S301 to S305, the labeled data of the lane elements in the four-dimensional space (the four-dimensional space formed by time and three-dimensional space) can be accurately obtained, and after model training is performed on the perception model of the vehicle by using the labeled data, the perception capability of the perception model can be greatly improved, so that the real-time local map and the obstacle information of the driving environment can be accurately determined in the driving process of the vehicle, and the vehicle can be ensured to be safely driven.
The following further describes step S304 and step S305, respectively.
In some embodiments of step S304, a first Bird 'S Eye View (Bird Eye View) may be generated from a clear three-dimensional lidar point cloud in the colored three-dimensional point cloud Map, a second Bird' S Eye View (Bird Eye View) may be generated from an unclear three-dimensional lidar point cloud in the colored three-dimensional point cloud Map, and a Height Map (Height Map) may be generated from the colored three-dimensional point cloud Map. In a first Bird's Eye View (Bird Eye View), each image point corresponds to a clear three-dimensional laser radar point cloud one by one; in a second Bird's Eye View (Bird Eye View), each image point corresponds to an unclear three-dimensional lidar point cloud one by one.
Accordingly, in some embodiments of the above step S305, the first Bird's Eye View (Bird Eye View) and the Height Map (Height Map) may be loaded and displayed in response to receiving an annotation start instruction. In the process of loading the first Bird's Eye View (Bird Eye View) and the Height Map (Height Map), if a label switching instruction is received, the second Bird's Eye View (Bird Eye View) and the Height Map (Height Map) are loaded and displayed.
Based on the above embodiment, the first Bird's-Eye View (Bird Eye View) can be preferentially loaded when the lane element labeling work is started, and when some lane elements cannot be seen from the first Bird's-Eye View, the second Bird's-Eye View (Bird Eye View) is switched to be loaded, so that the lane element labeling work is continuously completed.
So far, a first embodiment of the method for acquiring lane marking data has been described with reference to the accompanying drawings, and a second embodiment of the method for acquiring lane marking data will be described below.
2. Second embodiment of the method for acquiring Lane marking data
In the method for acquiring lane marking data according to the second embodiment of the present invention, the method for acquiring lane marking data may include steps S301 to S305 of the embodiment of the method shown in fig. 6, and the main difference from the embodiment of the method shown in fig. 6 is that a step of calibrating the pose parameters for coordinate system transformation of the laser radar and the image acquisition device is further included after step S305 is executed. Specifically, referring to fig. 7, fig. 7 schematically illustrates a main flow of a method for calibrating pose parameters of coordinate system transformation between a laser radar and an image acquisition device in an embodiment of the present invention. As shown in fig. 7, in the embodiment of the present invention, the pose parameters of the laser radar and the image capturing device for coordinate system transformation may be calibrated through the following steps S401 to S402.
Step S401: and acquiring two-dimensional lane element detection data obtained by performing lane element detection on the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle. In the embodiment, a two-dimensional image sample can be used for carrying out model training on a two-dimensional lane element detection model established based on a deep convolutional neural network, and then the trained two-dimensional lane element detection model is adopted for carrying out lane element detection on the two-dimensional image.
Step S402: and calibrating the pose parameters of the laser radar and the image acquisition device for coordinate system conversion according to the four-dimensional lane element labeling data and the two-dimensional lane element detection data.
Referring to fig. 8, in this embodiment, the pose parameters of the lidar and the image capturing device for coordinate system transformation may be calibrated through the following steps S4021 to S4024.
Step S4021: and projecting the four-dimensional lane element labeling data to a coordinate system of the image acquisition device according to the pose parameters to obtain projection points f (A, O) of the four-dimensional lane element labeling data. Wherein A represents the coordinate of the four-dimensional lane element labeling data in the coordinate system of the laser radar, and O represents the pose parameter.
Step S4022: two image points on the two-dimensional image that are closest to the projected point f (a, O) are determined.
Step S4023: and establishing a distance error equation from the projection point f (A, O) to the line segment cd according to the line segment cd formed by the projection point f (A, O) and the two image points.
The error equation of the distance from the projection point f (a, O) to the line segment cd is shown in the following equation (3).
$$ loss = d(f(A, O), cd) \tag{3} $$

The meaning of each parameter in formula (3) is as follows: $d$ represents the distance calculation function from the projection point $f(A, O)$ to the line segment $cd$, and $loss$ represents the distance calculated by the distance calculation function $d$.
Step S4024: and performing iterative optimization on the pose parameter O in the distance error equation by taking the distance loss smaller than the preset distance threshold as a target, acquiring the pose parameter O when the distance loss is smaller than the preset distance threshold, and taking the pose parameter O as the pose parameter after calibration.
In the embodiment of the invention, a least square algorithm can be adopted to carry out iterative optimization on the pose parameter O in the distance error equation. For example, the pose parameter O in the distance error equation may be iteratively optimized using the Levenberg-Marquardt algorithm in a least squares algorithm. The embodiment of the present invention does not specifically limit the iterative optimization method.
Based on the methods in the steps S4021 to S4024, the accuracy of calibrating the pose parameters for the coordinate system transformation between the laser radar and the image acquisition device can be improved.
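A minimal sketch of steps S4021 to S4024, using the point-to-segment distance of formula (3) and SciPy's Levenberg-Marquardt least-squares solver; the projection model, the reduced (tx, ty, tz, yaw) parameterization of the pose parameter O and all function names are simplifying assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def point_to_segment(p, c, d):
    """Distance from projection point p to image segment cd, cf. formula (3)."""
    cd, cp = d - c, p - c
    t = np.clip(cp @ cd / (cd @ cd + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (c + t * cd))

def project(A, O, K):
    """Project a 3D lane point A (lidar frame) into the image with intrinsics K,
    using a simplified extrinsic O = (tx, ty, tz, yaw)."""
    tx, ty, tz, yaw = O
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    p_cam = R @ A + np.array([tx, ty, tz])
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def calibrate(O_init, lane_points_3d, image_segments, K):
    """Iteratively optimize O so that projected 3D lane points stay close to
    their nearest annotated 2D lane segments (one segment per point here)."""
    def residuals(O):
        return [point_to_segment(project(A, O, K), c, d)
                for A, (c, d) in zip(lane_points_3d, image_segments)]
    return least_squares(residuals, O_init, method="lm").x
```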
Further, in the method for acquiring lane marking data according to the second embodiment of the present invention, the method for acquiring lane marking data may include the aforementioned steps S401 to S402, and may further determine lane element marking data on the two-dimensional image of the driving environment based on the calibrated pose parameter and according to the four-dimensional lane element marking data.
Specifically, coordinate system conversion can be performed on the four-dimensional lane element labeling data based on the calibrated pose parameters, image points corresponding to the four-dimensional lane element labeling data on the two-dimensional image are determined, and the lane element labeling data on the two-dimensional image can be determined according to the image points. By the method, the efficiency and the accuracy of acquiring the lane element labeling data on the two-dimensional image can be greatly improved.
The above is a description of a second embodiment of the lane marking data acquisition method.
It should be noted that, although the foregoing embodiments describe each step in a specific sequence, those skilled in the art will understand that, in order to achieve the effect of the present invention, different steps do not necessarily need to be executed in such a sequence, and they may be executed simultaneously (in parallel) or in other sequences, and these changes are all within the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the flow of the method according to the above-described embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and used to implement the steps of the above-described embodiments of the method when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable storage medium may include: any entity or device capable of carrying said computer program code, media, usb disk, removable hard disk, magnetic diskette, optical disk, computer memory, read-only memory, random access memory, electrical carrier wave signals, telecommunication signals, software distribution media, etc. It should be noted that the computer readable storage medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable storage media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
Furthermore, the invention also provides computer equipment.
In an embodiment of a computer apparatus according to the present invention, the computer apparatus includes a processor and a storage device, the storage device may be configured to store a program for executing the processing method of the three-dimensional point cloud map or the acquiring method of the lane marking data of the above-described method embodiment, and the processor may be configured to execute a program in the storage device, the program including but not limited to a program for executing the processing method of the three-dimensional point cloud map or the acquiring method of the lane marking data of the above-described method embodiment. For convenience of explanation, only the parts related to the embodiments of the present invention are shown, and details of the specific techniques are not disclosed. The computer device may be a control apparatus device formed including various electronic devices.
Further, the invention also provides a computer readable storage medium.
In an embodiment of a computer-readable storage medium according to the present invention, the computer-readable storage medium may be configured to store a program for executing the processing method of the three-dimensional point cloud map or the acquiring method of the lane marking data of the above-described method embodiment, and the program may be loaded and executed by a processor to implement the processing method of the three-dimensional point cloud map or the acquiring method of the lane marking data. For convenience of explanation, only the parts related to the embodiments of the present invention are shown, and details of the specific techniques are not disclosed. The computer readable storage medium may be a storage device formed by including various electronic devices, and optionally, the computer readable storage medium is a non-transitory computer readable storage medium in the embodiment of the present invention.
So far, the technical solution of the present invention has been described with reference to one embodiment shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A processing method of a three-dimensional point cloud map is characterized by comprising the following steps:
acquiring three-dimensional laser radar point cloud of a driving environment acquired by a laser radar on a vehicle in the driving process of the vehicle;
establishing a three-dimensional point cloud map of the driving environment according to the three-dimensional laser radar point cloud;
determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds according to the laser reflection intensity of each three-dimensional laser radar point cloud in the three-dimensional point cloud map;
and according to the color information of the two-dimensional image of the driving environment, which is acquired by the image acquisition device on the vehicle, coloring the unclear three-dimensional laser radar point cloud so as to perform visual display.
2. The method for processing the three-dimensional point cloud map according to claim 1, wherein the step of coloring the unclear three-dimensional lidar point cloud for visual display according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle specifically comprises:
respectively determining nearest neighbor two-dimensional images respectively corresponding to each three-dimensional laser radar point cloud according to the time stamp respectively corresponding to each three-dimensional laser radar point cloud and the time stamp respectively corresponding to each two-dimensional image acquired by the image acquisition device;
and respectively coloring unclear three-dimensional laser radar point clouds in each frame of three-dimensional laser radar point clouds according to the nearest neighbor two-dimensional image.
3. The method for processing the three-dimensional point cloud map according to claim 2, wherein the number of the image capturing devices on the vehicle is plural, and the step of respectively performing the coloring processing on the unclear three-dimensional lidar point cloud in each frame of the three-dimensional lidar point cloud according to the nearest two-dimensional image specifically comprises:
aiming at each frame of three-dimensional laser radar point cloud, if the nearest neighbor two-dimensional image corresponding to the current frame of three-dimensional laser radar point cloud comprises two-dimensional images acquired by a plurality of image acquisition devices, selecting the two-dimensional image acquired by the last image acquisition device as the final nearest neighbor two-dimensional image according to the preset arrangement sequence of the image acquisition devices;
and according to the final nearest neighbor two-dimensional image, performing coloring treatment on the unclear three-dimensional laser radar point cloud in the current frame three-dimensional laser radar point cloud.
4. The method for processing the three-dimensional point cloud map according to claim 1, wherein the step of coloring the unclear three-dimensional lidar point cloud for visual display according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle further comprises:
extracting a preset driving environment Region of Interest (Region of Interest) in the two-dimensional image;
and coloring the unclear three-dimensional laser radar point cloud according to the color information of the preset driving environment Region of interest (Region of interest).
5. The method for processing the three-dimensional point cloud map according to claim 1, wherein the step of determining a clear three-dimensional lidar point cloud and an unclear three-dimensional lidar point cloud according to the laser reflection intensity of each three-dimensional lidar point cloud in the three-dimensional point cloud map specifically comprises the steps of:
respectively determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds in each frame of three-dimensional laser radar point clouds of the three-dimensional point cloud map in a parallel processing mode;
correspondingly, the step of coloring the unclear three-dimensional lidar point cloud according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle for visual display further comprises:
respectively coloring unclear three-dimensional laser radar point clouds in each frame of three-dimensional laser radar point clouds according to color information of the two-dimensional image of the driving environment, which is acquired by the image acquisition device on the vehicle, in a parallel processing mode;
and/or,
before the step of determining clear three-dimensional laser radar point clouds and unclear three-dimensional laser radar point clouds according to the laser reflection intensity of each three-dimensional laser radar point cloud in the three-dimensional point cloud map or the step of coloring the unclear three-dimensional laser radar point clouds according to the color information of the two-dimensional image of the driving environment acquired by the image acquisition device on the vehicle so as to perform visual display, the method further comprises the following steps:
performing ground fitting on the three-dimensional laser radar point cloud in the three-dimensional point cloud map to determine a ground plane of the three-dimensional point cloud map;
and removing the three-dimensional laser radar point clouds belonging to the dynamic object in the three-dimensional point cloud map according to the ground plane.
6. A method of obtaining lane marking data, the method comprising:
acquiring three-dimensional laser radar point cloud of a driving environment acquired by a laser radar on a vehicle in the driving process of the vehicle;
establishing a three-dimensional point cloud map of the driving environment according to the three-dimensional laser radar point cloud;
the processing method of the three-dimensional point cloud map of any one of claims 1 to 5 is adopted to carry out coloring processing on unclear three-dimensional laser radar point clouds in the three-dimensional point cloud map;
respectively generating a Bird Eye View (Bird Eye View) and a Height Map (Height Map) of the driving environment according to the colored three-dimensional point cloud Map;
and marking the lane elements of the driving environment in the driving process of the vehicle according to the Bird Eye View and the Height Map to form four-dimensional lane element marking data containing one-dimensional time information and three-dimensional space position information.
7. The method for acquiring lane marking data according to claim 6, wherein the step of generating a Bird's Eye View (Bird Eye View) and a Height Map (Height Map) of the driving environment from the colored three-dimensional point cloud Map comprises:
and generating a first Bird Eye View (Bird Eye View) according to the clear three-dimensional laser radar point cloud in the colored three-dimensional point cloud Map, generating a second Bird Eye View (Bird Eye View) according to the unclear three-dimensional laser radar point cloud in the colored three-dimensional point cloud Map, and generating the Height Map (Height Map) according to the colored three-dimensional point cloud Map.
8. The method for acquiring lane marking data according to claim 7, wherein the step of marking lane elements of the driving environment during the driving of the vehicle according to the Bird's Eye View (Bird Eye View) and the Height Map (Height Map) to form four-dimensional lane element marking data including one-dimensional time information and three-dimensional spatial position information specifically comprises:
after responding to the received marking starting instruction, loading and displaying the first aerial View (Bird Eye View) and the Height Map (Height Map);
and in the process of loading the first Bird's Eye View (Bird Eye View) and the Height Map (Height Map), if a received label switching command is received, loading and displaying the second Bird's Eye View (Bird Eye View) and the Height Map (Height Map).
9. A computer device comprising a processor and a storage means adapted to store a plurality of program codes, characterized in that said program codes are adapted to be loaded and run by said processor to perform the method of processing a three-dimensional point cloud map of any of claims 1 to 5 or to perform the method of acquiring lane marking data of any of claims 6 to 8.
10. A computer-readable storage medium, in which a plurality of program codes are stored, characterized in that the program codes are adapted to be loaded and run by a processor to perform the method of processing a three-dimensional point cloud map of any of claims 1 to 5 or to perform the method of acquiring lane marking data of any of claims 6 to 8.
CN202211099091.XA 2022-09-06 2022-09-06 Point cloud map processing method, lane marking data acquisition method, equipment and medium Pending CN115407364A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211099091.XA CN115407364A (en) 2022-09-06 2022-09-06 Point cloud map processing method, lane marking data acquisition method, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211099091.XA CN115407364A (en) 2022-09-06 2022-09-06 Point cloud map processing method, lane marking data acquisition method, equipment and medium

Publications (1)

Publication Number Publication Date
CN115407364A true CN115407364A (en) 2022-11-29

Family

ID=84165832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211099091.XA Pending CN115407364A (en) 2022-09-06 2022-09-06 Point cloud map processing method, lane marking data acquisition method, equipment and medium

Country Status (1)

Country Link
CN (1) CN115407364A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150341552A1 (en) * 2014-05-21 2015-11-26 Here Global B.V. Developing a Panoramic Image
AU2019262089A1 (en) * 2018-05-01 2020-07-09 Commonwealth Scientific And Industrial Research Organisation Method and system for use in colourisation of a point cloud
WO2021051344A1 (en) * 2019-09-19 2021-03-25 深圳市大疆创新科技有限公司 Method and apparatus for determining lane lines in high-precision map
CN111882612A (en) * 2020-07-21 2020-11-03 武汉理工大学 Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
US20220091266A1 (en) * 2020-09-18 2022-03-24 Denso International America, Inc. Systems and methods for enhancing outputs of a lidar
CN113252053A (en) * 2021-06-16 2021-08-13 中智行科技有限公司 High-precision map generation method and device and electronic equipment
CN114445547A (en) * 2021-12-15 2022-05-06 北京云测信息技术有限公司 Point cloud coloring method and device, electronic equipment and storage medium
CN115451977A (en) * 2022-09-06 2022-12-09 安徽蔚来智驾科技有限公司 Method for acquiring lane marking data, computer device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661394A (en) * 2022-12-26 2023-01-31 安徽蔚来智驾科技有限公司 Method for constructing lane line map, computer device and storage medium
CN117557999A (en) * 2023-11-20 2024-02-13 镁佳(北京)科技有限公司 Image joint labeling method, computer equipment and medium

Similar Documents

Publication Publication Date Title
CN115451977A (en) Method for acquiring lane marking data, computer device and storage medium
CN110832279B (en) Alignment of data captured by autonomous vehicles to generate high definition maps
CN108765487B (en) Method, device, equipment and computer readable storage medium for reconstructing three-dimensional scene
CN113906414A (en) Distributed processing for generating pose maps for high definition maps for navigating autonomous vehicles
CN115407364A (en) Point cloud map processing method, lane marking data acquisition method, equipment and medium
CN110008851B (en) Method and equipment for detecting lane line
WO2019099802A1 (en) Iterative closest point process based on lidar with integrated motion estimation for high definitions maps
WO2021003453A1 (en) Annotating high definition map data with semantic labels
CN104537692B (en) Based on the key point tenacious tracking method that spatio-temporal context information is auxiliary
EP3942794B1 (en) Depth-guided video inpainting for autonomous driving
CN114637023A (en) System and method for laser depth map sampling
CN111207762B (en) Map generation method and device, computer equipment and storage medium
WO2021003487A1 (en) Training data generation for dynamic objects using high definition map data
CN111339876B (en) Method and device for identifying types of areas in scene
Konrad et al. Localization in digital maps for road course estimation using grid maps
CN112200911A (en) Region overlapping type three-dimensional map construction method and device combined with markers
CN114943952A (en) Method, system, device and medium for obstacle fusion under multi-camera overlapped view field
CN115390088A (en) Point cloud map establishing method, lane marking data acquiring method, equipment and medium
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
CN112580489A (en) Traffic light detection method and device, electronic equipment and storage medium
CN111754388B (en) Picture construction method and vehicle-mounted terminal
CN115511970B (en) Visual positioning method for autonomous parking
EP4148392A1 (en) Method and apparatus for vehicle positioning
CN113593026A (en) Lane line marking auxiliary map generation method and device and computer equipment
CN109901589B (en) Mobile robot control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination