CN109214248B - Method and device for identifying laser point cloud data of unmanned vehicle - Google Patents


Info

Publication number
CN109214248B
CN109214248B (application CN201710539630.XA)
Authority
CN
China
Prior art keywords
laser point
map
grid
cube
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710539630.XA
Other languages
Chinese (zh)
Other versions
CN109214248A (en)
Inventor
燕飞龙
闫鹤
王亮
王博胜
宋适宇
卢维欣
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd filed Critical Apollo Intelligent Technology Beijing Co Ltd
Priority to CN201710539630.XA
Priority to US16/026,338 (granted as US11131999B2)
Publication of CN109214248A
Application granted
Publication of CN109214248B

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0088Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Abstract

Methods and apparatus for identifying laser point cloud data of an unmanned vehicle are disclosed. The unmanned vehicle is provided with a laser radar, and one specific embodiment of the method comprises the following steps: in response to receiving the latest frame of laser point cloud data collected by the laser radar, acquiring the current pose information of the unmanned vehicle in a world coordinate system; according to the current pose information, acquiring from a cache the N × N map blocks of a preset three-dimensional grid map that were pre-loaded into the cache and are centered on the map block corresponding to the current pose information; and, for each laser point data in the received laser point cloud data, performing a laser point data identification operation. This embodiment identifies whether each laser point data is a static laser point, provides a basis for the subsequent analysis and processing of the laser point data by the unmanned vehicle, and can improve the accuracy of obstacle identification from laser point data.

Description

Method and device for identifying laser point cloud data of unmanned vehicle
Technical Field
The present application relates to the field of unmanned vehicles, in particular to the field of obstacle recognition, and more particularly to a method and apparatus for identifying laser point cloud data of an unmanned vehicle.
Background
Currently, most unmanned vehicles are equipped with a laser radar. An unmanned vehicle analyzes and processes the laser point cloud data collected by the laser radar and, on that basis, performs route planning and driving control. The most important part of analyzing and processing the laser point cloud is obstacle identification for each laser point data, that is, determining what kind of physical-world obstacle each laser point data in the laser point cloud data represents. Existing methods for obstacle identification of laser point data are mainly rule-based or based on machine learning.
However, rule-based methods suffer from the difficulty of discovering and applying suitable rules and from the near impossibility of exhaustively enumerating the obstacle categories of the real world. Machine-learning-based methods, in turn, may fail to identify obstacle categories on which they were not trained.
Disclosure of Invention
It is an object of the present application to propose an improved method and apparatus for identifying laser point cloud data of an unmanned vehicle to solve the technical problems mentioned in the background section above.
In a first aspect, an embodiment of the present application provides a method for identifying laser point cloud data of an unmanned vehicle, where the unmanned vehicle is provided with a laser radar, and the method includes: in response to receiving the latest frame of laser point cloud data collected by the laser radar, acquiring the current pose information of the unmanned vehicle in a world coordinate system; according to the current pose information, acquiring from a cache the N × N map blocks of a preset three-dimensional grid map that were pre-loaded into the cache and are centered on the map block corresponding to the current pose information, where N is an odd number, the preset three-dimensional grid map divides the earth surface under the world coordinate system into R rows and C columns of square map blocks, each map block is divided into M × M square map grids, each map grid comprises at least one grid cube, and each grid cube comprises a cube type indicating whether the grid cube is a static cube representing a static obstacle or a dynamic cube representing a dynamic obstacle; and, for each laser point data in the received laser point cloud data, performing the following laser point data identification operation: determining the coordinates of the laser point data in the world coordinate system according to the current pose information; acquiring, from the acquired N × N map blocks, the grid cube corresponding to the coordinates of the laser point data in the world coordinate system; and, in response to the cube type of the acquired grid cube being a static cube, determining the laser point data as static laser point data characterizing a static obstacle.
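The per-point identification operation above can be sketched as follows. This is a hedged illustration, not the application's implementation: all names (`GridCube`, `classify_points`, `cube_lookup`) are hypothetical, and the full pose transform is reduced to a pure translation for brevity.

```python
# Illustrative sketch of the laser point data identification operation.
from dataclasses import dataclass

STATIC, DYNAMIC = "static", "dynamic"

@dataclass
class GridCube:
    cube_type: str  # STATIC or DYNAMIC

def classify_points(points, pose, cube_lookup):
    """points: laser points in the vehicle frame; pose: (tx, ty, tz) offset
    standing in for the full pose transform; cube_lookup: maps a world
    coordinate to its GridCube (drawn from the N x N cached map blocks)."""
    static_points, dynamic_points = [], []
    for p in points:
        # 1. Transform the point into the world coordinate system.
        world = (p[0] + pose[0], p[1] + pose[1], p[2] + pose[2])
        # 2. Fetch the grid cube covering that world coordinate.
        cube = cube_lookup(world)
        # 3. Classify by the cube type stored in the map.
        if cube is not None and cube.cube_type == STATIC:
            static_points.append(p)
        else:
            dynamic_points.append(p)
    return static_points, dynamic_points
```

The lookup itself would index into the cached N × N blocks by integer-dividing the world coordinates by the cube edge length.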
In some embodiments, the laser point data identification operation further comprises: in response to the cube type of the acquired grid cube being a dynamic cube, the laser point data is determined as dynamic laser point data for characterizing a dynamic obstacle.
In some embodiments, before acquiring the current pose information of the unmanned vehicle in the world coordinate system in response to receiving the latest frame of laser point cloud data collected by the laser radar, the method further includes: in response to detecting a start signal of the unmanned vehicle, acquiring pose information of the unmanned vehicle in the world coordinate system, and determining the acquired pose information as start-time pose information; determining the map block corresponding to the start-time pose information in the preset three-dimensional grid map as the initial map block; loading, from a disk into the cache, the N × N map blocks of the preset three-dimensional grid map centered on the initial map block; and creating and executing a first preloading thread, where the first preloading thread loads, from the disk into the cache, the (4N+4) map blocks not yet loaded into the cache among the (N+2) × (N+2) map blocks of the preset three-dimensional grid map centered on the initial map block.
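The startup loading scheme can be sketched as follows; note that the ring around an N × N block contains (N+2)² − N² = 4N+4 blocks, matching the count in the text. `tile_ring`, `start_cache`, and `load_tile` are illustrative stand-ins, not names from the application.

```python
# Illustrative sketch: synchronous N x N load plus background ring preload.
import threading

def tile_ring(center, n):
    """Row/column indices of the n x n block centered on `center`."""
    r0, c0 = center
    h = n // 2
    return {(r0 + dr, c0 + dc) for dr in range(-h, h + 1)
                               for dc in range(-h, h + 1)}

def start_cache(center, n, load_tile, cache):
    inner = tile_ring(center, n)
    for tile in inner:                        # synchronous N x N load
        cache[tile] = load_tile(tile)
    outer = tile_ring(center, n + 2) - inner  # the 4N + 4 boundary blocks
    def preload():                            # first preloading thread
        for tile in outer:
            cache[tile] = load_tile(tile)
    t = threading.Thread(target=preload, daemon=True)
    t.start()
    return t
```

Preloading the ring in a separate thread keeps the disk reads off the critical path of the identification loop.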
In some embodiments, before performing the laser point data identification operation for each laser point data in the received laser point cloud data, the method further includes: determining the pose information of the unmanned vehicle in the sampling period immediately preceding the current time as the previous-period pose information; determining the driving direction of the unmanned vehicle according to the difference between the current pose information and the previous-period pose information; and creating and executing a second preloading thread, where the second preloading thread loads, from the disk into the cache, the (N+2) map blocks of the preset three-dimensional grid map that adjoin, along the determined driving direction, the (N+2) × (N+2) map blocks already loaded into the cache and centered on the map block corresponding to the current pose information.
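The direction-based preload can likewise be sketched. `next_strip` is a hypothetical helper returning the (N+2) blocks bordering the loaded (N+2) × (N+2) block along the driving direction, here reduced to four compass directions for illustration.

```python
# Illustrative sketch of selecting the (N+2) blocks to preload ahead of
# the vehicle. `direction` would be derived from the difference between
# the current and previous-period poses.
def next_strip(center, n, direction):
    r0, c0 = center
    h = (n + 1) // 2                 # half-width of the (n+2) x (n+2) block
    span = range(-h, h + 1)          # n + 2 indices
    step = {"north": (-1, 0), "south": (1, 0),
            "east": (0, 1), "west": (0, -1)}[direction]
    if step[0]:                      # strip one row beyond the block
        return [(r0 + step[0] * (h + 1), c0 + dc) for dc in span]
    return [(r0 + dr, c0 + step[1] * (h + 1)) for dr in span]
```

A second preloading thread would then read exactly these blocks from disk into the cache.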
In some embodiments, each grid cube further comprises the coordinates of its center point in the world coordinate system; and after the current pose information of the unmanned vehicle in the world coordinate system is acquired, the method further includes: determining the current pose information as the initial value of to-be-determined pose information; constructing an objective function as follows: taking the to-be-determined pose information as the independent variable; for each laser point data in the laser point cloud data, performing the following alignment distance determination operation: calculating the coordinates of the laser point data in the world coordinate system according to the to-be-determined pose information; determining, among the center-point coordinates of the grid cubes of the map grids of the map blocks loaded in the cache, the center-point coordinates of the grid cube closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinates of the laser point data in the world coordinate system; and determining the distance between the coordinates of the laser point data in the world coordinate system and its alignment coordinates as the alignment distance of the laser point data; calculating the sum of the alignment distances of the laser point data in the laser point cloud data; and determining the calculated sum of alignment distances as the output of the objective function; determining, by a nearest-neighbor iterative (iterative closest point style) algorithm, the to-be-determined pose information that minimizes the output of the objective function; and updating the current pose information with the determined to-be-determined pose information.
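A minimal sketch of this objective, assuming the pose is reduced to a 2-D translation and substituting a crude shrinking-step search for a full iterative-closest-point solver; `alignment_cost` and `refine_pose` are illustrative names, not the application's.

```python
# Illustrative sketch: sum of per-point alignment distances as an
# objective, minimized over a candidate translation.
import math

def alignment_cost(points, pose, cube_centers):
    tx, ty = pose
    total = 0.0
    for px, py in points:
        wx, wy = px + tx, py + ty       # point in world coordinates
        # nearest cube-center distance = this point's alignment distance
        total += min(math.hypot(wx - cx, wy - cy) for cx, cy in cube_centers)
    return total

def refine_pose(points, init, cube_centers, step=0.5, iters=20):
    """Greedy shrinking-step search standing in for the ICP-style solver."""
    best = init
    for _ in range(iters):
        cands = [best] + [(best[0] + dx, best[1] + dy)
                          for dx in (-step, 0, step)
                          for dy in (-step, 0, step)]
        best = min(cands, key=lambda p: alignment_cost(points, p, cube_centers))
        step *= 0.8
    return best
```

The updated pose returned here would replace the current pose information before the identification operation runs.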
In some embodiments, performing the alignment distance determination operation for each laser point data in the laser point cloud data includes: down-sampling the laser point cloud data to obtain down-sampled laser point cloud data; and performing the alignment distance determination operation on each laser point data in the down-sampled laser point cloud data. Accordingly, calculating the sum of the alignment distances of the laser point data in the laser point cloud data includes: calculating the sum of the alignment distances of the laser point data in the down-sampled laser point cloud data.
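The down-sampling step is commonly realized as voxel-grid filtering, keeping one representative point per voxel; the sketch below and its voxel size are illustrative assumptions, not taken from the application.

```python
# Illustrative voxel-grid down-sampling: one representative point per voxel.
def voxel_downsample(points, voxel=0.2):
    seen = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # integer voxel index
        seen.setdefault(key, p)                  # keep first point per voxel
    return list(seen.values())
```

Down-sampling shrinks the point set before the per-point nearest-cube search, which dominates the cost of evaluating the objective.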
In some embodiments, determining the alignment coordinates of the laser point data includes: determining, among the grid cubes of the map grids of the map blocks loaded in the cache, at least one grid cube whose center-point coordinates lie within a preset distance threshold of the coordinates of the laser point data in the world coordinate system; and determining, among the determined at least one grid cube, the center-point coordinates of the grid cube closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinates of the laser point data in the world coordinate system.
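The thresholded variant can be sketched as follows; `nearest_within` is a hypothetical helper, and a production system would use a spatial index rather than a linear scan over all cached cube centers.

```python
# Illustrative sketch: only cube centers within a preset distance threshold
# are candidates; if none qualifies, the point yields no alignment
# coordinate. The threshold value is illustrative.
import math

def nearest_within(point, centers, threshold=1.0):
    near = [c for c in centers
            if math.dist(point, c) <= threshold]  # prefilter by threshold
    return min(near, key=lambda c: math.dist(point, c)) if near else None
```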
In some embodiments, the preset three-dimensional grid map is obtained as follows: acquiring a sequence of laser point cloud data frames collected by a map collection vehicle, where each frame in the sequence is annotated with the vehicle pose information current at the time of collection; for each frame in the sequence, converting each laser point data in the frame into coordinates in the world coordinate system according to the annotated vehicle pose information corresponding to the frame, to obtain a converted sequence of laser point cloud frames; splicing the frames of the converted sequence into spliced laser point cloud data, generating a three-dimensional map from the spliced laser point cloud data, and presenting the three-dimensional map; in response to receiving a dynamic laser point annotation operation performed by a user on the three-dimensional map, acquiring the dynamic laser points annotated by the user and deleting them from the spliced laser point cloud data to obtain static laser point cloud data; generating the preset three-dimensional grid map as follows: dividing the earth surface under the world coordinate system into R rows and C columns of square map blocks, dividing each map block into M × M square map grids, and dividing each map grid into at least one grid cube whose edge length equals the side length of the map grid, with the cube type of every grid cube initially set to dynamic; and, for each static laser point data in the static laser point cloud data, setting to static the cube type of the grid cube of the preset three-dimensional grid map corresponding to the coordinates of that static laser point data in the world coordinate system.
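The offline map build can be sketched with a flat dictionary standing in for the block/grid/cube hierarchy; `build_grid_map` and the cube edge length are illustrative assumptions, not names from the application.

```python
# Illustrative sketch of the map build: every cube defaults to dynamic,
# and cubes hit by a static laser point are flipped to static.
def build_grid_map(static_points, cube_edge=0.5):
    grid_map = {}  # cube index -> cube type; absent cubes default to dynamic
    for p in static_points:
        idx = tuple(int(c // cube_edge) for c in p)
        grid_map[idx] = "static"
    def cube_type(point):
        return grid_map.get(tuple(int(c // cube_edge) for c in point),
                            "dynamic")
    return cube_type
```

Defaulting absent cubes to dynamic mirrors the text: only cubes supported by static laser evidence are marked static, so unmapped space is treated as potentially moving.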
In a second aspect, an embodiment of the present application provides an apparatus for identifying laser point cloud data of an unmanned vehicle, where the unmanned vehicle is provided with a laser radar, and the apparatus includes: a first acquisition unit configured to acquire, in response to receiving the latest frame of laser point cloud data collected by the laser radar, the current pose information of the unmanned vehicle in a world coordinate system; a second acquisition unit configured to acquire, from a cache according to the current pose information, the N × N map blocks of a preset three-dimensional grid map that were pre-loaded into the cache and are centered on the map block corresponding to the current pose information, where N is an odd number, the preset three-dimensional grid map divides the earth surface under the world coordinate system into R rows and C columns of square map blocks, each map block is divided into M × M square map grids, each map grid comprises at least one grid cube, and each grid cube comprises a cube type indicating whether the grid cube is a static cube representing a static obstacle or a dynamic cube representing a dynamic obstacle; and an identification unit configured to perform, for each laser point data in the received laser point cloud data, the following laser point data identification operation: determining the coordinates of the laser point data in the world coordinate system according to the current pose information; acquiring, from the acquired N × N map blocks, the grid cube corresponding to the coordinates of the laser point data in the world coordinate system; and, in response to the cube type of the acquired grid cube being a static cube, determining the laser point data as static laser point data characterizing a static obstacle.
In some embodiments, the laser point data identification operation further comprises: in response to the cube type of the acquired grid cube being a dynamic cube, the laser point data is determined as dynamic laser point data for characterizing a dynamic obstacle.
In some embodiments, the apparatus further comprises: a first determination unit configured to acquire, in response to detecting a start signal of the unmanned vehicle, pose information of the unmanned vehicle in the world coordinate system, and to determine the acquired pose information as start-time pose information; a second determination unit configured to determine the map block corresponding to the start-time pose information in the preset three-dimensional grid map as the initial map block; a loading unit configured to load, from a disk into the cache, the N × N map blocks of the preset three-dimensional grid map centered on the initial map block; and a first preloading unit configured to create and execute a first preloading thread, where the first preloading thread loads, from the disk into the cache, the (4N+4) map blocks not yet loaded into the cache among the (N+2) × (N+2) map blocks of the preset three-dimensional grid map centered on the initial map block.
In some embodiments, the apparatus further comprises: a third determination unit configured to determine the pose information of the unmanned vehicle in the sampling period immediately preceding the current time as the previous-period pose information; a fourth determination unit configured to determine the driving direction of the unmanned vehicle according to the difference between the current pose information and the previous-period pose information; and a second preloading unit configured to create and execute a second preloading thread, where the second preloading thread loads, from the disk into the cache, the (N+2) map blocks of the preset three-dimensional grid map that adjoin, along the determined driving direction, the (N+2) × (N+2) map blocks already loaded into the cache and centered on the map block corresponding to the current pose information.
In some embodiments, each grid cube further comprises the coordinates of its center point in the world coordinate system; and the apparatus further comprises: a fifth determination unit configured to determine the current pose information as the initial value of to-be-determined pose information; a construction unit configured to construct an objective function as follows: taking the to-be-determined pose information as the independent variable; for each laser point data in the laser point cloud data, performing the following alignment distance determination operation: calculating the coordinates of the laser point data in the world coordinate system according to the to-be-determined pose information; determining, among the center-point coordinates of the grid cubes of the map grids of the map blocks loaded in the cache, the center-point coordinates of the grid cube closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinates of the laser point data in the world coordinate system; and determining the distance between the coordinates of the laser point data in the world coordinate system and its alignment coordinates as the alignment distance of the laser point data; calculating the sum of the alignment distances of the laser point data in the laser point cloud data; and determining the calculated sum of alignment distances as the output of the objective function; a sixth determination unit configured to determine, by a nearest-neighbor iterative (iterative closest point style) algorithm, the to-be-determined pose information that minimizes the output of the objective function; and an updating unit configured to update the current pose information with the determined to-be-determined pose information.
In some embodiments, performing the alignment distance determination operation for each laser point data in the laser point cloud data includes: down-sampling the laser point cloud data to obtain down-sampled laser point cloud data; and performing the alignment distance determination operation on each laser point data in the down-sampled laser point cloud data. Accordingly, calculating the sum of the alignment distances of the laser point data in the laser point cloud data includes: calculating the sum of the alignment distances of the laser point data in the down-sampled laser point cloud data.
In some embodiments, determining the alignment coordinates of the laser point data includes: determining, among the grid cubes of the map grids of the map blocks loaded in the cache, at least one grid cube whose center-point coordinates lie within a preset distance threshold of the coordinates of the laser point data in the world coordinate system; and determining, among the determined at least one grid cube, the center-point coordinates of the grid cube closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinates of the laser point data in the world coordinate system.
In some embodiments, the preset three-dimensional grid map is obtained as follows: acquiring a sequence of laser point cloud data frames collected by a map collection vehicle, where each frame in the sequence is annotated with the vehicle pose information current at the time of collection; for each frame in the sequence, converting each laser point data in the frame into coordinates in the world coordinate system according to the annotated vehicle pose information corresponding to the frame, to obtain a converted sequence of laser point cloud frames; splicing the frames of the converted sequence into spliced laser point cloud data, generating a three-dimensional map from the spliced laser point cloud data, and presenting the three-dimensional map; in response to receiving a dynamic laser point annotation operation performed by a user on the three-dimensional map, acquiring the dynamic laser points annotated by the user and deleting them from the spliced laser point cloud data to obtain static laser point cloud data; generating the preset three-dimensional grid map as follows: dividing the earth surface under the world coordinate system into R rows and C columns of square map blocks, dividing each map block into M × M square map grids, and dividing each map grid into at least one grid cube whose edge length equals the side length of the map grid, with the cube type of every grid cube initially set to dynamic; and, for each static laser point data in the static laser point cloud data, setting to static the cube type of the grid cube of the preset three-dimensional grid map corresponding to the coordinates of that static laser point data in the world coordinate system.
In a third aspect, an embodiment of the present application provides a driving control apparatus including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any implementation manner of the first aspect.
According to the method and apparatus for identifying laser point cloud data of an unmanned vehicle provided by the embodiments of the present application, when the latest frame of laser point cloud data collected by the laser radar is received, the current pose information of the unmanned vehicle in the world coordinate system is acquired; according to the current pose information, the N × N map blocks of the preset three-dimensional grid map that were pre-loaded into the cache and are centered on the map block corresponding to the current pose information are acquired from the cache; and the laser point data identification operation is then performed for each laser point data in the received laser point cloud data. In this way, whether each laser point data is a static laser point can be identified, a basis is provided for the subsequent analysis and processing of the laser point data by the unmanned vehicle, and the accuracy of obstacle identification from laser point data can be improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for identifying laser point cloud data of an unmanned vehicle according to the present application;
FIG. 3 is a schematic illustration of 3 × 3 map tiles centered around a map tile corresponding to current pose information, according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for identifying laser point cloud data of an unmanned vehicle according to the present application;
FIG. 5A is a schematic diagram of the 16 peripheral map tiles, not yet loaded into the cache, surrounding the 3 × 3 map tiles already loaded into the cache according to the present application;
FIG. 5B is a schematic diagram of those 16 peripheral map tiles after they too have been loaded into the cache according to the present application;
FIG. 6A is a schematic diagram of 5 map tiles, not yet loaded into the cache, contiguous with the 5 × 5 map tiles already loaded into the cache, according to the present application;
FIG. 6B is a schematic diagram of those 5 contiguous map tiles after they too have been loaded into the cache according to the present application;
FIG. 7A is a flow diagram of yet another embodiment of a method for identifying laser point cloud data for an unmanned vehicle according to the present application;
FIG. 7B is an exploded flowchart according to one implementation of step 707 in the flowchart shown in FIG. 7A of the present application;
FIG. 7C is an exploded flow diagram of one implementation of an alignment distance determination operation according to the present application;
FIG. 8 is a schematic diagram of an embodiment of an apparatus for identifying laser point cloud data for an unmanned vehicle according to the present application;
FIG. 9 is a schematic configuration diagram of a computer system suitable for implementing the driving control apparatus of the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the present method for identifying laser point cloud data of an unmanned vehicle or an apparatus for identifying laser point cloud data of an unmanned vehicle may be applied.
As shown in fig. 1, the system architecture 100 may include an unmanned vehicle 101.
The driverless vehicle 101 has mounted therein a drive control device 1011, a network 1012, and a laser radar 1013. Network 1012 is used to provide a medium for a communication link between driving control device 1011 and lidar 1013. Network 1012 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A driving control device (also referred to as an in-vehicle brain) 1011 is responsible for the intelligent control of the unmanned vehicle 101. The driving control device 1011 may be a separately provided controller, such as a programmable logic controller (PLC), a single-chip microcomputer, or an industrial controller; a device composed of other electronic components having input/output ports and an operation control function; or a computer device installed with a vehicle driving control application.
It should be noted that, in practice, at least one sensor, such as a camera, a gravity sensor, a wheel speed sensor, etc., may also be installed in the unmanned vehicle 101. In some cases, the unmanned vehicle 101 may further include GNSS (Global Navigation Satellite System) equipment, SINS (Strap-down Inertial Navigation System), and the like.
It should be noted that the method for identifying the laser point cloud data of the unmanned vehicle provided by the embodiment of the present application is generally executed by the driving control device 1011, and accordingly, the apparatus for identifying the laser point cloud data of the unmanned vehicle is generally disposed in the driving control device 1011.
It should be understood that the numbers of driving control devices, networks and lidars in fig. 1 are merely illustrative. There may be any number of driving control devices, networks, and lidars, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for identifying laser point cloud data of an unmanned vehicle in accordance with the present application is shown. The method for identifying laser point cloud data of an unmanned vehicle comprises the following steps:
step 201, in response to receiving the latest frame of laser point cloud data acquired by the laser radar, acquiring current pose information of the unmanned vehicle in a world coordinate system.
In this embodiment, an electronic device (for example, the driving control device shown in fig. 1) on which the method for identifying laser point cloud data of the unmanned vehicle operates may acquire current pose information of the unmanned vehicle in the world coordinate system in real time after receiving a latest frame of laser point cloud data acquired by the laser radar.
Here, one frame of laser point cloud data collected by the laser radar may be the laser point cloud data collected while the laser transmitter provided in the laser radar completes one revolution around the central axis of the laser radar. The laser point cloud data may include at least one laser point datum, each laser point datum may include three-dimensional coordinates, and the three-dimensional coordinates in a laser point datum may be the three-dimensional coordinates of the target scanned by the laser point relative to the body coordinate system of the unmanned vehicle. The body coordinate system of the unmanned vehicle may be a predefined coordinate system. As an example, the center point of the laser radar may be used as the coordinate origin of the body coordinate system of the unmanned vehicle, the direction from the coordinate origin toward the head of the vehicle as the X-axis, the direction from the coordinate origin toward the right side of the vehicle body as the Y-axis, and the upward direction of the vehicle perpendicular to the X- and Y-axes as the Z-axis.
The laser radar outputs one collected frame of laser point cloud data each time the laser transmitter completes a revolution around the central axis. Therefore, the electronic device can acquire the current pose information of the unmanned vehicle in the world coordinate system each time it receives a new frame of laser point cloud data.
Here, the electronic apparatus described above may acquire the current pose information of the unmanned vehicle in the world coordinate system through a hardware structure that can provide a positioning function mounted on the unmanned vehicle. Here, the current pose information of the unmanned vehicle in the world coordinate system may include position information, which may include three-dimensional coordinates (e.g., longitude and latitude and altitude), and attitude information, which may include a pitch angle, a yaw angle, and a roll angle.
As an example, the unmanned vehicle may have a GNSS device and an inertial navigation system installed therein, and the electronic device may acquire three-dimensional coordinates of the unmanned vehicle in a world coordinate system from the GNSS device, may acquire a pitch angle, a yaw angle, and a roll angle of the unmanned vehicle in the world coordinate system from the inertial navigation system, and may use the acquired three-dimensional coordinates as position information in the current pose information and the acquired pitch angle, yaw angle, and roll angle as pose information in the current pose information.
Step 202, according to the current pose information, obtaining from the cache N × N map blocks, which are pre-loaded into the cache and centered on the map block corresponding to the current pose information, in the preset three-dimensional grid map.
In this embodiment, based on the current pose information obtained in step 201, the electronic device (for example, the driving control device shown in fig. 1) may first determine a map partition corresponding to the current pose information in the preset three-dimensional grid map according to the current pose information.
Here, the preset three-dimensional grid map may divide the earth surface, according to a world coordinate system (e.g., the UTM (Universal Transverse Mercator) coordinate system), into R rows and C columns of square map blocks; each map block may be divided into M × M square map grids, and each map grid may include at least one grid cube, where the edge length of a grid cube may be the same as the side length of a map grid. Each grid cube may include a cube type indicating whether the grid cube is a static cube, for representing a static obstacle, or a dynamic cube, for representing a dynamic obstacle. Here, R, C and M are positive integers.
The preset three-dimensional grid map can be stored in a magnetic disk of the electronic equipment in advance, and the electronic equipment is loaded into a cache in advance when needed.
In some optional implementations of the present embodiment, the preset three-dimensional grid map may be a three-dimensional grid map provided by a map provider and meeting the above requirements.
In some optional implementation manners of this embodiment, the preset three-dimensional grid map may also be obtained by adopting the following steps one to six:
step one, acquiring a laser point cloud data frame sequence acquired by a map acquisition vehicle.
Here, each frame of laser point cloud data in the acquired sequence of laser point cloud data frames is marked with corresponding vehicle current pose information. The map collection vehicle can be provided with a laser radar, so that when the map collection vehicle runs through the geographic position of the map area to be generated, the laser point cloud data of the map area to be generated can be collected, and the pose information of the vehicle corresponding to each frame of laser point cloud data in the world coordinate system is recorded in the process of collecting the laser point cloud data.
And secondly, for each frame of laser point cloud data in the laser point cloud data frame sequence, converting each laser point data in the frame of laser point cloud data into coordinates in the world coordinate system according to the marked current pose information of the vehicle corresponding to the frame of laser point cloud data to obtain the converted laser point cloud frame sequence.
And thirdly, splicing each frame of laser point cloud data in the converted laser point cloud data frame sequence to obtain spliced laser point cloud data, generating a three-dimensional map according to the spliced laser point cloud data, and presenting the three-dimensional map.
And step four, responding to the received dynamic laser point marking operation of the user on the three-dimensional map, acquiring the dynamic laser point marked by the user, and deleting the acquired dynamic laser point from the spliced laser point cloud data to obtain static laser point cloud data.
Here, the dynamic laser point labeling operation of the user on the three-dimensional map may be various types of operations that can determine which laser points are dynamic laser points. As an example, the user may use a three-dimensional bounding box (bounding box) to enclose the laser point data, determining the laser point data in the three-dimensional bounding box as dynamic laser point data.
Because the dynamic laser point data in the spliced laser point cloud data is deleted, the static laser point data is left in the spliced laser point cloud data.
Step five, generating a preset three-dimensional grid map according to the following mode: dividing the earth surface into map blocks of R rows and C columns of squares in a world coordinate system, dividing each map block into map grids of M multiplied by M squares, dividing each map grid into at least one grid cube, wherein the edge length of each grid cube is the same as the side length of each map grid, and setting the cube type of each grid cube as a dynamic cube. That is, each grid cube is first defaulted to a dynamic cube.
And step six, setting the cube type of a grid cube corresponding to the coordinate of the static laser point data in a world coordinate system as the static cube in the preset three-dimensional grid map for each static laser point data in the static laser point cloud data.
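Steps five and six above can be sketched briefly in Python. This is a minimal, illustrative sketch rather than the patent's implementation: it assumes a map origin at (0, 0), 512 map grids of 0.125 m per map block, a ground reference of z = 0, and axis conventions chosen for illustration; all names are hypothetical. Keeping only the static cubes in a set makes the default of step five (every cube starts as a dynamic cube) implicit, since any cube absent from the set is dynamic.

```python
def build_static_cube_set(static_points, origin_x=0.0, origin_y=0.0,
                          grids_per_tile=512, grid_side=0.125):
    """Record the (tile row, tile col, grid row, grid col, vertical index)
    of every grid cube hit by a static laser point. Cubes not in the
    returned set are treated as dynamic, mirroring steps five and six."""
    tile_side = grids_per_tile * grid_side          # 64 m with the defaults
    static_cubes = set()
    for x, y, z in static_points:                   # world coordinates
        j = int((x - origin_x) // tile_side)        # tile column
        i = int((origin_y - y) // tile_side)        # tile row (rows grow downward)
        l = int(((x - origin_x) - j * tile_side) // grid_side)   # grid column
        k = int(((origin_y - y) - i * tile_side) // grid_side)   # grid row
        m = int(z // grid_side)                     # cube edge equals grid side
        static_cubes.add((i, j, k, l, m))
    return static_cubes
```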
In this embodiment, since the preset three-dimensional grid map is based on the world coordinate system and the current pose information obtained in step 201 is also based on the world coordinate system, the electronic device may determine which map block contains the coordinates of the unmanned vehicle in the world coordinate system, that is, the map block corresponding to the current pose information in the preset three-dimensional grid map, according to the position information in the current pose information (i.e., the coordinates of the unmanned vehicle in the world coordinate system), the top-left corner coordinate of the preset three-dimensional grid map, and the side length of a map block of the preset three-dimensional grid map.
As an example, the map tiles may be distinguished by (i, j), where i may represent the row number of the map tile in the preset three-dimensional grid map, and j may represent the column number of the map tile in the preset three-dimensional grid map.
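The tile lookup just described, from a world coordinate to the (i, j) of the containing map block, can be sketched as follows. The axis conventions (columns increasing with x, rows increasing downward from the top-left corner) and the default 64 m tile side are illustrative assumptions, not taken from the patent.

```python
def tile_index(x, y, origin_x, origin_y, tile_side=64.0):
    """Return (row, column) of the map tile containing world point (x, y),
    given the top-left corner of the preset three-dimensional grid map."""
    j = int((x - origin_x) // tile_side)   # column grows with x
    i = int((origin_y - y) // tile_side)   # row grows downward from the top edge
    return i, j
```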
Then, the electronic device may obtain, in the cache, N × N map tiles centered on the determined map tile corresponding to the current pose information in the preset three-dimensional grid map that is pre-loaded into the cache. Wherein N is an odd number.
As an example, if the map tile corresponding to the current pose information is represented as (i, j), where i represents its row number and j its column number in the preset three-dimensional grid map, the 3 × 3 map tiles centered on it can be represented as: (i-1, j-1), (i, j-1), (i+1, j-1), (i-1, j), (i, j), (i+1, j), (i-1, j+1), (i, j+1) and (i+1, j+1). Referring to FIG. 3, FIG. 3 shows the 3 × 3 map tiles centered on the map tile corresponding to the current pose information when N is 3, where the map tile indicated by reference numeral 301 is the map tile corresponding to the current pose information.
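Enumerating the N × N tiles around a center tile generalizes the 3 × 3 case described above; a small sketch with hypothetical naming:

```python
def tiles_centered_on(i, j, n=3):
    """All (row, col) ids of the n x n block of map tiles centered on
    tile (i, j); n is expected to be odd, as in the text."""
    assert n % 2 == 1, "n must be odd so the block has a center tile"
    h = n // 2
    return [(i + di, j + dj)
            for di in range(-h, h + 1)
            for dj in range(-h, h + 1)]
```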
Step 203, for each laser point data in the received laser point cloud data, performing a laser point data identification operation.
In the present embodiment, the electronic device (e.g., the driving control device of fig. 1) on which the method for identifying laser point cloud data of an unmanned vehicle operates may perform the following laser point data identification operation for each laser point data in the latest frame of laser point cloud data received in step 201:
First, the coordinates of the laser point data in the world coordinate system can be determined according to the current pose information. The three-dimensional coordinates in the laser point cloud data collected by the laser radar are coordinates of the target scanned by the emitted laser spot relative to the vehicle body coordinate system, while the current pose information is based on the world coordinate system. Therefore, the electronic device can derive a translation matrix from the position information in the current pose information and a rotation matrix from the attitude information in the current pose information, and then convert the three-dimensional coordinates in the laser point data according to the rotation matrix and the translation matrix to obtain the coordinates of the laser point data in the world coordinate system. It should be noted that deriving a translation matrix from position information, deriving a rotation matrix from attitude information, and converting three-dimensional coordinates with a rotation matrix and a translation matrix are well-established techniques that are widely researched and applied at present, and the details are not repeated here.
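As a concrete illustration of this conversion, the sketch below rotates a body-frame point by the attitude angles and translates it by the vehicle position. The Z-Y-X (yaw, then pitch, then roll) rotation order is an assumption made for illustration; the patent does not fix a convention, and all names are hypothetical.

```python
import math

def body_to_world(point, position, attitude):
    """Convert a lidar point from the vehicle body frame to the world frame.

    point:    (x, y, z) in the body coordinate system
    position: (px, py, pz), vehicle position in the world frame (translation)
    attitude: (roll, pitch, yaw) in radians (rotation)"""
    x, y, z = point
    px, py, pz = position
    roll, pitch, yaw = attitude
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll), written out element by element
    wx = cy * cp * x + (cy * sp * sr - sy * cr) * y + (cy * sp * cr + sy * sr) * z + px
    wy = sy * cp * x + (sy * sp * sr + cy * cr) * y + (sy * sp * cr - cy * sr) * z + py
    wz = -sp * x + cp * sr * y + cp * cr * z + pz
    return wx, wy, wz
```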
Then, from the N × N map tiles acquired in step 202, a grid cube corresponding to the coordinates of the laser point data in the world coordinate system may be acquired. Here, the N × N map tiles acquired from the cache in step 202 are centered on the map tile corresponding to the current pose information, and the scanning range of the laser radar is limited by its hardware parameters. The electronic device may therefore set the size of N according to the scanning range of the laser radar, the number M of map grids in each map tile of the preset three-dimensional grid map, and the side length of a map grid, so as to ensure that the acquired N × N map tiles centered on the map tile corresponding to the current pose information cover the scanning range of the laser radar. For example, if each map tile in the preset three-dimensional grid map includes 512 × 512 map grids and the side length of a map grid is 0.125 m, each map tile measures 64 × 64 m; if the scanning range of the laser radar is less than 64 m, N may be 3, and if the scanning range is greater than 64 m and less than 128 m, N may be 5.
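The choice of N from the scanning range can be made mechanical. Below is a conservative sketch under the example parameters of the text (512 grids of 0.125 m, hence 64 m tiles); the exact rounding rule is an assumption for illustration.

```python
import math

def choose_n(scan_range_m, grids_per_tile=512, grid_side_m=0.125):
    """Smallest odd N such that the N x N block of map tiles extends at
    least scan_range_m beyond the center tile in each direction."""
    tile_side = grids_per_tile * grid_side_m      # 64.0 m with the defaults
    rings = math.ceil(scan_range_m / tile_side)   # full tile rings needed
    return 2 * rings + 1
```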
Since the acquired N × N map tiles centered on the map tile corresponding to the current pose information cover the scanning range of the laser radar, and the laser point data was collected by the laser radar, the grid cube corresponding to the coordinates of the laser point data in the world coordinate system is necessarily within the acquired N × N map tiles. In this way, the electronic device may determine which grid cube the coordinates of the laser point data in the world coordinate system correspond to, according to those coordinates, the top-left corner coordinate of the preset three-dimensional grid map, the side length of a map tile, the number of map grids included in a map tile, and the side length of a map grid. As an example, grid cubes may be distinguished by (i, j, k, l, m), where i and j may represent the row and column of the map tile containing the grid cube in the preset three-dimensional grid map, k and l may represent the row and column of the map grid containing the grid cube within that map tile, and m may represent the vertical index of the grid cube within the column of cubes of that map grid. Here, i, j, k, l and m are positive integers.
After determining the grid cube corresponding to the coordinates of the laser point data in the world coordinate system, the related information of the grid cube corresponding to the coordinates of the laser point data in the world coordinate system, for example, the cube type, may be obtained.
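Putting the lookups together, the (i, j, k, l, m) of the grid cube containing a world point can be computed directly. The origin, ground level and axis conventions below are illustrative assumptions, and the parameter defaults follow the 512-grid, 0.125 m example in the text.

```python
def cube_index(x, y, z, origin_x=0.0, origin_y=0.0, base_z=0.0,
               grids_per_tile=512, grid_side=0.125):
    """(tile row, tile col, grid row, grid col, vertical cube index) of the
    grid cube containing world point (x, y, z)."""
    tile_side = grids_per_tile * grid_side
    j = int((x - origin_x) // tile_side)                      # tile column
    i = int((origin_y - y) // tile_side)                      # tile row
    l = int(((x - origin_x) - j * tile_side) // grid_side)    # grid column
    k = int(((origin_y - y) - i * tile_side) // grid_side)    # grid row
    m = int((z - base_z) // grid_side)                        # cube edge = grid side
    return i, j, k, l, m
```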
Finally, if the cube type of the acquired grid cube is static cube, the laser point data is determined as static laser point data for characterizing a static obstacle. The laser point data is thus identified.
In some optional implementations of the present embodiment, if the cube type of the obtained grid cube is a dynamic cube, the laser point data may be determined as dynamic laser point data for characterizing a dynamic obstacle.
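The classification rule of these two paragraphs reduces to a membership test against the set of static cubes. A sketch, in which the locate_cube callable stands in for the cube lookup against the cached N × N map tiles and all names are hypothetical:

```python
def identify_points(points_world, static_cubes, locate_cube):
    """Label each world-frame laser point 'static' if its grid cube is
    recorded as a static cube, and 'dynamic' otherwise."""
    return ["static" if locate_cube(p) in static_cubes else "dynamic"
            for p in points_world]
```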
Through the operation of step 203, whether each laser point datum in the latest frame of laser point cloud data is static laser point data is identified, which provides a basis for the subsequent further processing of the laser point cloud data and can improve the accuracy of subsequent obstacle identification from the laser point cloud data. For example, once a laser point datum is determined as a static laser point in step 203, it is not subsequently used to identify laser point data representing movable objects such as pedestrians and vehicles, but only laser point data representing immovable objects such as lane lines, lanes, traffic lights and intersections.
According to the method provided by the embodiment of the application, when the latest frame of laser point cloud data collected by the laser radar is received, the current pose information of the unmanned vehicle in a world coordinate system is acquired; according to the current pose information, the N × N map blocks that were pre-loaded into a cache and are centered on the map block corresponding to the current pose information in a preset three-dimensional grid map are acquired from the cache; and the laser point data identification operation is then executed for each laser point datum in the received laser point cloud data. Whether each laser point datum is a static laser point can thus be identified, which provides a basis for the subsequent analysis and processing of the laser point data by the unmanned vehicle and can improve the accuracy of obstacle identification from laser point data.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for identifying laser point cloud data of an unmanned vehicle is shown. The process 400 of the method for identifying laser point cloud data of an unmanned vehicle comprises the steps of:
step 401, in response to detecting a start signal of the unmanned vehicle, acquiring pose information of the unmanned vehicle in a world coordinate system, and determining the acquired pose information as start-time pose information.
In the present embodiment, an electronic device (e.g., a driving control device shown in fig. 1) on which the method for recognizing laser point cloud data of an unmanned vehicle operates may acquire pose information of the unmanned vehicle in a world coordinate system (e.g., UTM world coordinate system) upon detection of a start signal of the unmanned vehicle, and determine the acquired pose information as start-time pose information. Here, the activation signal of the unmanned vehicle may be a preset electric signal, and for example, the activation signal may be a KL15 signal, a KL30 signal, a KL50 signal, a KL61 signal, or the like.
Here, the above-described electronic apparatus may acquire the pose information of the unmanned vehicle in the world coordinate system through a hardware structure that can provide a positioning function mounted on the unmanned vehicle, and since the vehicle pose information acquired here is pose information at the time of vehicle start, the acquired pose information may be determined as start-time pose information.
Here, the pose information of the unmanned vehicle in the world coordinate system may include position information, which may include three-dimensional coordinates (e.g., longitude and latitude and altitude), and attitude information, which may include a pitch angle, a yaw angle, and a roll angle.
Step 402, determining map blocks corresponding to the starting pose information in the preset three-dimensional grid map as initial map blocks.
In this embodiment, the magnetic disk of the electronic device may store a preset three-dimensional grid map. Here, the preset three-dimensional grid map divides the earth surface into R rows and C columns of square map blocks in accordance with the world coordinate system, each map block being divided into M × M square map grids, each map grid including at least one grid cube, each grid cube including a cube type for indicating whether the grid cube is a static cube for characterizing static obstacles or a dynamic cube for characterizing dynamic obstacles.
In this embodiment, the electronic device may first determine which map block contains the coordinates of the unmanned vehicle in the world coordinate system at start-up, that is, the map block corresponding to the start-time pose information in the preset three-dimensional grid map, according to the position information in the start-time pose information (i.e., the coordinates of the unmanned vehicle in the world coordinate system when it is started), the top-left corner coordinate of the preset three-dimensional grid map, and the side length of a map block of the preset three-dimensional grid map. The map block corresponding to the start-time pose information in the preset three-dimensional grid map may then be determined as the initial map block.
Step 403, loading N × N map blocks, which are centered on the initial map block, in the preset three-dimensional grid map from the disk into the cache.
In this embodiment, the electronic device may load, from the disk into the cache, the N × N map tiles centered on the initial map tile in the preset three-dimensional grid map. Since the initial map tile has already been determined, the N × N map tiles centered on it can be determined from it. As an example, if the initial map tile is represented as (i, j), where i represents its row number and j its column number in the preset three-dimensional grid map, the 3 × 3 map tiles centered on the initial map tile can be represented as: (i-1, j-1), (i, j-1), (i+1, j-1), (i-1, j), (i, j), (i+1, j), (i-1, j+1), (i, j+1) and (i+1, j+1).
Step 404, a first preloading thread is created and executed.
In this embodiment, the electronic device may create and execute a new first preloading thread. The first preloading thread loads, from the disk into the cache, the (4N + 4) map blocks that have not yet been loaded into the cache among the (N + 2) × (N + 2) map blocks centered on the initial map block in the preset three-dimensional grid map.
Referring to fig. 5A: when N is 3, the 3 × 3 map tiles centered on the initial map tile (indicated by reference numeral 501 in fig. 5A) have already been loaded into the cache in step 403, and the first preloading thread created and executed in step 404 loads the 16 peripheral map tiles of those 3 × 3 tiles (the dashed portion in fig. 5A). Because this loading task is executed by the newly created thread, it does not block the execution of subsequent operations on the current thread. Referring to fig. 5B, fig. 5B illustrates that, after the first preloading thread finishes executing, the 16 peripheral map tiles of the 3 × 3 map tiles have also been loaded into the cache.
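The first preloading thread can be sketched as a background worker that loads the ring of 4N + 4 tiles around the already-loaded N × N block. The dict cache and the load_tile callable are stand-ins for the real cache and disk read; all names are hypothetical.

```python
import threading

def ring_tiles(i, j, n):
    """The 4n + 4 tiles forming the outer ring of the (n + 2) x (n + 2)
    block of map tiles centered on tile (i, j)."""
    h = n // 2 + 1
    return [(i + di, j + dj)
            for di in range(-h, h + 1) for dj in range(-h, h + 1)
            if max(abs(di), abs(dj)) == h]

def preload_ring(cache, load_tile, i, j, n=3):
    """Load the surrounding ring on a worker thread so the current thread
    can keep identifying laser points without blocking on disk I/O."""
    def worker():
        for tile in ring_tiles(i, j, n):
            if tile not in cache:
                cache[tile] = load_tile(tile)
    t = threading.Thread(target=worker)
    t.start()
    return t
```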
And 405, responding to the received latest frame of laser point cloud data acquired by the laser radar, and acquiring the current pose information of the unmanned vehicle in a world coordinate system.
And 406, acquiring NxN map blocks which are loaded into the cache in advance and take the map block corresponding to the current pose information as the center in the preset three-dimensional grid map from the cache according to the current pose information.
In this embodiment, the specific operations of step 405 and step 406 are substantially the same as the operations of step 201 and step 202 in the embodiment shown in fig. 2, and are not described again here.
Step 407, determining the vehicle pose information of the unmanned vehicle in the last sampling period of the current time as the pose information of the last period.
In the present embodiment, the electronic device (e.g., the driving control device shown in fig. 1) on which the method for identifying laser point cloud data of an unmanned vehicle operates may store the vehicle pose information for each sampling period (e.g., 1 second) within a preset time period (e.g., 1 hour) before the current time. The electronic device may therefore determine the vehicle pose information of the unmanned vehicle in the sampling period immediately preceding the current time as the previous-period pose information. Here, the vehicle pose information may be pose information based on the world coordinate system.
And step 408, determining the driving direction of the unmanned vehicle according to the difference information between the current pose information and the pose information in the previous period.
In this embodiment, the electronic device may determine the driving direction of the unmanned vehicle according to the difference information between the current pose information of the unmanned vehicle and the previous-period pose information acquired in step 407.
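One simple way to turn the pose difference into a driving direction is to snap the position delta to the dominant axis. The four-way snapping and the direction names below are illustrative assumptions; the patent does not specify how the difference information is computed.

```python
def driving_direction(prev_xy, curr_xy):
    """Coarse driving direction from the position change since the last
    sampling period, snapped to one of four tile-neighbor directions."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    if abs(dx) >= abs(dy):
        return "east" if dx >= 0 else "west"
    return "north" if dy >= 0 else "south"
```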
And step 409, creating and executing a second preloading thread.
In this embodiment, the electronic device may create and execute a second preloading thread. The second preloading thread loads, from the disk into the cache, the (N + 2) map tiles of the preset three-dimensional grid map that are contiguous, along the driving direction determined in step 408, with the (N + 2) × (N + 2) map tiles already loaded into the cache and centered on the map tile corresponding to the current pose information. Referring to fig. 6A: when N is 3, the 5 × 5 map tiles centered on the map tile corresponding to the current pose information (indicated by reference numeral 601 in fig. 6A) have already been loaded into the cache (the solid line portion in fig. 6A), and the driving direction of the unmanned vehicle is determined to be the rightward direction, so a second preloading thread can be created and executed to load the 5 map tiles (the dashed portion in fig. 6A) contiguous, in the rightward direction, with those 5 × 5 tiles. Referring to fig. 6B, fig. 6B illustrates that, after the second preloading thread finishes executing, those 5 contiguous map tiles have also been loaded into the cache.
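The tiles the second preloading thread must fetch, i.e. the N + 2 tiles contiguous with the loaded (N + 2) × (N + 2) block in the driving direction, can be enumerated as follows. The east = +column, south = +row convention is a hypothetical choice matching the earlier sketches.

```python
def edge_tiles_ahead(i, j, n, direction):
    """The n + 2 tiles adjacent, along the driving direction, to the
    (n + 2) x (n + 2) block of tiles centered on tile (i, j)."""
    h = (n + 2) // 2              # half-width of the loaded block
    span = range(-h, h + 1)       # n + 2 offsets across the leading edge
    if direction == "east":
        return [(i + d, j + h + 1) for d in span]
    if direction == "west":
        return [(i + d, j - h - 1) for d in span]
    if direction == "south":
        return [(i + h + 1, j + d) for d in span]
    return [(i - h - 1, j + d) for d in span]   # north
```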
At step 410, for each laser point data in the received laser point cloud data, a laser point data identification operation is performed.
In this embodiment, the specific operation of step 410 is substantially the same as the operation of step 203 in the embodiment shown in fig. 2, and is not described herein again.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the process 400 of the method for identifying laser point cloud data of an unmanned vehicle in this embodiment additionally includes a step of loading, at vehicle start-up, the N×N map tiles of the preset three-dimensional grid map centered on the map tile corresponding to the start-time pose information into the cache, a step of creating and executing a first preloading thread, a step of determining the driving direction of the unmanned vehicle, and a step of creating and executing a second preloading thread. Therefore, according to the scheme described in this embodiment, the required map blocks can be loaded into the cache in advance, so that the processing speed of laser point obstacle identification can be increased, and the safety of the unmanned vehicle is further improved.
With continued reference to fig. 7A, a flow 700 of yet another embodiment of a method for identifying laser point cloud data for an unmanned vehicle is shown. The process 700 of the method for identifying laser point cloud data of an unmanned vehicle comprises the steps of:
step 701, in response to the detection of a start signal of the unmanned vehicle, acquiring pose information of the unmanned vehicle in a world coordinate system, and determining the acquired pose information as start-time pose information.
And step 702, determining map blocks corresponding to the starting pose information in the preset three-dimensional grid map as initial map blocks.
Step 703, loading N × N map blocks, which are centered on the initial map block, in the preset three-dimensional grid map from the disk into the cache.
Step 704, create and execute the first preloaded thread.
Step 705, in response to receiving the latest frame of laser point cloud data acquired by the laser radar, obtaining current pose information of the unmanned vehicle in a world coordinate system.
In this embodiment, the specific operations from step 701 to step 705 are substantially the same as the operations from step 401 to step 405 in the embodiment shown in fig. 4, and are not described again here.
And step 706, determining the current pose information as an initial value of the pose information of the vehicle to be determined.
In step 707, an objective function is constructed.
In the present embodiment, an electronic device (e.g., a driving control device shown in fig. 1) on which the method for recognizing laser point cloud data of an unmanned vehicle operates may construct an objective function as follows from steps 7071 to 7074. Referring to FIG. 7B, FIG. 7B is an exploded flowchart of one implementation of step 707.
And 7071, taking the pose information of the vehicle to be determined as an independent variable.
At step 7072, for each laser point data in the laser point cloud data, the following alignment distance determination operation is performed.
Referring to fig. 7C, fig. 7C shows an exploded flow diagram of one implementation of an alignment distance determination operation, including steps 70721 through 70723:
Step 70721, calculating the coordinates of the laser point data in the world coordinate system according to the to-be-determined vehicle pose information.
Here, a translation matrix can be obtained from the position information in the to-be-determined vehicle pose information, and a rotation matrix can be obtained from the attitude information in the to-be-determined vehicle pose information. Then, the three-dimensional coordinates in the laser point data may be transformed according to the rotation matrix and the translation matrix to obtain the coordinates of the laser point data in the world coordinate system. It should be noted that how to obtain a translation matrix from the position information, how to obtain a rotation matrix from the attitude information, and how to transform the three-dimensional coordinates in the laser point data according to the rotation matrix and the translation matrix are prior art that has been widely researched and applied, and details are not repeated here.
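The pose-to-world transformation described above might be sketched as follows. This is a hedged illustration under assumed conventions: the pose layout (x, y, z, roll, pitch, yaw) and the Z-Y-X (yaw–pitch–roll) rotation order are assumptions, not specified by the patent.

```python
import math

def pose_to_world(point, pose):
    """Transform a lidar-frame point into the world frame.

    pose = (x, y, z, roll, pitch, yaw): the position part gives the
    translation, the attitude part gives the rotation (assumed layout).
    """
    x, y, z, roll, pitch, yaw = pose
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    px, py, pz = point
    # Rotate, then translate: p_world = R @ p + t
    return (
        R[0][0] * px + R[0][1] * py + R[0][2] * pz + x,
        R[1][0] * px + R[1][1] * py + R[1][2] * pz + y,
        R[2][0] * px + R[2][1] * py + R[2][2] * pz + z,
    )
```

With a zero attitude the point is simply translated by the position part, which matches the separation of translation matrix and rotation matrix in the text.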
Step 70722, determining, from among the center point coordinates of the grid cubes of the map grids of the map blocks loaded in the cache, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system.
In some optional implementations of this embodiment, step 70722 may also proceed as follows:
First, among the grid cubes of the map grids of the map blocks loaded in the cache, at least one grid cube is determined whose center point coordinate is at a distance less than or equal to a preset distance threshold from the coordinates of the laser point data in the world coordinate system.
Then, from among the center point coordinates of the determined at least one grid cube, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system is determined as the alignment coordinate of the laser point data in the world coordinate system. In this way, the search range is narrowed and the amount of calculation is reduced.
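The threshold-limited nearest-center search in this optional implementation can be sketched as follows (an illustrative helper; the function name, signature, and brute-force scan are assumptions, not the patent's implementation):

```python
def alignment_coordinate(point, centers, max_dist):
    """Return the grid-cube center nearest to `point`, considering only
    centers within `max_dist` of it; None if no center qualifies.

    centers: iterable of (x, y, z) grid-cube center coordinates loaded
    in the cache.
    """
    best, best_d2 = None, max_dist * max_dist  # compare squared distances
    for c in centers:
        d2 = sum((a - b) ** 2 for a, b in zip(point, c))
        if d2 <= best_d2:
            best, best_d2 = c, d2
    return best
```

Seeding the best squared distance with the threshold folds the "within the preset distance threshold" filter and the "closest" selection into a single pass.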
It should be noted that, in this embodiment, each grid cube of each map grid of each map partition in the preset three-dimensional grid map further includes coordinates of a center point of the grid cube in the world coordinate system, that is, coordinates of a geometric center of the grid cube in the world coordinate system.
Step 70723, determining the distance between the coordinates of the laser point data in the world coordinate system and the alignment coordinates of the laser point data in the world coordinate system as the alignment distance of the laser point data.
Step 7073, the sum of the alignment distances of each laser point data in the laser point cloud data is calculated.
At step 7074, the sum of the calculated alignment distances is determined as the output of the objective function.
In some optional implementations of this embodiment, step 7072 may also be performed as follows:
firstly, down-sampling is carried out on the received laser point cloud data to obtain the down-sampled laser point cloud data.
Then, the above-described alignment distance determination operation may be performed for each laser point data in the down-sampled laser point cloud data.
In this way, in step 7073, the sum of the alignment distances of the respective laser point data in the down-sampled laser point cloud data may be calculated.
Since the amount of data in the received laser point cloud data is large, and the sum of the alignment distances in step 7073 can be calculated without using the full data volume, adopting the above implementation reduces the amount of calculation and saves calculation time without degrading the alignment effect.
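The down-sampling plus summed-alignment-distance objective of steps 7072 to 7074 might be sketched as follows. This is a hedged illustration: the keep-first-point-per-voxel down-sampling strategy, the function names, and the brute-force nearest-center search are all assumptions, not the patent's implementation.

```python
import math

def voxel_downsample(points, voxel):
    """Keep one representative point per voxel cell (here: the first point
    that falls into each cell, an assumed strategy)."""
    seen, kept = set(), []
    for p in points:
        key = tuple(math.floor(c / voxel) for c in p)
        if key not in seen:
            seen.add(key)
            kept.append(p)
    return kept

def objective(points, centers):
    """Sum, over all points, of the distance to the nearest cube center
    (the alignment distance of each laser point)."""
    total = 0.0
    for p in points:
        total += min(math.dist(p, c) for c in centers)
    return total
```

Minimizing `objective` over candidate poses (each pose re-transforming the points into the world frame) corresponds to the search performed in step 708.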
Step 708, determining the to-be-determined pose information that minimizes the output of the objective function by using an iterative closest point algorithm.
In this embodiment, the electronic device may determine the to-be-determined pose information corresponding to the minimum output of the objective function by using an Iterative Closest Point (ICP) algorithm.
And 709, updating the current pose information by using the determined information of the to-be-determined pose.
Because the current pose information of the vehicle acquired in step 705 may have errors, the alignment operations of steps 706 to 709 are performed: the received laser point cloud data is used to update the current pose information of the vehicle to obtain aligned current pose information, so that subsequent calculations based on the aligned current pose information are more accurate.
And step 710, acquiring, from the cache according to the current pose information, the N×N map blocks that are pre-loaded into the cache and centered on the map block corresponding to the current pose information in the preset three-dimensional grid map.
And 711, determining the vehicle pose information of the unmanned vehicle in the last sampling period of the current time as the pose information of the last period.
And 712, determining the driving direction of the unmanned vehicle according to the difference information between the current pose information and the pose information in the previous period.
In step 713, a second preload thread is created and executed.
Step 714, for each laser point data in the received laser point cloud data, a laser point data identification operation is performed.
In this embodiment, the specific operations of steps 710 to 714 are substantially the same as the operations of steps 406 to 410 in the embodiment shown in fig. 4, and are not repeated herein.
As can be seen from fig. 7A, compared with the embodiment corresponding to fig. 4, the process 700 of the method for identifying laser point cloud data of an unmanned vehicle in this embodiment additionally includes steps for aligning the current pose information of the unmanned vehicle. Therefore, the scheme described in this embodiment can further improve the accuracy of laser point obstacle identification.
With further reference to fig. 8, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for identifying laser point cloud data of an unmanned vehicle provided with a lidar. The apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in fig. 8, the apparatus 800 for recognizing laser point cloud data of an unmanned vehicle of the present embodiment includes: a first acquisition unit 801, a second acquisition unit 802, and a recognition unit 803. The first obtaining unit 801 is configured to obtain current pose information of the unmanned vehicle in a world coordinate system in response to receiving latest frame of laser point cloud data acquired by the laser radar; a second obtaining unit 802, configured to obtain, from the cache, N × N map partitions that are pre-loaded into the cache and center on a map partition corresponding to the current pose information in a preset three-dimensional grid map, where N is an odd number, the preset three-dimensional grid map divides the earth surface into R rows and C columns of square map partitions according to the world coordinate system, each map partition is divided into M × M square map grids, each map grid includes at least one grid cube, and each grid cube includes a cube type indicating that the grid cube is a static cube for representing a static obstacle or a dynamic cube for representing a dynamic obstacle; an identification unit 803 configured to perform the following laser point data identification operation for each laser point data in the received laser point cloud data: determining the coordinates of the laser point data in the world coordinate system according to the current pose information; acquiring a grid cube corresponding to the coordinates of the laser point data in the world coordinate system from the acquired N multiplied by N map blocks; in response to the cube type of the acquired grid cube being a static cube, the laser point data is determined as static laser point data for characterizing a static obstacle.
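The nested block/grid/cube indexing described above (R×C square map blocks, each split into M×M map grids, each grid stacked into cubes whose edge equals the grid side) can be illustrated with a small sketch. The block size, M, and cube edge below are assumed example values; the patent does not fix them.

```python
import math

BLOCK = 64.0          # side length of one map block in meters (assumed value)
M = 16                # map grids per block side (assumed value)
GRID = BLOCK / M      # side length of one map grid
CUBE = GRID           # cube edge equals the grid side length, per the patent

def locate_cube(x, y, z):
    """Map a world coordinate to ((block row, block col),
    (grid row, grid col), cube layer)."""
    block = (math.floor(y / BLOCK), math.floor(x / BLOCK))
    grid = (math.floor((y % BLOCK) / GRID), math.floor((x % BLOCK) / GRID))
    layer = math.floor(z / CUBE)
    return block, grid, layer
```

A lookup table keyed by these indices, holding each cube's type, would then support the static/dynamic test the identification unit 803 performs per laser point.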
In this embodiment, specific processes of the first obtaining unit 801, the second obtaining unit 802, and the identifying unit 803 of the apparatus 800 for identifying laser point cloud data of an unmanned vehicle and technical effects brought by the processes may respectively refer to the related descriptions of step 201, step 202, and step 203 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of this embodiment, the laser point data identification operation may further include: in response to the cube type of the acquired grid cube being a dynamic cube, the laser point data is determined as dynamic laser point data for characterizing a dynamic obstacle.
In some optional implementations of this embodiment, the apparatus 800 may further include: a first determination unit (not shown) configured to acquire pose information of the unmanned vehicle in the world coordinate system in response to detection of an activation signal of the unmanned vehicle, and determine the acquired pose information as activation-time pose information; a second determination unit (not shown) configured to determine a map segment corresponding to the start-time pose information in the preset three-dimensional grid map as an initial map segment; a loading unit (not shown) configured to load, from a disk, N × N map blocks centered on the initial map block in the preset three-dimensional grid map into the cache; a first preloading unit (not shown) configured to newly create and execute a first preloading thread, where the first preloading thread is configured to load, from a disk, into the cache, (4N +4) map tiles that have not been loaded into the cache, of (N +2) × (N +2) map tiles centered on the initial map tile in the preset three-dimensional mesh map.
In some optional implementations of this embodiment, the apparatus 800 may further include: a third determination unit (not shown) configured to determine vehicle pose information of the above-mentioned unmanned vehicle at a previous sampling period of the current time as previous period pose information; a fourth determination unit (not shown) configured to determine a traveling direction of the unmanned vehicle based on difference information of the current pose information and the previous period pose information; and a second preloading unit (not shown) configured to create and execute a second preloading thread, wherein the second preloading thread is configured to load, from a disk, into the cache, (N +2) map tiles adjacent to the (N +2) × (N +2) map tiles, which have been loaded into the cache and centered on the map tile corresponding to the current pose information, in the determined driving direction in the preset three-dimensional mesh map.
In some optional implementations of this embodiment, the grid cube may further include coordinates of the center point of the grid cube in the world coordinate system; and the apparatus 800 may further include: a fifth determination unit (not shown) configured to determine the current pose information as an initial value of the to-be-determined vehicle pose information; a construction unit (not shown) configured to construct the objective function in the following way: using the to-be-determined vehicle pose information as an independent variable; for each laser point data in the laser point cloud data, performing the following alignment distance determination operation: calculating the coordinates of the laser point data in the world coordinate system according to the to-be-determined vehicle pose information; determining, from among the center point coordinates of the grid cubes of the map grids of the map blocks loaded in the cache, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system; determining the distance between the coordinates of the laser point data in the world coordinate system and the alignment coordinate of the laser point data in the world coordinate system as the alignment distance of the laser point data; calculating the sum of the alignment distances of the laser point data in the laser point cloud data; and determining the sum of the calculated alignment distances as the output of the objective function; a sixth determination unit (not shown) configured to determine the to-be-determined pose information that minimizes the output of the objective function using an iterative closest point algorithm; and an updating unit (not shown) configured to update the current pose information with the determined to-be-determined pose information.
In some optional implementations of the present embodiment, for each laser point data in the laser point cloud data, the following alignment distance determination operations may be performed, and may include: performing down-sampling on the laser point cloud data to obtain down-sampled laser point cloud data; executing the alignment distance determination operation on each laser point data in the down-sampled laser point cloud data; the calculating the sum of the alignment distances of the laser point data in the laser point cloud data may include: and calculating the sum of the alignment distances of each laser point data in the down-sampled laser point cloud data.
In some optional implementations of this embodiment, the determining, from among the center point coordinates of the grid cubes of the map grids of the map blocks loaded in the cache, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system may include: determining, among the grid cubes of the map grids of the map blocks loaded in the cache, at least one grid cube whose center point coordinate is at a distance less than or equal to a preset distance threshold from the coordinates of the laser point data in the world coordinate system; and determining, from among the center point coordinates of the determined at least one grid cube, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system.
In some optional implementations of this embodiment, the preset three-dimensional grid map is obtained by adopting the following steps: acquiring a laser point cloud data frame sequence acquired by a map acquisition vehicle, wherein each frame of laser point cloud data in the laser point cloud data frame sequence is marked with corresponding current pose information of the vehicle; for each frame of laser point cloud data in the laser point cloud data frame sequence, converting each laser point data in the frame of laser point cloud data into coordinates in the world coordinate system according to the marked current pose information of the vehicle corresponding to the frame of laser point cloud data, to obtain the converted laser point cloud frame sequence; splicing each frame of laser point cloud data in the converted laser point cloud data frame sequence to obtain spliced laser point cloud data, generating a three-dimensional map according to the spliced laser point cloud data, and presenting the three-dimensional map; in response to receiving a dynamic laser point marking operation of the user on the three-dimensional map, acquiring the dynamic laser points marked by the user, and deleting the acquired dynamic laser points from the spliced laser point cloud data to obtain static laser point cloud data; generating the preset three-dimensional grid map in the following way: dividing the earth surface into R rows and C columns of square map blocks under the world coordinate system, dividing each map block into M×M square map grids, dividing each map grid into at least one grid cube, wherein the edge length of each grid cube is the same as the side length of each map grid, and setting the cube type of each grid cube as a dynamic cube; and for each static laser point data in the static laser point cloud data, setting, in the preset three-dimensional grid map, the cube type of the grid cube corresponding to the coordinates of the static laser point data in the world coordinate system as the static cube.
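The map-construction step above, which defaults every grid cube to dynamic and flips to static any cube that contains a static laser point, might be sketched like this. A sparse dictionary stands in for the patent's dense R×C block structure, and the cube-edge value is an arbitrary example.

```python
import math

def build_grid_map(static_points, cube_edge=0.25):
    """Return a cube_type(x, y, z) lookup built from static laser points.

    Every cube is implicitly 'dynamic'; cubes containing at least one
    static laser point are recorded as 'static' (sparse simplification
    of the patent's block/grid/cube structure).
    """
    cube_types = {}
    for x, y, z in static_points:
        key = (math.floor(x / cube_edge),
               math.floor(y / cube_edge),
               math.floor(z / cube_edge))
        cube_types[key] = 'static'

    def cube_type(x, y, z):
        key = (math.floor(x / cube_edge),
               math.floor(y / cube_edge),
               math.floor(z / cube_edge))
        return cube_types.get(key, 'dynamic')

    return cube_type
```

At runtime, the identification operation then reduces to transforming each received laser point into world coordinates and querying this lookup: a 'static' result marks the point as static laser point data.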
It should be noted that, for details of implementation and technical effects of each unit in the apparatus for identifying laser point cloud data of an unmanned vehicle provided in this embodiment, reference may be made to relevant descriptions of other embodiments in this application, and details are not described herein again.
Referring now to fig. 9, a block diagram of a computer system 900 suitable for implementing the driving control apparatus of the embodiments of the present application is shown. The driving control apparatus shown in fig. 9 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU) 901 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage section 906 into a Random Access Memory (RAM) 903. The RAM 903 also stores various programs and data necessary for the operation of the system 900. The CPU 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904. An Input/Output (I/O) interface 905 is also connected to the bus 904.
The following components are connected to the I/O interface 905: a storage portion 906 including a hard disk and the like; and a communication section 907 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 907 performs communication processing via a network such as the internet. The drive 908 is also connected to the I/O interface 905 as necessary. A removable medium 909 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 908 as necessary, so that a computer program read out therefrom is mounted into the storage section 906 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 907 and/or installed from the removable medium 909. The above-described functions defined in the method of the present application are executed when the computer program is executed by a Central Processing Unit (CPU) 901. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first acquisition unit, a second acquisition unit, and an identification unit. Where the names of these units do not constitute a limitation on the unit itself in some cases, for example, the first acquisition unit may also be described as a "unit that acquires current pose information of the vehicle".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: responding to the latest frame of laser point cloud data collected by the laser radar, and acquiring the current pose information of the unmanned vehicle in a world coordinate system; according to the current pose information, obtaining N multiplied by N map blocks which are pre-loaded into a cache and take the map block corresponding to the current pose information as a center in a preset three-dimensional grid map, wherein N is an odd number, the preset three-dimensional grid map divides the earth surface into R rows and C columns of square map blocks according to the world coordinate system, each map block is divided into M multiplied by M square map grids, each map grid comprises at least one grid cube, and each grid cube comprises a cube type which is used for indicating that the grid cube is a static cube for representing a static obstacle or a dynamic cube for representing a dynamic obstacle; for each laser point data in the received laser point cloud data, performing the following laser point data identification operations: determining the coordinates of the laser point data in the world coordinate system according to the current pose information; acquiring a grid cube corresponding to the coordinates of the laser point data in the world coordinate system from the acquired N multiplied by N map blocks; in response to the cube type of the acquired grid cube being a static cube, the laser point data is determined as static laser point data for characterizing a static obstacle.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (18)

1. A method for identifying laser point cloud data of an unmanned vehicle, the unmanned vehicle being provided with a lidar, the method comprising:
in response to receiving the latest frame of laser point cloud data collected by the laser radar, acquiring the current pose information of the unmanned vehicle in a world coordinate system;
according to the current pose information, obtaining N multiplied by N map blocks which are pre-loaded into a cache and take the map block corresponding to the current pose information as a center in a preset three-dimensional grid map from the cache, wherein N is an odd number, the preset three-dimensional grid map divides the earth surface into R rows and C columns of square map blocks according to the world coordinate system, each map block is divided into M multiplied by M square map grids, each map grid comprises at least one grid cube, and each grid cube comprises a cube type which is used for indicating that the grid cube is a static cube for representing a static obstacle or a dynamic cube for representing a dynamic obstacle;
for each laser point data in the received laser point cloud data, performing the following laser point data identification operations: determining the coordinates of the laser point data in the world coordinate system according to the current pose information; acquiring a grid cube corresponding to the coordinates of the laser point data in the world coordinate system from the acquired N multiplied by N map blocks; in response to the cube type of the acquired grid cube being a static cube, the laser point data is determined as static laser point data for characterizing a static obstacle.
2. The method of claim 1, wherein the laser point data identification operation further comprises: in response to the cube type of the acquired grid cube being a dynamic cube, the laser point data is determined as dynamic laser point data for characterizing a dynamic obstacle.
3. The method of claim 2, wherein prior to obtaining current pose information of the unmanned vehicle in a world coordinate system in response to receiving a latest frame of laser point cloud data acquired by the lidar, the method further comprises:
in response to detecting a start signal of the unmanned vehicle, acquiring pose information of the unmanned vehicle in the world coordinate system and determining the acquired pose information as start-time pose information;
determining map blocks corresponding to the starting-time pose information in the preset three-dimensional grid map as initial map blocks;
loading N multiplied by N map blocks which take the initial map block as a center in the preset three-dimensional grid map into the cache from a magnetic disc;
newly building and executing a first preloading thread, wherein the first preloading thread is used for loading (4N +4) map blocks which are not loaded into the cache in (N +2) × (N +2) map blocks which take the initial map block as the center in the preset three-dimensional grid map into the cache from a disk.
4. The method of claim 3, wherein the method further comprises, prior to performing the following laser point data identification operation for each laser point data in the received laser point cloud data:
determining the vehicle pose information of the unmanned vehicle at the sampling period immediately preceding the current time as previous-period pose information;
determining the driving direction of the unmanned vehicle according to the difference between the current pose information and the previous-period pose information;
and creating and executing a second preloading thread, wherein the second preloading thread is configured to load, from the disk into the cache, the (N+2) map blocks of the preset three-dimensional grid map that are adjacent, along the determined driving direction, to the (N+2) × (N+2) map blocks already loaded into the cache and centered on the map block corresponding to the current pose information.
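Claim 4's direction-aware preloading can be sketched as below. Quantizing the pose difference to a dominant axis is an assumption made for illustration; the claim only requires that the (N+2) blocks adjacent along the driving direction be loaded:

```python
def preload_strip(center, n, prev_pose, cur_pose):
    """The (n+2) blocks adjacent, along the driving direction, to the
    (n+2) x (n+2) square already cached around `center` (claim 4 sketch).

    Poses are (x, y); direction is quantized to the dominant axis of the
    pose difference, an illustrative simplification.
    """
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    if abs(dx) >= abs(dy):
        step = (0, 1 if dx > 0 else -1)   # east/west travel -> column step
    else:
        step = (1 if dy > 0 else -1, 0)   # north/south travel -> row step
    half = (n + 2) // 2                   # cached square spans +/- half blocks
    r0, c0 = center
    if step[0] == 0:
        # moving along columns: the full column just past the cached square
        edge_c = c0 + step[1] * (half + 1)
        return [(r0 + k, edge_c) for k in range(-half, half + 1)]
    edge_r = r0 + step[0] * (half + 1)
    return [(edge_r, c0 + k) for k in range(-half, half + 1)]
```

Running the thread asynchronously keeps disk I/O off the per-frame identification path, which is the point of claims 3 and 4.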
5. The method of claim 4, wherein the grid cube further comprises center point coordinates of the grid cube in the world coordinate system; and
after the obtaining of the current pose information of the unmanned vehicle in the world coordinate system, the method further comprises:
determining the current pose information as an initial value of to-be-determined vehicle pose information;
constructing an objective function in the following manner: taking the to-be-determined vehicle pose information as an independent variable; for each laser point data in the laser point cloud data, performing the following alignment distance determination operation: calculating the coordinates of the laser point data in the world coordinate system according to the to-be-determined vehicle pose information; determining, among the center point coordinates of the grid cubes of each map grid of each map block loaded in the cache, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system; determining the distance between the coordinates of the laser point data in the world coordinate system and the alignment coordinate of the laser point data in the world coordinate system as the alignment distance of the laser point data; calculating the sum of the alignment distances of all the laser point data in the laser point cloud data; and determining the calculated sum of the alignment distances as the output of the objective function;
determining, by using a nearest-neighbor iterative algorithm, the to-be-determined pose information that minimizes the output of the objective function;
updating the current pose information with the determined to-be-determined pose information.
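The objective of claim 5 is a sum of point-to-nearest-cube-center distances, an ICP-style alignment cost. A 2D sketch follows; the claim is not limited to 2D, and the brute-force nearest search here stands in for whatever spatial index an implementation would actually use:

```python
import math

def alignment_cost(points_vehicle, pose, cube_centers):
    """Claim-5-style objective: transform each laser point by a candidate
    2D pose (x, y, yaw), then sum distances to the nearest cached cube
    center. All names and the 2D restriction are illustrative.
    """
    x, y, yaw = pose
    cos_t, sin_t = math.cos(yaw), math.sin(yaw)
    total = 0.0
    for px, py in points_vehicle:
        wx = x + cos_t * px - sin_t * py     # vehicle frame -> world frame
        wy = y + sin_t * px + cos_t * py
        # alignment distance: distance to the closest cube center
        total += min(math.hypot(wx - cx, wy - cy) for cx, cy in cube_centers)
    return total
```

An optimizer (the claim names a nearest-neighbor iterative algorithm) would repeatedly re-evaluate this cost while perturbing the pose, keeping the pose that drives the sum toward its minimum.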
6. The method of claim 5, wherein performing the alignment distance determination operation for each laser point data in the laser point cloud data comprises:
performing down-sampling on the laser point cloud data to obtain down-sampled laser point cloud data;
executing the alignment distance determination operation for each laser point data in the down-sampled laser point cloud data; and
and calculating the sum of the alignment distances of the laser point data in the laser point cloud data comprises:
calculating the sum of the alignment distances of the laser point data in the down-sampled laser point cloud data.
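Claim 6 leaves the down-sampling method open; voxel-grid down-sampling is one common choice and is sketched here under that assumption:

```python
import math

def voxel_downsample(points, voxel=0.2):
    """Keep one representative point per voxel -- one common way to realize
    the down-sampling of claim 6 (the claim does not mandate this method).
    `points` are (x, y, z) tuples; `voxel` is the cell edge in meters."""
    seen = {}
    for p in points:
        key = tuple(math.floor(c / voxel) for c in p)
        seen.setdefault(key, p)   # first point in each voxel wins
    return list(seen.values())
```

Down-sampling shrinks the point count before the alignment distance sums are evaluated, which is what makes the per-frame pose refinement of claim 5 tractable.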
7. The method of claim 6, wherein the determining, among the center point coordinates of the grid cubes of each map grid of each map block loaded in the cache, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system comprises:
determining, among the grid cubes of each map grid of each map block loaded in the cache, at least one grid cube whose center point coordinate is at a distance less than or equal to a preset distance threshold from the coordinates of the laser point data in the world coordinate system;
and determining, among the determined at least one grid cube, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system.
8. The method according to any one of claims 1 to 7, wherein the preset three-dimensional grid map is obtained by:
acquiring a laser point cloud data frame sequence collected by a map collection vehicle, wherein each frame of laser point cloud data in the laser point cloud data frame sequence is annotated with the corresponding current vehicle pose information;
for each frame of laser point cloud data in the laser point cloud data frame sequence, converting each laser point data in the frame of laser point cloud data into coordinates in the world coordinate system according to the annotated current vehicle pose information corresponding to the frame of laser point cloud data, to obtain a converted laser point cloud data frame sequence;
stitching the frames of laser point cloud data in the converted laser point cloud data frame sequence to obtain stitched laser point cloud data, generating a three-dimensional map from the stitched laser point cloud data, and presenting the three-dimensional map;
in response to receiving a dynamic laser point marking operation performed by a user on the three-dimensional map, acquiring the dynamic laser points marked by the user, and deleting the acquired dynamic laser points from the stitched laser point cloud data to obtain static laser point cloud data;
generating the preset three-dimensional grid map in the following manner: dividing the earth surface, in the world coordinate system, into R rows and C columns of square map blocks; dividing each map block into M × M square map grids; dividing each map grid into at least one grid cube, wherein the edge length of each grid cube is the same as the side length of each map grid; and setting the cube type of each grid cube to dynamic cube;
and, for each static laser point data in the static laser point cloud data, setting, in the preset three-dimensional grid map, the cube type of the grid cube corresponding to the coordinates of the static laser point data in the world coordinate system to static cube.
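The map-construction steps of claim 8 end with every cube defaulting to dynamic and only the cubes containing a static laser point flipped to static. A sketch, with the grid edge `GRID` chosen arbitrarily:

```python
import math

GRID = 0.25  # grid / cube edge length in meters, illustrative only

def build_cube_types(static_points):
    """Claim-8 sketch: cubes are implicitly dynamic unless recorded here,
    so only cubes that contain at least one static laser point are stored,
    keyed by their integer cube index in the world coordinate system."""
    cube_types = {}
    for x, y, z in static_points:
        key = (math.floor(x / GRID), math.floor(y / GRID), math.floor(z / GRID))
        cube_types[key] = "static"
    return cube_types
```

Storing only the static cubes keeps the on-disk map sparse, and any lookup that misses the table can safely report a dynamic cube, consistent with claims 1 and 2.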
9. An apparatus for identifying laser point cloud data of an unmanned vehicle, the unmanned vehicle being provided with a lidar, the apparatus comprising:
the first acquisition unit is configured to respond to the fact that the latest frame of laser point cloud data collected by the laser radar is received, and acquire the current pose information of the unmanned vehicle in a world coordinate system;
a second obtaining unit configured to obtain from a cache, according to the current pose information, N × N map blocks that have been preloaded into the cache and are centered on the map block corresponding to the current pose information in a preset three-dimensional grid map, wherein N is an odd number, the preset three-dimensional grid map divides the earth surface, in the world coordinate system, into R rows and C columns of square map blocks, each map block is divided into M × M square map grids, each map grid comprises at least one grid cube, and each grid cube comprises a cube type indicating whether the grid cube is a static cube representing a static obstacle or a dynamic cube representing a dynamic obstacle;
an identification unit configured to perform, for each laser point data in the received laser point cloud data, the following laser point data identification operation: determining the coordinates of the laser point data in the world coordinate system according to the current pose information; acquiring, from the obtained N × N map blocks, the grid cube corresponding to the coordinates of the laser point data in the world coordinate system; and in response to the cube type of the acquired grid cube being a static cube, determining the laser point data as static laser point data characterizing a static obstacle.
10. The apparatus of claim 9, wherein the laser point data identification operation further comprises: in response to the cube type of the acquired grid cube being a dynamic cube, the laser point data is determined as dynamic laser point data for characterizing a dynamic obstacle.
11. The apparatus of claim 10, further comprising:
a first determination unit configured to acquire pose information of the unmanned vehicle in the world coordinate system in response to detection of a start signal of the unmanned vehicle, and determine the acquired pose information as start-time pose information;
a second determining unit configured to determine the map block corresponding to the start-time pose information in the preset three-dimensional grid map as an initial map block;
a loading unit configured to load, from a disk, N × N map blocks centered on the initial map block in the preset three-dimensional grid map into the cache;
a first preloading unit configured to create and execute a first preloading thread, wherein the first preloading thread is configured to load, from the disk into the cache, the (4N+4) map blocks that have not yet been loaded into the cache among the (N+2) × (N+2) map blocks centered on the initial map block in the preset three-dimensional grid map.
12. The apparatus of claim 11, further comprising:
a third determination unit configured to determine vehicle pose information of the unmanned vehicle at a previous sampling period of a current time as previous period pose information;
a fourth determination unit configured to determine a traveling direction of the unmanned vehicle according to difference information of the current pose information and the previous period pose information;
and a second preloading unit configured to create and execute a second preloading thread, wherein the second preloading thread is configured to load, from the disk into the cache, the (N+2) map blocks of the preset three-dimensional grid map that are adjacent, along the determined driving direction, to the (N+2) × (N+2) map blocks already loaded into the cache and centered on the map block corresponding to the current pose information.
13. The apparatus of claim 12, wherein each grid cube further comprises center point coordinates of the grid cube in the world coordinate system; and
the device further comprises:
a fifth determining unit configured to determine the current pose information as an initial value of to-be-determined vehicle pose information;
a construction unit configured to construct an objective function in the following manner: taking the to-be-determined vehicle pose information as an independent variable; for each laser point data in the laser point cloud data, performing the following alignment distance determination operation: calculating the coordinates of the laser point data in the world coordinate system according to the to-be-determined vehicle pose information; determining, among the center point coordinates of the grid cubes of each map grid of each map block loaded in the cache, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system; determining the distance between the coordinates of the laser point data in the world coordinate system and the alignment coordinate of the laser point data in the world coordinate system as the alignment distance of the laser point data; calculating the sum of the alignment distances of all the laser point data in the laser point cloud data; and determining the calculated sum of the alignment distances as the output of the objective function;
a sixth determining unit configured to determine, by using a nearest-neighbor iterative algorithm, the to-be-determined pose information that minimizes the output of the objective function;
and an updating unit configured to update the current pose information with the determined to-be-determined pose information.
14. The apparatus of claim 13, wherein performing the alignment distance determination operation for each laser point data in the laser point cloud data comprises:
performing down-sampling on the laser point cloud data to obtain down-sampled laser point cloud data;
executing the alignment distance determination operation for each laser point data in the down-sampled laser point cloud data; and
and calculating the sum of the alignment distances of the laser point data in the laser point cloud data comprises:
calculating the sum of the alignment distances of the laser point data in the down-sampled laser point cloud data.
15. The apparatus of claim 14, wherein the determining, among the center point coordinates of the grid cubes of each map grid of each map block loaded in the cache, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system comprises:
determining, among the grid cubes of each map grid of each map block loaded in the cache, at least one grid cube whose center point coordinate is at a distance less than or equal to a preset distance threshold from the coordinates of the laser point data in the world coordinate system;
and determining, among the determined at least one grid cube, the center point coordinate closest to the coordinates of the laser point data in the world coordinate system as the alignment coordinate of the laser point data in the world coordinate system.
16. The apparatus according to any one of claims 9 to 15, wherein the preset three-dimensional grid map is obtained by:
acquiring a laser point cloud data frame sequence collected by a map collection vehicle, wherein each frame of laser point cloud data in the laser point cloud data frame sequence is annotated with the corresponding current vehicle pose information;
for each frame of laser point cloud data in the laser point cloud data frame sequence, converting each laser point data in the frame of laser point cloud data into coordinates in the world coordinate system according to the annotated current vehicle pose information corresponding to the frame of laser point cloud data, to obtain a converted laser point cloud data frame sequence;
stitching the frames of laser point cloud data in the converted laser point cloud data frame sequence to obtain stitched laser point cloud data, generating a three-dimensional map from the stitched laser point cloud data, and presenting the three-dimensional map;
in response to receiving a dynamic laser point marking operation performed by a user on the three-dimensional map, acquiring the dynamic laser points marked by the user, and deleting the acquired dynamic laser points from the stitched laser point cloud data to obtain static laser point cloud data;
generating the preset three-dimensional grid map in the following manner: dividing the earth surface, in the world coordinate system, into R rows and C columns of square map blocks; dividing each map block into M × M square map grids; dividing each map grid into at least one grid cube, wherein the edge length of each grid cube is the same as the side length of each map grid; and setting the cube type of each grid cube to dynamic cube;
and, for each static laser point data in the static laser point cloud data, setting, in the preset three-dimensional grid map, the cube type of the grid cube corresponding to the coordinates of the static laser point data in the world coordinate system to static cube.
17. A driving control apparatus comprising:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-8.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN201710539630.XA 2017-07-04 2017-07-04 Method and device for identifying laser point cloud data of unmanned vehicle Active CN109214248B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710539630.XA CN109214248B (en) 2017-07-04 2017-07-04 Method and device for identifying laser point cloud data of unmanned vehicle
US16/026,338 US11131999B2 (en) 2017-07-04 2018-07-03 Method and apparatus for identifying laser point cloud data of autonomous vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710539630.XA CN109214248B (en) 2017-07-04 2017-07-04 Method and device for identifying laser point cloud data of unmanned vehicle

Publications (2)

Publication Number Publication Date
CN109214248A CN109214248A (en) 2019-01-15
CN109214248B true CN109214248B (en) 2022-04-29

Family

ID=64903834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710539630.XA Active CN109214248B (en) 2017-07-04 2017-07-04 Method and device for identifying laser point cloud data of unmanned vehicle

Country Status (2)

Country Link
US (1) US11131999B2 (en)
CN (1) CN109214248B (en)

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11232532B2 (en) * 2018-05-30 2022-01-25 Sony Interactive Entertainment LLC Multi-server cloud virtual reality (VR) streaming
CN109064506B (en) * 2018-07-04 2020-03-13 百度在线网络技术(北京)有限公司 High-precision map generation method and device and storage medium
CN109297510B (en) * 2018-09-27 2021-01-01 百度在线网络技术(北京)有限公司 Relative pose calibration method, device, equipment and medium
CN111209353A (en) * 2018-11-21 2020-05-29 驭势科技(北京)有限公司 Visual positioning map loading method, device and system and storage medium
TWI726278B (en) * 2019-01-30 2021-05-01 宏碁股份有限公司 Driving detection method, vehicle and driving processing device
US11315317B2 (en) * 2019-01-30 2022-04-26 Baidu Usa Llc Point clouds ghosting effects detection system for autonomous driving vehicles
EP3707469B1 (en) * 2019-01-30 2023-10-11 Baidu.com Times Technology (Beijing) Co., Ltd. A point clouds registration system for autonomous vehicles
KR102334641B1 (en) * 2019-01-30 2021-12-03 바이두닷컴 타임즈 테크놀로지(베이징) 컴퍼니 리미티드 Map Partitioning System for Autonomous Vehicles
CN109870157B (en) * 2019-02-20 2021-11-02 苏州风图智能科技有限公司 Method and device for determining pose of vehicle body and mapping method
CN109668742B (en) * 2019-02-20 2020-04-28 苏州风图智能科技有限公司 Laser radar-based unmanned vehicle testing method and device
CN110070575A (en) * 2019-03-29 2019-07-30 东软睿驰汽车技术(沈阳)有限公司 A kind of method and device to label
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method
CN112015938A (en) * 2019-05-28 2020-12-01 杭州海康威视数字技术股份有限公司 Point cloud label transmission method, device and system
CN110276834B (en) * 2019-06-25 2023-04-11 达闼科技(北京)有限公司 Construction method of laser point cloud map, terminal and readable storage medium
CN111684382A (en) * 2019-06-28 2020-09-18 深圳市大疆创新科技有限公司 Movable platform state estimation method, system, movable platform and storage medium
CN111886597A (en) * 2019-06-28 2020-11-03 深圳市大疆创新科技有限公司 Obstacle detection method and device for movable platform and movable platform
CN112805534A (en) * 2019-08-27 2021-05-14 北京航迹科技有限公司 System and method for locating target object
CN110645998B (en) * 2019-09-10 2023-03-24 上海交通大学 Dynamic object-free map segmentation establishing method based on laser point cloud
CN112630799B (en) * 2019-09-24 2022-11-29 阿波罗智能技术(北京)有限公司 Method and apparatus for outputting information
WO2021062587A1 (en) * 2019-09-30 2021-04-08 Beijing Voyager Technology Co., Ltd. Systems and methods for automatic labeling of objects in 3d point clouds
CN111223107A (en) * 2019-12-31 2020-06-02 武汉中海庭数据技术有限公司 Point cloud data set manufacturing system and method based on point cloud deep learning
CN111209504B (en) * 2020-01-06 2023-09-22 北京百度网讯科技有限公司 Method and apparatus for accessing map data
CN111260789B (en) * 2020-01-07 2024-01-16 青岛小鸟看看科技有限公司 Obstacle avoidance method, virtual reality headset and storage medium
CN111325136B (en) * 2020-02-17 2024-03-19 北京小马慧行科技有限公司 Method and device for labeling object in intelligent vehicle and unmanned vehicle
CN113377748B (en) * 2020-03-09 2023-12-05 北京京东乾石科技有限公司 Static point removing method and device for laser radar point cloud data
CN111401264A (en) * 2020-03-19 2020-07-10 上海眼控科技股份有限公司 Vehicle target detection method and device, computer equipment and storage medium
CN111461980B (en) * 2020-03-30 2023-08-29 北京百度网讯科技有限公司 Performance estimation method and device of point cloud stitching algorithm
CN111462072B (en) * 2020-03-30 2023-08-29 北京百度网讯科技有限公司 Point cloud picture quality detection method and device and electronic equipment
CN111239706B (en) * 2020-03-30 2021-10-01 许昌泛网信通科技有限公司 Laser radar data processing method
CN111458722B (en) * 2020-04-16 2022-04-01 杭州师范大学钱江学院 Map construction method of laser radar trolley in gradient environment
CN111428692A (en) * 2020-04-23 2020-07-17 北京小马慧行科技有限公司 Method and device for determining travel trajectory of vehicle, and storage medium
CN111638499B (en) * 2020-05-08 2024-04-09 上海交通大学 Camera-laser radar relative external parameter calibration method based on laser radar reflection intensity point characteristics
CN111721281B (en) * 2020-05-27 2022-07-15 阿波罗智联(北京)科技有限公司 Position identification method and device and electronic equipment
CN111551947A (en) * 2020-05-28 2020-08-18 东软睿驰汽车技术(沈阳)有限公司 Laser point cloud positioning method, device, equipment and system
CN113970758A (en) * 2020-07-22 2022-01-25 上海商汤临港智能科技有限公司 Point cloud data processing method and device
CN114093155A (en) * 2020-08-05 2022-02-25 北京万集科技股份有限公司 Traffic accident responsibility tracing method and device, computer equipment and storage medium
CN111983582A (en) * 2020-08-14 2020-11-24 北京埃福瑞科技有限公司 Train positioning method and system
CN112162297B (en) * 2020-09-24 2022-07-19 燕山大学 Method for eliminating dynamic obstacle artifacts in laser point cloud map
CN113761090B (en) * 2020-11-17 2024-04-05 北京京东乾石科技有限公司 Positioning method and device based on point cloud map
WO2022126540A1 (en) * 2020-12-17 2022-06-23 深圳市大疆创新科技有限公司 Obstacle detection and re-identification method, apparatus, movable platform, and storage medium
WO2022133770A1 (en) * 2020-12-23 2022-06-30 深圳元戎启行科技有限公司 Method for generating point cloud normal vector, apparatus, computer device, and storage medium
CN113781569B (en) * 2021-01-04 2024-04-05 北京京东乾石科技有限公司 Loop detection method and device
CN112923938B (en) * 2021-02-18 2023-10-03 中国第一汽车股份有限公司 Map optimization method, device, storage medium and system
CN112947454A (en) * 2021-02-25 2021-06-11 浙江理工大学 Fire fighting evaluation method, device, equipment and storage medium for warehouse
CN112733971B (en) * 2021-04-02 2021-11-16 北京三快在线科技有限公司 Pose determination method, device and equipment of scanning equipment and storage medium
CN113375664B (en) * 2021-06-09 2023-09-01 成都信息工程大学 Autonomous mobile device positioning method based on dynamic loading of point cloud map
CN113503883B (en) * 2021-06-22 2022-07-19 北京三快在线科技有限公司 Method for collecting data for constructing map, storage medium and electronic equipment
CN116027341A (en) * 2021-10-25 2023-04-28 珠海一微半导体股份有限公司 Grid and voxel positioning method based on laser observation direction, robot and chip
CN114463507B (en) * 2022-04-11 2022-06-14 国家电投集团科学技术研究院有限公司 Road identification method and device
TWI827056B (en) * 2022-05-17 2023-12-21 中光電智能機器人股份有限公司 Automated moving vehicle and control method thereof
CN115993089B (en) * 2022-11-10 2023-08-15 山东大学 PL-ICP-based online four-steering-wheel AGV internal and external parameter calibration method
CN115639842B (en) * 2022-12-23 2023-04-07 北京中飞艾维航空科技有限公司 Inspection method and system using unmanned aerial vehicle
CN115965756B (en) * 2023-03-13 2023-06-06 安徽蔚来智驾科技有限公司 Map construction method, device, driving device and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779280A (en) * 2012-06-19 2012-11-14 武汉大学 Traffic information extraction method based on laser sensor
CN106133756A (en) * 2014-03-27 2016-11-16 赫尔实验室有限公司 For filtering, split and identify the system without the object in constraint environment
US9523772B2 (en) * 2013-06-14 2016-12-20 Microsoft Technology Licensing, Llc Object removal using lidar-based classification

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9286538B1 (en) 2014-05-01 2016-03-15 Hrl Laboratories, Llc Adaptive 3D to 2D projection for different height slices and extraction of robust morphological features for 3D object recognition
KR20180038475A (en) * 2015-08-03 2018-04-16 톰톰 글로벌 콘텐트 비.브이. METHODS AND SYSTEMS FOR GENERATING AND USING POSITIONING REFERENCE DATA
US10444759B2 (en) * 2017-06-14 2019-10-15 Zoox, Inc. Voxel based ground plane estimation and object segmentation


Also Published As

Publication number Publication date
CN109214248A (en) 2019-01-15
US11131999B2 (en) 2021-09-28
US20190011566A1 (en) 2019-01-10

Similar Documents

Publication Publication Date Title
CN109214248B (en) Method and device for identifying laser point cloud data of unmanned vehicle
US11320836B2 (en) Algorithm and infrastructure for robust and efficient vehicle localization
KR102273559B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
EP3361278B1 (en) Autonomous vehicle localization based on walsh kernel projection technique
CN108303103B (en) Method and device for determining target lane
US20210373161A1 (en) Lidar localization using 3d cnn network for solution inference in autonomous driving vehicles
CN111461981B (en) Error estimation method and device for point cloud stitching algorithm
CN112212874B (en) Vehicle track prediction method and device, electronic equipment and computer readable medium
US9576200B2 (en) Background map format for autonomous driving
CN110377025A (en) Sensor aggregation framework for automatic driving vehicle
CN111656135A (en) Positioning optimization based on high-definition map
CN110386142A (en) Pitch angle calibration method for automatic driving vehicle
CN108734780B (en) Method, device and equipment for generating map
US10860868B2 (en) Lane post-processing in an autonomous driving vehicle
CN111680747B (en) Method and apparatus for closed loop detection of occupancy grid subgraphs
CN110110029B (en) Method and device for lane matching
CN111339876B (en) Method and device for identifying types of areas in scene
US20230351686A1 (en) Method, device and system for cooperatively constructing point cloud map
CN111402387A (en) Removing short timepoints from a point cloud of a high definition map for navigating an autonomous vehicle
CN111353453B (en) Obstacle detection method and device for vehicle
CN114187357A (en) High-precision map production method and device, electronic equipment and storage medium
EP3696507B1 (en) Representing an environment by cylindrical and planar structures
CN111461982B (en) Method and apparatus for splice point cloud
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle
CN117392216A (en) Method and device for determining point cloud map, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211011

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Apollo Intelligent Technology (Beijing) Co., Ltd

Address before: 100085 third floor, baidu building, No. 10, Shangdi 10th Street, Haidian District, Beijing

Applicant before: Baidu Online Network Technology (Beijing) Co., Ltd

GR01 Patent grant