CN110008851B - Method and equipment for detecting lane line

Method and equipment for detecting lane line

Info

Publication number
CN110008851B
Authority
CN
China
Prior art keywords
point cloud
map
cloud data
frame
local map
Prior art date
Legal status
Active
Application number
CN201910197037.0A
Other languages
Chinese (zh)
Other versions
CN110008851A (en)
Inventor
陈海波
Current Assignee
Shenlan Robot Shanghai Co ltd
Original Assignee
Deep Blue Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Deep Blue Technology Shanghai Co Ltd filed Critical Deep Blue Technology Shanghai Co Ltd
Priority to CN201910197037.0A priority Critical patent/CN110008851B/en
Publication of CN110008851A publication Critical patent/CN110008851A/en
Application granted granted Critical
Publication of CN110008851B publication Critical patent/CN110008851B/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and equipment for detecting lane lines, which are used for solving the problem in the prior art that lane lines cannot be accurately detected while an unmanned vehicle is driving. A binary image corresponding to a local map is determined from the local map based on a deep neural network model, wherein the local map is a bird's-eye view obtained by rasterizing a point cloud map determined from consecutive multi-frame point cloud data, and the deep neural network model is obtained by training on historical local maps and binary images with lane line labels converted from a high-precision map; the lane line position is then determined according to the coordinates of the pixel points in the binary image corresponding to the local map. Because the local map is accurately determined from consecutive multi-frame point cloud data, a binary image carrying lane line information can be accurately determined from the local map based on the deep neural network model, and the lane line position can be accurately determined from the coordinates of the points representing lane lines in that binary image.

Description

Method and equipment for detecting lane line
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method and equipment for detecting lane lines.
Background
With the continuous development and progress of science and technology, computer technology, modern sensing technology, artificial intelligence technology and the like are gradually being applied to the automotive field, and intelligent vehicles with functions such as environmental perception and path planning have emerged. By controlling an intelligent vehicle, it can drive automatically along a preset path to realize unmanned driving; however, because the vehicle only follows the planned path during driving and cannot determine the lane line information along that path, lane departure may occur.
In the prior art, lane line detection methods include the monocular vision method, the laser radar method and the like. The monocular vision method considers only the visual information of a scene and is easily affected by illumination and weather conditions; the laser radar method acquires only single-frame point cloud data, which is sparse, and therefore has low accuracy.
In summary, in the prior art, the driverless vehicle cannot accurately detect the lane line in the driving process.
Disclosure of Invention
The invention provides a method and equipment for detecting a lane line, which are used for solving the problem that the lane line detection cannot be accurately carried out in the driving process of an unmanned vehicle in the prior art.
In a first aspect, an embodiment of the present invention provides a method for lane line detection, where the method includes:
determining a binary image corresponding to a local map according to the local map based on a deep neural network model, wherein the local map is a bird's-eye view obtained by rasterizing a point cloud map, and the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
and determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
In the above method, the local map used in lane line detection is a bird's-eye view determined from multi-frame point cloud data; a binary image corresponding to the local map is determined based on the trained deep neural network model, the binary image carries lane line labels, and the lane line position is determined from the coordinates of the pixel points representing lane lines in that binary image. Because the local map is determined from consecutive multi-frame point cloud data, the point cloud data are dense and the determined local map is more accurate, so the binary image with lane line labels is accurately determined from the accurate local map based on the deep neural network model, the lane line position is then accurately determined from the coordinates of the lane-line-labelled points in the binary image, and the determined lane line position is more accurate.
In one possible implementation, the point cloud map is constructed by:
determining relative pose information of an Nth frame of point cloud data by using an inertial navigation element, wherein the Nth frame of point cloud data is determined by scanning surrounding objects of a vehicle through a laser radar;
performing coordinate conversion on the N-th frame of point cloud data according to the relative pose information of the N-th frame of point cloud data to obtain point cloud map coordinates corresponding to the N-th frame of point cloud data;
determining the point cloud map according to point cloud map coordinates corresponding to continuous multi-frame point cloud data;
wherein N is a positive integer.
In one possible implementation, the local map is determined by:
dividing the point cloud map into a plurality of cubes;
aiming at any cube, carrying out weighted average on intensity values corresponding to all points in the cube to determine a reflection intensity mean value;
assigning the points in the grids corresponding to the cubes with the determined intensity average values;
and forming a bird's-eye view according to the points after the values are assigned in all the grids, and taking the bird's-eye view as a local map.
In the above method, coordinate conversion is performed on consecutive multi-frame point cloud data acquired by the laser radar to generate a point cloud map, the point cloud map is converted into a two-dimensional bird's-eye view by rasterization, the mean reflection intensity of the points in each grid is determined while converting the point cloud data into the bird's-eye view, and the converted bird's-eye view is used as the local map. Since the laser radar can accurately acquire information about surrounding objects, the point cloud map can be accurately determined and accurately converted into the local map by rasterization; performing detection on a local map containing mean reflection intensities makes the detection process more convenient, and the lane lines can be detected accurately.
In a possible implementation manner, the binary image with lane line labels is generated according to the lane line information around the vehicle in a high-precision map with lane line information, the vehicle position in the high-precision map is determined according to the vehicle position information corresponding to the historical local map, and the resolution of the binary image with lane line labels is the same as that of the historical local map.
In a possible implementation manner, when the lane line position is determined according to the coordinates of the pixel points in the binarized image corresponding to the local map, the lane line position is determined from the coordinate values of the pixel points representing lane lines in the binarized image, according to a preset ratio between pixel values and real values.
In a second aspect, an embodiment of the present invention provides an apparatus for lane line detection, where the apparatus includes: at least one processing unit and at least one storage unit, wherein the storage unit stores program code, and when the program code is executed by the processing unit, the processing unit is specifically configured to:
determining a binary image corresponding to a local map according to the local map based on a deep neural network model, wherein the local map is a bird's-eye view obtained by rasterizing a point cloud map, and the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
and determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
In a third aspect, an embodiment of the present invention provides a lane line detection apparatus, where the apparatus includes: a first determination module, a second determination module;
the first determining module is to: determining a binary image corresponding to a local map according to the local map based on a deep neural network model, wherein the local map is a bird's-eye view obtained by rasterizing a point cloud map, and the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
the second determination module is to: and determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
In a fourth aspect, the present invention also provides a computer storage medium having a computer program stored thereon, which when executed by a processing unit, performs the steps of the method of the first aspect.
In addition, for technical effects brought by any one implementation manner of the second aspect to the fourth aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart of a method for lane line detection according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a method for dividing a cube according to an embodiment of the present invention;
fig. 3 is a flowchart of an overall method for lane line detection according to an embodiment of the present invention;
fig. 4 is a structural diagram of a first lane line detection apparatus according to an embodiment of the present invention;
fig. 5 is a structural diagram of a second lane line detection apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Some of the words that appear in the text are explained below:
1. the term "and/or" in the embodiments of the present invention describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship;
2. the vehicle in the embodiment of the invention refers to a vehicle which needs to carry out lane line detection in real time in the driving process;
3. in the embodiment of the present invention, "rasterization" means converting a vector graphic into a bitmap (raster image). The most basic rasterization algorithm renders a three-dimensional scene represented by polygons onto a two-dimensional surface;
4. the term "bird's-eye view" in the embodiments of the present invention refers to a perspective view (two-dimensional image) drawn, according to the perspective principle, as if looking down at the ground from a point at high altitude using a high-viewpoint perspective method. Put simply, it is a view of an area seen from the air, and it is more realistic than a plan view. In the bird's-eye-view feature map of the embodiment of the invention, statistical information such as the reflection intensity of the point cloud data obtained by the laser radar scan is stored in each grid.
The application scenario described in the embodiment of the present invention is for more clearly illustrating the technical solution of the embodiment of the present invention, and does not form a limitation on the technical solution provided in the embodiment of the present invention, and it can be known by a person skilled in the art that with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present invention is also applicable to similar technical problems. In the description of the present invention, the term "plurality" means two or more unless otherwise specified.
As shown in fig. 1, a method for lane line detection according to an embodiment of the present invention specifically includes the following steps:
step 100, determining a binary image corresponding to a local map according to the local map based on a deep neural network model, wherein the local map is a bird's-eye view obtained by rasterizing a point cloud map, and the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
and step 110, determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
According to the above scheme, the local map used in lane line detection is a bird's-eye view determined from multi-frame point cloud data; a binary image corresponding to the local map is determined based on the trained deep neural network model, the binary image carries lane line labels, and the lane line position is determined from the coordinates of the pixel points representing lane lines in that binary image. Because the local map is determined from consecutive multi-frame point cloud data, the point cloud data are dense and the determined local map is more accurate, so the binary image with lane line labels is accurately determined from the accurate local map based on the deep neural network model, the lane line position is then accurately determined from the coordinates of the lane-line-labelled points in the binary image, and the determined lane line position is more accurate.
In the embodiment of the invention, a point cloud map needs to be constructed in real time, the point cloud map is determined according to continuous multiframe point cloud data acquired by a laser radar during construction, and the specific steps are as follows:
the first step is as follows: determining relative pose information of an Nth frame of point cloud data by using an inertial navigation element, wherein the Nth frame of point cloud data is determined by scanning the vehicle surroundings through a laser radar;
the second step is that: and carrying out coordinate conversion on the N-th frame of point cloud data according to the relative pose information of the N-th frame of point cloud data to obtain point cloud map coordinates corresponding to the N-th frame of point cloud data, wherein N is a positive integer.
The third step: and determining a point cloud map according to the point cloud map coordinates of the continuous multi-frame point cloud data.
When the point cloud map is determined from the point cloud map coordinates of consecutive multi-frame point cloud data, a point cloud map is determined from the point cloud map coordinates of each group of consecutive frames, in the order in which the frames of point cloud data are acquired.
The point cloud data refers to a set of vectors in a three-dimensional coordinate system. These vectors are usually expressed in X, Y, Z three-dimensional coordinates, represent geometric position information, and are generally used primarily to represent the shape of the external surface of an object. For example, Pi = {Xi, Yi, Zi} represents a point in space (i = 1, 2, 3, …, n), and Point Cloud = {P1, P2, P3, …, Pn} represents a set of point cloud data.
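As an illustrative sketch (not part of the patent disclosure), one frame of point cloud data of this form can be represented as an n-by-4 array, where the fourth column is the reflection intensity used later when rasterizing the point cloud map; the array layout and variable names are assumptions.

```python
import numpy as np

# A minimal sketch: one frame of point cloud data as an (n, 4) array,
# each row being (X, Y, Z, intensity). The intensity column is used later
# when the point cloud map is rasterized into a bird's-eye view.
frame = np.array([
    [12.3,  0.8, -1.6, 14.0],   # Pi = {Xi, Yi, Zi} plus a reflection intensity
    [12.4,  0.9, -1.6, 15.0],
    [30.1, -2.5, -1.5, 23.0],
], dtype=np.float64)

points_xyz = frame[:, :3]       # the set {P1, P2, ..., Pn}
intensities = frame[:, 3]
```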
In an embodiment of the present invention, the point cloud data is composed of at least one point, wherein the relative pose information of the point cloud data includes, but is not limited to, part or all of the following:
position information, speed, acceleration, heading angle.
When the inertial navigation element is used to determine the relative pose information of the Nth frame of point cloud data, there is a time difference between the point cloud data output by the laser radar scanning the objects around the vehicle and the IMU data output by the inertial navigation element. Therefore, while the laser radar outputs one frame of point cloud data, the inertial navigation element correspondingly outputs multiple sets of IMU data, which form an IMU data set; that is, the IMU data set is formed by at least one set of IMU data, wherein the IMU data include but are not limited to part or all of the following:
speed, acceleration, heading angle.
For example, the adopted devices are a Velodyne 32-line laser radar and a Nortay high-precision integrated inertial navigation unit (IMU), where the IMU can achieve centimetre-level positioning precision. The Nth frame of point cloud data is output by the laser radar scanning the objects around the vehicle and is the set of points obtained by that scan; while the Nth frame of point cloud data is determined, a set of IMU data is obtained by measurement of the inertial navigation unit. For example, when the laser radar data output frequency is 10 Hz and the IMU output frequency is 100 Hz, 10 sets of IMU data (assuming one set of IMU data comprises an acceleration, a speed and a heading angle) are output in the time taken to output the Nth frame of point cloud data, and these 10 sets of IMU data form the IMU data set corresponding to the Nth frame of point cloud data.
Therefore, when the inertial navigation element is used to determine the relative pose information of the point cloud data: a first displacement translation, a second displacement translation and a rotation amount are determined for the Nth frame of point cloud data, an RT matrix is determined from the first displacement translation, the second displacement translation and the rotation amount, and the RT matrix is taken as the relative pose information. The first displacement translation is the product of the sum of all eastward speeds in the IMU data set corresponding to the Nth frame of point cloud data and the IMU data measurement time interval, the second displacement translation is the product of the sum of all northward speeds in that IMU data set and the IMU data measurement time interval, and the rotation amount is the heading angle of the last measurement, determined from the timestamps of the IMU data in the IMU data set corresponding to the Nth frame of point cloud data; that is, the relative pose information is obtained from the IMU heading angle and speed information;
wherein the eastward speed and the northward speed are obtained by decomposing the speed.
For example, an IMU dataset contains 10 sets of IMU data;
{(Vx1,Vy1,yaw1);(Vx2,Vy2,yaw2);…;(Vx10,Vy10,yaw10)}。
where Vxi represents the eastward speed measured the i-th time when the IMU data set corresponding to the Nth frame of point cloud data is obtained, Vyi represents the northward speed measured the i-th time, yawi represents the heading angle measured the i-th time, and i = 1, 2, …, 10. The measurement time of (Vx1, Vy1, yaw1) is t1, of (Vx2, Vy2, yaw2) is t2, …, of (Vx10, Vy10, yaw10) is t10, with t1 < t2 < … < t10. Then (Vx10, Vy10, yaw10) is the IMU data output by the last measurement in the IMU data set, yaw10 is the heading angle output by the last measurement, and T = t2 - t1 = t3 - t2 = t4 - t3 = … = t10 - t9 is the IMU data measurement time interval.
The first displacement translation is Offx = (Vx1 + Vx2 + Vx3 + … + Vx10) * T;
The second displacement translation is Offy = (Vy1 + Vy2 + Vy3 + … + Vy10) * T;
The rotation amount is θ = yaw10.
And determining an RT (Rotational translation) matrix according to the first displacement translation, the second displacement translation and the rotation amount corresponding to the Nth frame of point cloud data.
For example, a rotation matrix R is determined from the rotation amount θ. Assuming R is a 4 × 4 matrix, R = [cos θ, sin θ, 0, 0; -sin θ, cos θ, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1], i.e. a rotation about the z-axis: rotating a point P(x, y, z) about the z-axis by an angle θ gives the point P′(x′, y′, z′):
x′ = y sin θ + x cos θ; y′ = y cos θ - x sin θ; z′ = z.
Expressed as vectors, P = [x; y; z; 1] and P′ = [x′; y′; z′; 1], so that R·P = P′.
A translation matrix T is determined from the first displacement translation and the second displacement translation. For a point Q(x, y, z) in the Nth frame of point cloud data, a translation transformation of Q gives Q′(x′, y′, z′), where the movement components along the three coordinate axes are dx = Offx, dy = Offy and dz = 0 respectively:
x′ = x + Offx; y′ = y + Offy; z′ = z.
Expressed as vectors, Q = [x; y; z; 1] and Q′ = [x′; y′; z′; 1], so that T·Q = Q′, where T = [1, 0, 0, Offx; 0, 1, 0, Offy; 0, 0, 1, 0; 0, 0, 0, 1].
After the relative pose information corresponding to the Nth frame of point cloud data is determined, coordinate conversion is performed on the points in the Nth frame of point cloud data according to the determined relative pose information: a point P in the Nth frame is first rotated and then translated, and the converted point can be expressed as T·R·P.
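To make the worked example above concrete, the following is a minimal Python/NumPy sketch (not code from the patent) of computing Offx, Offy and θ from an IMU data set and applying the conversion T·R·P to the points of a frame; the function names, tuple layout and use of homogeneous coordinates are illustrative assumptions.

```python
import numpy as np

def relative_pose_from_imu(imu_set, dt):
    """Reproduce the worked example: imu_set is a list of (v_east, v_north, yaw)
    tuples recorded during one lidar frame, dt is the IMU measurement interval T.
    Returns the 4x4 rotation matrix R and translation matrix T."""
    off_x = sum(v_e for v_e, _, _ in imu_set) * dt   # first displacement translation Offx
    off_y = sum(v_n for _, v_n, _ in imu_set) * dt   # second displacement translation Offy
    theta = imu_set[-1][2]                           # rotation amount = last measured heading angle

    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[ c,  s, 0, 0],
                  [-s,  c, 0, 0],
                  [ 0,  0, 1, 0],
                  [ 0,  0, 0, 1]], dtype=float)      # rotation about the z-axis
    T = np.array([[1, 0, 0, off_x],
                  [0, 1, 0, off_y],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return R, T

def to_map_coordinates(points_xyz, R, T):
    """Rotate then translate every point P of the Nth frame: converted point = T @ R @ P."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous (n, 4)
    return (T @ R @ homo.T).T[:, :3]
```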
Determining point cloud map coordinates obtained after coordinate conversion is carried out on the Nth frame of point cloud data; and after the point cloud map coordinates of the continuous multi-frame point cloud data are determined, determining the point cloud map according to the point cloud map coordinates of the continuous multi-frame point cloud data.
After the point cloud map is determined, dynamically updating the point cloud map, and specifically, determining the point cloud map once according to the point cloud map coordinates of each continuous multi-frame point cloud data according to the generation sequence of each frame of point cloud data; in any two adjacent point cloud maps determined twice, the first frame of point cloud data in continuous multi-frame point cloud data used by the point cloud map determined next time is the second frame of point cloud data in continuous multi-frame point cloud data used by the point cloud map determined last time.
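The dynamic update described above behaves like a sliding window over the frames. A possible sketch, assuming a fixed window length (the patent does not specify how many consecutive frames are used) and reusing the to_map_coordinates helper sketched earlier:

```python
from collections import deque
import numpy as np

WINDOW = 10   # assumed number of consecutive frames per point cloud map

frames = deque(maxlen=WINDOW)   # each entry: one frame's points (map-coordinate XYZ, optionally with intensity)

def update_point_cloud_map(new_frame_points):
    """Append the newest frame; once the window is full the oldest frame drops out,
    so the next map's first frame is the second frame of the previous map's window."""
    frames.append(new_frame_points)
    return np.vstack(frames)
```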
In the embodiment of the invention, after the point cloud map is obtained, a local map is generated according to the obtained point cloud map;
when a local map is generated according to the obtained point cloud map, the point cloud map is processed in a rasterization processing mode to generate a bird's-eye view, and the bird's-eye view is used as the local map, and the method comprises the following steps:
dividing the point cloud map into a plurality of cubes;
specifically, as shown in fig. 2, the point cloud map of the three-dimensional space (X, Y, Z) is divided into a plurality of cubes, and the point cloud map of the three-dimensional space is divided into a plurality of cubes in an equal proportion manner on the (X, Y) plane during the division.
Secondly, carrying out weighted average on intensity values corresponding to all points in the cube aiming at any cube to determine a reflection intensity mean value;
Because each cube contains a plurality of points obtained by laser radar scanning and each point has a reflection intensity value, when the mean reflection intensity is determined from all points in a cube, the intensity values of all points in the cube are summed and averaged to obtain the mean reflection intensity;
specifically, the mean reflection intensity value is obtained by performing weighted average on intensity values corresponding to all points in a cube, for example, there are 10 points in a cube, and the reflection intensities of the 10 points are assumed to be: 10. 23, 14, 15, 13, 25, 10, 9, 8, 10, the sum of the reflection intensities of the 10 points is 137, and the mean value of the reflection intensities is 13.7.
Assigning the points in the grids corresponding to the cubes with the determined intensity average values; and forming a bird's-eye view according to the points assigned in all the grids, and taking the bird's-eye view as a local map.
As shown in fig. 2, when the spatial three-dimensional point cloud map is converted into a two-dimensional bird's-eye view, each cube corresponds to a grid on the (X, Y) plane and each grid corresponds to a pixel point; the point in each grid is assigned the determined mean reflection intensity, the pixel points corresponding to all grids then form a bird's-eye view, and the bird's-eye view is taken as the local map.
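A compact sketch of this rasterization step is given below; the grid cell size and the extent of the bird's-eye view are assumed values (the patent does not specify them), and each point is assumed to carry a reflection intensity.

```python
import numpy as np

def rasterize_to_bev(points_xyz, intensities, cell=0.1,
                     x_range=(-20.0, 20.0), y_range=(-20.0, 20.0)):
    """Project the point cloud map onto the (X, Y) plane and assign each grid
    cell the mean reflection intensity of the points that fall into it."""
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))

    rows = ((points_xyz[:, 0] - x_range[0]) / cell).astype(int)
    cols = ((points_xyz[:, 1] - y_range[0]) / cell).astype(int)
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)

    np.add.at(acc, (rows[valid], cols[valid]), intensities[valid])
    np.add.at(cnt, (rows[valid], cols[valid]), 1)

    bev = np.zeros((h, w))
    nonzero = cnt > 0
    bev[nonzero] = acc[nonzero] / cnt[nonzero]   # mean reflection intensity per grid cell
    return bev                                    # the bird's-eye view used as the local map
```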
It should be noted that the manner of determining the mean value of the reflection intensities of the grids listed in the embodiments of the present invention is only an example, and any manner of determining the reflection intensities of the grids is applicable to the embodiments of the present invention.
After a local map is generated, determining a binary image corresponding to the local map based on a deep neural network model according to the generated local map;
in the embodiment of the invention, the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
the binarized image with the lane line label is generated from the lane line information around the vehicle determined from the high-precision map with the lane line information according to the vehicle position information corresponding to the historical local map, and the resolution ratio of the binarized image with the lane line label is the same as that of the historical local map.
When the historical local map is generated, the vehicle position information is acquired at the same time as the point cloud data; this position information is determined by a positioning element, such as a Global Positioning System (GPS), when the laser radar on the vehicle scans each frame of point cloud data while the vehicle is driving. Further, a local high-precision map with the same resolution as the historical local map is determined in an offline high-precision map according to the vehicle position information determined when the historical local map was generated, wherein the offline high-precision map contains the position information of the lane lines;
when a local high-precision map with the same resolution as the historical local map is determined in the offline high-precision map according to the position information of the vehicle, a part of high-precision map with the same resolution as the historical local map can be intercepted from the high-precision map by taking the position information of the vehicle as the center according to the resolution of the corresponding historical local map;
and converting the intercepted high-precision map into a binary image, wherein, for example, the lane line information is represented by white in the binary image, and other objects (such as houses, trees and the like) are represented by black, and finally obtaining the binary image with lane line labels and the same resolution as the historical local map.
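As one possible illustration of this labelling step (the patent does not prescribe an implementation), the sketch below crops the lane lines of an offline high-precision map around the recorded vehicle position and draws them in white on a black background at the resolution of the historical local map, using OpenCV; the data layout, helper name and cell size are assumptions.

```python
import numpy as np
import cv2

def lane_label_from_hd_map(lane_polylines_world, vehicle_xy, shape_hw, cell=0.1):
    """lane_polylines_world: list of (k, 2) arrays of lane-line points in world
    coordinates taken from the offline high-precision map; vehicle_xy: vehicle
    position recorded when the historical local map was generated."""
    h, w = shape_hw
    label = np.zeros((h, w), dtype=np.uint8)          # black background: other objects
    for line in lane_polylines_world:
        # shift to vehicle-centred coordinates, then convert metres to pixels
        px = (line - np.asarray(vehicle_xy)) / cell + np.array([w / 2.0, h / 2.0])
        px = px.astype(np.int32).reshape(-1, 1, 2)
        cv2.polylines(label, [px], isClosed=False, color=255, thickness=1)   # white lane lines
    return label
```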
It should be noted that, the method for determining the binarized image with the lane line label according to the corresponding position of the historical local map, which is recited in the embodiment of the present invention, is only an example, and any method for determining the binarized image with the lane line label is applicable to the embodiment of the present invention.
In the training process, the historical local map is used as the input and the binarized image with lane line labels, generated by converting the local high-precision map intercepted from the high-precision map according to the vehicle position information, is used as the label to supervise the training of the network and obtain the deep neural network model, where the deep neural network model may be an FCN (Fully Convolutional Network). Specifically, the historical local map is input into the FCN model to obtain a binary image with lane line information corresponding to the historical local map; the obtained binary image is compared with the input binarized image with lane line labels used as the label, and the deep neural network model is adjusted according to the comparison result, so that the deep neural network model converges more quickly.
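The patent only names an FCN-style network; as a hedged illustration, the following is a much-reduced PyTorch sketch of supervising such a model with the historical local map as input and the lane-line label image as target. The architecture, channel sizes, optimizer and loss are assumptions, not the patent's design.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Reduced FCN-style sketch: the encoder downsamples the single-channel local
    map, the decoder upsamples back to the input resolution, and the output is a
    per-pixel lane/background score."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()   # compares the prediction with the lane-line label

def train_step(local_map, lane_label):
    """local_map, lane_label: (B, 1, H, W) tensors; label pixels are 1 on lane lines."""
    optimizer.zero_grad()
    loss = criterion(model(local_map), lane_label)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A per-pixel binary cross-entropy loss is a common choice for this kind of two-class segmentation; the comparison between the network output and the label image is what drives the convergence described above.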
Therefore, the binary image corresponding to the local map and having the same resolution can be obtained according to the local map based on the trained deep neural network model, and the obtained binary image is the binary image with the lane line information.
After the binary image is obtained, when the lane line position is determined from the coordinates of its pixel points, since the obtained binary image carries lane line information, the pixel points representing lane lines and their coordinate positions can be accurately determined. The coordinate values of the lane line pixel points are determined and scaled up by the factor given by the preset ratio, and the position of the lane line on the road surface is obtained after scaling; for example, if the ratio between the pixel value and the real value is 1:0.1, one pixel point represents 0.1 meter.
In the embodiment of the present invention, the lane line in the binarized image with lane line labels corresponding to the local map may also be mapped into the local map, because the local map and the corresponding binarized image have the same resolution, and the binarized image is obtained according to the local map, the position of the lane line in the local map is the same as the position of the lane line in the obtained binarized image, that is, if the local map and the binarized image are placed in an overlapping manner, the position of the lane line in the local map coincides with the position of the lane line in the binarized image;
after obtaining a local map marked with lane line position information, establishing a coordinate system in the local map by taking the central position of a vehicle as an origin, wherein each pixel point represents a coordinate, determining the coordinate value of the pixel point marked with a lane line, amplifying the corresponding coordinate value of the pixel point by a multiple obtained according to a preset proportion according to the ratio between a preset pixel value and a real value, and determining the position of the lane line and the distance between the vehicle and the lane line;
for example, the ratio between the pixel value and the true value is 1:0.1, which indicates that a pixel point is 0.1 meter, and if the corresponding coordinate of the pixel point at the edge position of the lane line is (1, 0), it indicates that the lane line is located at the position 0.1 meter on the right side of the vehicle.
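A short sketch of this coordinate scaling, assuming the vehicle centre is taken as the image origin and the preset ratio is 1 pixel = 0.1 m (the function and variable names are illustrative):

```python
import numpy as np

def lane_positions_from_binary(binary_img, meters_per_pixel=0.1):
    """Non-zero pixels mark lane lines. With the vehicle centre taken as the image
    origin, scale pixel offsets by the preset pixel-to-real ratio to obtain
    lane-line positions relative to the vehicle, in metres."""
    h, w = binary_img.shape
    rows, cols = np.nonzero(binary_img)
    dx = (cols - w / 2.0) * meters_per_pixel   # lateral offset, right of the vehicle positive
    dy = (rows - h / 2.0) * meters_per_pixel   # longitudinal offset
    return np.stack([dx, dy], axis=1)
```

With this convention, a lane-line pixel one column to the right of the vehicle centre corresponds to a point 0.1 meter to the right of the vehicle, as in the example above.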
As shown in fig. 3, a flowchart of an overall method for detecting a lane line according to an embodiment of the present invention specifically includes the following steps:
step 300, constructing a point cloud map according to continuous multi-frame point cloud data;
step 310, rasterizing the point cloud map to obtain a bird's-eye view, and taking the bird's-eye view as a local map;
step 320, determining a binary image corresponding to the local map according to the local map based on the deep neural network model;
and 330, determining the position of the lane line according to the coordinate value corresponding to the pixel point of the lane line in the binary image.
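Chaining the hypothetical helpers sketched earlier gives one possible end-to-end reading of steps 300 to 330; this is an illustrative pipeline under the same assumptions, not the patent's implementation.

```python
import numpy as np
import torch

def detect_lane_lines(frame, imu_set, imu_dt, model, threshold=0.5):
    """frame: (n, 4) array of X, Y, Z, intensity for the newest lidar frame."""
    # Step 300: build the point cloud map from consecutive multi-frame point cloud data
    R, T = relative_pose_from_imu(imu_set, imu_dt)
    xyz_map = to_map_coordinates(frame[:, :3], R, T)
    cloud = update_point_cloud_map(np.hstack([xyz_map, frame[:, 3:4]]))

    # Step 310: rasterize the point cloud map into a bird's-eye view local map
    local_map = rasterize_to_bev(cloud[:, :3], cloud[:, 3])

    # Step 320: binary image with lane line information from the deep neural network model
    with torch.no_grad():
        logits = model(torch.from_numpy(local_map).float()[None, None])
    binary = (torch.sigmoid(logits)[0, 0].numpy() > threshold).astype(np.uint8)

    # Step 330: lane line positions from pixel coordinates and the preset ratio
    return lane_positions_from_binary(binary)
```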
Based on the same inventive concept, an embodiment of the present invention further provides a device for lane line detection. Since this device corresponds to the method for lane line detection in the embodiment of the present invention and solves the problem on a similar principle, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Specifically, as shown in fig. 4, a structure diagram of a lane line detection device provided in an embodiment of the present invention includes: at least one processing unit 400 and at least one storage unit 410, wherein the storage unit 410 stores program code, and when the program code is executed by the processing unit 400, the processing unit 400 is specifically configured to:
determining a binary image corresponding to a local map according to the local map based on a deep neural network model, wherein the local map is a bird's-eye view obtained by rasterizing a point cloud map, and the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
and determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
Optionally, the processing unit 400 constructs the point cloud map by:
determining relative pose information of an Nth frame of point cloud data by using an inertial navigation element, wherein the Nth frame of point cloud data is determined by scanning surrounding objects of a vehicle through a laser radar;
performing coordinate conversion on the N-th frame of point cloud data according to the relative pose information of the N-th frame of point cloud data to obtain point cloud map coordinates corresponding to the N-th frame of point cloud data;
determining the point cloud map according to point cloud map coordinates corresponding to continuous multi-frame point cloud data;
wherein N is a positive integer.
Optionally, the processing unit 400 determines the local map by:
dividing the point cloud map into a plurality of cubes;
aiming at any cube, carrying out weighted average on intensity values corresponding to all points in the cube to determine a reflection intensity mean value;
assigning the points in the grids corresponding to the cubes with the determined intensity average values;
and forming a bird's-eye view according to the points after the values are assigned in all the grids, and taking the bird's-eye view as a local map.
Optionally, the binarized image with lane line labeling is generated according to lane line information around a vehicle in a high-precision map with lane line information, the vehicle in the high-precision map is determined according to vehicle position information corresponding to the historical local map, and the binarized image with lane line labeling has the same resolution as the historical local map.
Optionally, the processing unit 400 is specifically configured to:
and according to the ratio between the preset pixel value and the real value, determining the position of the lane line according to the coordinate value corresponding to the pixel point representing the lane line in the binary image.
As shown in fig. 5, a diagram of an apparatus for lane line detection according to a second embodiment of the present invention includes: a first determination module 500, a second determination module 510;
the first determining module 500 is configured to: determining a binary image corresponding to a local map according to the local map based on a deep neural network model, wherein the local map is a bird's-eye view obtained by rasterizing a point cloud map, and the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
the second determining module 510 is configured to: and determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
Optionally, the apparatus further comprises a construction module, wherein the construction module constructs the point cloud map by:
determining relative pose information of an Nth frame of point cloud data by using an inertial navigation element, wherein the Nth frame of point cloud data is determined by scanning surrounding objects of a vehicle through a laser radar;
performing coordinate conversion on the N-th frame of point cloud data according to the relative pose information of the N-th frame of point cloud data to obtain point cloud map coordinates corresponding to the N-th frame of point cloud data;
determining the point cloud map according to point cloud map coordinates corresponding to continuous multi-frame point cloud data;
wherein N is a positive integer.
Optionally, the first determining module 500 is specifically configured to:
dividing the point cloud map into a plurality of cubes;
aiming at any cube, carrying out weighted average on intensity values corresponding to all points in the cube to determine a reflection intensity mean value;
assigning the points in the grids corresponding to the cubes with the determined intensity average values;
and forming a bird's-eye view according to the points after the values are assigned in all the grids, and taking the bird's-eye view as a local map.
Optionally, the binarized image with lane line labeling is generated according to lane line information around a vehicle in a high-precision map with lane line information, the vehicle in the high-precision map is determined according to vehicle position information corresponding to the historical local map, and the binarized image with lane line labeling has the same resolution as the historical local map.
Optionally, the second determining module 510 is specifically configured to:
and according to the ratio between the preset pixel value and the real value, determining the position of the lane line according to the coordinate value corresponding to the pixel point representing the lane line in the binary image.
Embodiments of the present invention also provide a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method described in fig. 1 above.
The present invention is described above with reference to block diagrams and/or flowchart illustrations of methods, apparatus (systems) and/or computer program products according to embodiments of the invention. It will be understood that one block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the present invention may also be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Furthermore, the invention can take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method of lane line detection, the method comprising:
according to the generation sequence of each frame of point cloud data, dynamically determining a point cloud map according to the point cloud map coordinates of each continuous multi-frame point cloud data, wherein in any two adjacent determined point cloud maps, the first frame of point cloud data in the continuous multi-frame point cloud data used by the next determined point cloud map is the second frame of point cloud data in the continuous multi-frame point cloud data used by the previous determined point cloud map;
taking the aerial view obtained by rasterizing the point cloud map as a local map;
determining a binary image corresponding to the local map according to the local map based on a deep neural network model, wherein the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
and determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
2. The method of claim 1, wherein the point cloud map is constructed by:
determining relative pose information of an Nth frame of point cloud data by using an inertial navigation element, wherein the Nth frame of point cloud data is determined by scanning surrounding objects of a vehicle through a laser radar;
performing coordinate conversion on the N-th frame of point cloud data according to the relative pose information of the N-th frame of point cloud data to obtain point cloud map coordinates corresponding to the N-th frame of point cloud data;
determining the point cloud map according to point cloud map coordinates corresponding to continuous multi-frame point cloud data;
wherein N is a positive integer.
3. The method of claim 1, wherein the local map is determined by:
dividing the point cloud map into a plurality of cubes;
aiming at any cube, carrying out weighted average on intensity values corresponding to all points in the cube to determine a reflection intensity mean value;
assigning the points in the grids corresponding to the cubes with the determined intensity average values;
and forming a bird's-eye view according to the points after the values are assigned in all the grids, and taking the bird's-eye view as a local map.
4. The method as claimed in claim 1, wherein the lane-line labeled binarized image is generated from lane-line information around a vehicle in a high-precision map with lane-line information determined from vehicle position information corresponding to the historical local map, the lane-line labeled binarized image having the same resolution as the historical local map.
5. The method as claimed in claim 1, wherein the determining the lane line position according to the coordinates of the pixel points in the binarized image corresponding to the local map comprises:
and according to the ratio between the preset pixel value and the real value, determining the position of the lane line according to the coordinate value corresponding to the pixel point representing the lane line in the binary image.
6. An apparatus for lane line detection, the apparatus comprising: at least one processing unit and at least one storage unit, wherein the storage unit stores program code, and when the program code is executed by the processing unit, the processing unit is specifically configured to:
according to the generation sequence of each frame of point cloud data, dynamically determining a point cloud map according to the point cloud map coordinates of each continuous multi-frame point cloud data, wherein in any two adjacent determined point cloud maps, the first frame of point cloud data in the continuous multi-frame point cloud data used by the next determined point cloud map is the second frame of point cloud data in the continuous multi-frame point cloud data used by the previous determined point cloud map;
taking the aerial view obtained by rasterizing the point cloud map as a local map;
determining a binary image corresponding to the local map according to the local map based on a deep neural network model, wherein the deep neural network model is obtained by training according to a historical local map and a binary image with lane line labels;
and determining the position of the lane line according to the coordinates of the pixel points in the binary image corresponding to the local map.
7. The apparatus of claim 6, wherein the processing unit constructs the point cloud map by:
determining relative pose information of an Nth frame of point cloud data by using an inertial navigation element, wherein the Nth frame of point cloud data is determined by scanning surrounding objects of a vehicle through a laser radar;
performing coordinate conversion on the N-th frame of point cloud data according to the relative pose information of the N-th frame of point cloud data to obtain point cloud map coordinates corresponding to the N-th frame of point cloud data;
determining the point cloud map according to point cloud map coordinates corresponding to continuous multi-frame point cloud data;
wherein N is a positive integer.
8. The device of claim 7, wherein the processing unit determines the local map by:
dividing the point cloud map into a plurality of cubes;
aiming at any cube, carrying out weighted average on intensity values corresponding to all points in the cube to determine a reflection intensity mean value;
assigning the points in the grids corresponding to the cubes with the determined intensity average values;
and forming a bird's-eye view according to the points after the values are assigned in all the grids, and taking the bird's-eye view as a local map.
9. The apparatus of claim 7, wherein the lane-line labeled binarized image is generated from lane line information surrounding a vehicle in a high-precision map with lane line information determined from vehicle position information corresponding to the historical local map, the lane-line labeled binarized image having the same resolution as the historical local map.
10. The device of claim 6, wherein the processing unit is specifically configured to:
and according to the ratio between the preset pixel value and the real value, determining the position of the lane line according to the coordinate value corresponding to the pixel point representing the lane line in the binary image.
CN201910197037.0A 2019-03-15 2019-03-15 Method and equipment for detecting lane line Active CN110008851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910197037.0A CN110008851B (en) 2019-03-15 2019-03-15 Method and equipment for detecting lane line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910197037.0A CN110008851B (en) 2019-03-15 2019-03-15 Method and equipment for detecting lane line

Publications (2)

Publication Number Publication Date
CN110008851A CN110008851A (en) 2019-07-12
CN110008851B true CN110008851B (en) 2021-11-19

Family

ID=67167155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910197037.0A Active CN110008851B (en) 2019-03-15 2019-03-15 Method and equipment for detecting lane line

Country Status (1)

Country Link
CN (1) CN110008851B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308905B (en) * 2019-07-31 2024-05-10 北京地平线机器人技术研发有限公司 Method and device for determining coordinates of plane marker
CN110595494B (en) * 2019-09-17 2021-06-22 百度在线网络技术(北京)有限公司 Map error determination method and device
CN112926365B (en) * 2019-12-06 2024-07-12 广州汽车集团股份有限公司 Lane line detection method and system
CN111126204B (en) * 2019-12-09 2024-08-02 博泰车联网科技(上海)股份有限公司 Method for detecting drivable area and related equipment
CN111126182B (en) * 2019-12-09 2022-09-20 苏州智加科技有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium
CN111121805A (en) * 2019-12-11 2020-05-08 广州赛特智能科技有限公司 Local positioning correction method, device and medium based on road traffic marking marks
CN111291676B (en) * 2020-02-05 2020-12-11 清华大学 Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN111273305B (en) * 2020-02-18 2022-03-25 中国科学院合肥物质科学研究院 Multi-sensor fusion road extraction and indexing method based on global and local grid maps
CN111680596B (en) * 2020-05-29 2023-10-13 北京百度网讯科技有限公司 Positioning true value verification method, device, equipment and medium based on deep learning
CN111998860B (en) * 2020-08-21 2023-02-17 阿波罗智能技术(北京)有限公司 Automatic driving positioning data verification method and device, electronic equipment and storage medium
CN112215144B (en) * 2020-10-12 2024-05-14 北京四维智联科技有限公司 Method and system for processing lane lines
CN112733812B (en) * 2021-03-01 2023-08-29 知行汽车科技(苏州)股份有限公司 Three-dimensional lane line detection method, device and storage medium
CN113093221A (en) * 2021-03-31 2021-07-09 东软睿驰汽车技术(沈阳)有限公司 Generation method and device of grid-occupied map
CN113469042A (en) * 2021-06-30 2021-10-01 上海商汤临港智能科技有限公司 Truth value data determination, neural network training and driving control method and device
CN116152761B (en) * 2022-12-26 2023-10-17 小米汽车科技有限公司 Lane line detection method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701449A (en) * 2015-12-31 2016-06-22 百度在线网络技术(北京)有限公司 Method and device for detecting lane lines on road surface
CN105930800A (en) * 2016-04-21 2016-09-07 北京智芯原动科技有限公司 Lane line detection method and device
CN106127113A (en) * 2016-06-15 2016-11-16 北京联合大学 A kind of road track line detecting method based on three-dimensional laser radar
CN106199558A (en) * 2016-08-18 2016-12-07 宁波傲视智绘光电科技有限公司 Barrier method for quick
CN106908775A (en) * 2017-03-08 2017-06-30 同济大学 A kind of unmanned vehicle real-time location method based on laser reflection intensity
CN107045629A (en) * 2017-04-19 2017-08-15 南京理工大学 A kind of Multi-lane Lines Detection method
CN108038416A (en) * 2017-11-10 2018-05-15 智车优行科技(北京)有限公司 Method for detecting lane lines and system
CN108550318A (en) * 2018-03-12 2018-09-18 浙江大华技术股份有限公司 A kind of method and device of structure map
CN108764075A (en) * 2018-05-15 2018-11-06 北京主线科技有限公司 The method of container truck location under gantry crane
CN108985230A (en) * 2018-07-17 2018-12-11 深圳市易成自动驾驶技术有限公司 Method for detecting lane lines, device and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
CN106778664B (en) * 2016-12-29 2020-12-15 天津中科智能识别产业技术研究院有限公司 Iris image iris area segmentation method and device
CN108732584B (en) * 2017-04-17 2020-06-30 百度在线网络技术(北京)有限公司 Method and device for updating map

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701449A (en) * 2015-12-31 2016-06-22 百度在线网络技术(北京)有限公司 Method and device for detecting lane lines on road surface
CN105930800A (en) * 2016-04-21 2016-09-07 北京智芯原动科技有限公司 Lane line detection method and device
CN106127113A (en) * 2016-06-15 2016-11-16 北京联合大学 A kind of road track line detecting method based on three-dimensional laser radar
CN106199558A (en) * 2016-08-18 2016-12-07 宁波傲视智绘光电科技有限公司 Barrier method for quick
CN106908775A (en) * 2017-03-08 2017-06-30 同济大学 A kind of unmanned vehicle real-time location method based on laser reflection intensity
CN107045629A (en) * 2017-04-19 2017-08-15 南京理工大学 A kind of Multi-lane Lines Detection method
CN108038416A (en) * 2017-11-10 2018-05-15 智车优行科技(北京)有限公司 Method for detecting lane lines and system
CN108550318A (en) * 2018-03-12 2018-09-18 浙江大华技术股份有限公司 A kind of method and device of structure map
CN108764075A (en) * 2018-05-15 2018-11-06 北京主线科技有限公司 The method of container truck location under gantry crane
CN108985230A (en) * 2018-07-17 2018-12-11 深圳市易成自动驾驶技术有限公司 Method for detecting lane lines, device and computer readable storage medium

Also Published As

Publication number Publication date
CN110008851A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN110008851B (en) Method and equipment for detecting lane line
CN109828592B (en) A kind of method and apparatus of detection of obstacles
CN109766878B (en) A kind of method and apparatus of lane detection
CN109710724B (en) A kind of method and apparatus of building point cloud map
CN109740604B (en) A kind of method and apparatus of running region detection
CN110832279B (en) Alignment of data captured by autonomous vehicles to generate high definition maps
EP3520076B1 (en) Computer vision systems and methods for detecting and modeling features of structures in images
CN109214248B (en) Method and device for identifying laser point cloud data of unmanned vehicle
JP7204326B2 (en) Information processing device, its control method and program, and vehicle driving support system
CN109285188B (en) Method and apparatus for generating position information of target object
CN108401461A (en) Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
US11231283B2 (en) Localization with neural network based image registration of sensor data and map data
CN111339876B (en) Method and device for identifying types of areas in scene
Jeong et al. Hdmi-loc: Exploiting high definition map image for precise localization via bitwise particle filter
Matthies et al. Lunar rover localization using craters as landmarks
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
EP4194807A1 (en) High-precision map construction method and apparatus, electronic device, and storage medium
CN116295457B (en) Vehicle vision positioning method and system based on two-dimensional semantic map
CN114127738A (en) Automatic mapping and positioning
US20200380085A1 (en) Simulations with Realistic Sensor-Fusion Detection Estimates of Objects
Lacroix et al. Digital elevation map building from low altitude stereo imagery
CN116386003A (en) Three-dimensional target detection method based on knowledge distillation
CN115496873A (en) Monocular vision-based large-scene lane mapping method and electronic equipment
KR20230112296A (en) Implementation of a Mobile Target Search System with 3D SLAM and Object Localization in Indoor Environments
KR102618951B1 (en) Method for visual mapping, and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240508

Address after: Room 6227, No. 999, Changning District, Shanghai 200050

Patentee after: Shenlan robot (Shanghai) Co.,Ltd.

Country or region after: China

Address before: Unit 1001, 369 Weining Road, Changning District, Shanghai, 200336 (9th floor of actual floor)

Patentee before: DEEPBLUE TECHNOLOGY (SHANGHAI) Co.,Ltd.

Country or region before: China