CN110163047B - Method and device for detecting lane line
- Publication number
- CN110163047B (granted publication of application CN201810732366.6A / CN201810732366A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- image
- vehicle
- frames
- position information
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30256—Lane; Road marking
- Y02T10/40—Engine management systems
Abstract
Embodiments of the invention provide a method and device for detecting lane lines, relating to the technical field of road detection. The method comprises: obtaining N frames of point cloud images determined by a vehicle-mounted laser device through emitting laser signals; stitching the N frames of point cloud images into a target point cloud image; determining the point cloud intensity image corresponding to the target point cloud image; and determining a lane line from that point cloud intensity image. Because laser signals are little affected by illumination, acquiring the point cloud images with the vehicle-mounted laser device improves the real-time performance and stability of lane line detection. Because the stitched point cloud image contains more information, detecting the lane line from it improves detection precision. Detecting the lane line from the point cloud intensity image avoids analyzing the point cloud image directly in 3D space, which improves detection efficiency.
Description
Technical Field
Embodiments of the present invention relate to the technical field of road detection, and in particular to a method and device for detecting lane lines.
Background
A central technical link in automatic driving is vehicle positioning: only once the relative position of the vehicle on the road is known can its position in the environment and on the map be determined and subsequent steps performed, for example calculating the positions of other vehicles and objects relative to the ego vehicle. Currently, the vehicle can be positioned by detecting lane lines. Existing approaches capture images with a front or rear camera of the vehicle and then process the captured images to determine the lane lines. Because a camera is a passive-light-source sensor, the captured images are blurred under poor illumination conditions such as at night, and the accuracy of lane line detection is low. Beyond automatic driving, road detection techniques can also be used in other areas.
Disclosure of Invention
Because the accuracy of the prior-art method of detecting lane lines from camera images of the road is strongly affected by illumination, embodiments of the present invention provide a method and device for detecting lane lines that improve detection accuracy.
In a first aspect, an embodiment of the present invention provides a method for detecting a lane line. The method comprises: obtaining N frames of point cloud images determined by a vehicle-mounted laser device through emitting laser signals, where N is a positive integer; stitching the N frames of point cloud images into a target point cloud image; determining the point cloud intensity image corresponding to the target point cloud image; and determining a lane line from that point cloud intensity image. Because the vehicle-mounted laser device is little disturbed by the outside world while acquiring point cloud images by emitting laser signals, and adapts well to factors such as light source and weather, the detection precision of the lane line determined from those images is only weakly affected by light source and weather, which improves the real-time performance and stability of lane line detection. Second, the stitched point cloud image contains more, and more pronounced, lane information, so stitching the multiple frames first and detecting the lane line from the stitched image improves detection accuracy. In addition, converting the stitched point cloud image into a point cloud intensity image avoids analyzing the point cloud image directly in 3D space, which on the one hand simplifies the detection process and on the other improves detection efficiency.
Optionally, stitching the N frames of point cloud images into a target point cloud image includes:
determining relative position information between the N frames of point cloud images according to the position information of the vehicle-mounted laser device when it emits the laser signals;
and sequentially stitching the N frames of point cloud images according to that relative position information to determine the target point cloud image. Because the vehicle-mounted laser device occupies a different position for each frame, the point cloud images it determines by emitting laser signals also differ; determining the relative position information between the N frames from the device's position at each emission, and then stitching accordingly, improves the accuracy of the target point cloud image.
Optionally, determining the relative position information between the N frames of point cloud images according to the position information of the vehicle-mounted laser device when emitting the laser signals includes:
for any two frames of point cloud images, determining, through the vehicle-mounted inertial navigation device, first position information of the vehicle-mounted laser device when it emits the first laser signal corresponding to one of the frames;
determining, through the vehicle-mounted inertial navigation device, second position information of the vehicle-mounted laser device when it emits the second laser signal corresponding to the other frame;
determining, from the first position information and the second position information, the relative position information of the vehicle-mounted laser device between emitting the first laser signal and emitting the second laser signal;
and taking that relative position information as the relative position information of the two frames of point cloud images.
Because the vehicle-mounted laser device emits each laser signal from a different position, the point cloud images determined from those signals differ as well, and the relative position of the device determines the relative position between the point cloud images. The vehicle-mounted inertial navigation device can determine the position of the laser device in real time, so it records the device's position at each emission; the relative positions between emissions are then derived and used as the relative position information between the point cloud images, which makes the subsequent stitching convenient.
Optionally, sequentially stitching the N frames of point cloud images according to the relative position information between the N frames of point cloud images to determine the target point cloud image includes:
unifying the N frames of point cloud images in the same coordinate system according to the relative position information among the N frames of point cloud images;
and splicing the N frames of point cloud images after unifying the coordinate system to determine the target point cloud image.
Optionally, stitching the coordinate-unified N frames of point cloud images to determine the target point cloud image includes:
stitching the first frame point cloud image and the second frame point cloud image to determine a first sub-stitched point cloud image;
and sequentially stitching the M-th sub-stitched point cloud image with the (M+2)-th frame point cloud image until the (M+2)-th frame is the N-th frame, determining the target point cloud image, where M is an integer greater than or equal to 1 and N is an integer greater than or equal to 3.
When the N frames are stitched in this way, the first frame serves as the reference and the other frames are merged into it one by one, so only the relative position information between the first frame and each of the other N-1 frames needs to be determined before stitching, which simplifies the image stitching process.
Optionally, stitching the coordinate-unified N frames of point cloud images to determine the target point cloud image includes:
a first stitching step: stitching the M-th frame point cloud image with the (M+1)-th frame point cloud image to determine N/2 frames of first-stage stitched point cloud images, where 1 ≤ M < N and M is odd;
a second stitching step: stitching the L-th frame first-stage stitched point cloud image with the (L+1)-th frame first-stage stitched point cloud image to determine N/4 frames of second-stage stitched point cloud images, where 1 ≤ L < N/2, L is odd, and N is an even number greater than or equal to 4;
and executing the first stitching step and the second stitching step in turn until a single stitched point cloud image remains, forming the target point cloud image.
Because two adjacent frames are more similar than non-adjacent ones, the stitching error between adjacent frames is smaller. Stitching adjacent frames pairwise according to their relative position information to obtain several sub-stitched point cloud images, and then stitching adjacent sub-stitched images until the target point cloud image is determined, reduces the stitching error over the N frames and improves the accuracy of the target point cloud image.
Optionally, determining a lane line from the point cloud intensity image corresponding to the target point cloud image includes:
segmenting the point cloud intensity image according to its pixel gray values to determine the lane line. A lane line is generally white or yellow while the road surface is black or gray, so in a point cloud intensity image obtained by emitting laser signals from the vehicle-mounted laser device, the reflection intensity of the lane line differs clearly from that of the road surface, and so do the corresponding pixel gray values. Segmenting the intensity image by pixel gray value therefore detects the lane line effectively and improves detection precision.
In a second aspect, an embodiment of the present invention provides an apparatus for detecting a lane line, including:
the acquisition module is used for acquiring N frames of point cloud images determined by vehicle-mounted laser equipment through transmitting laser signals, wherein N is a positive integer;
the splicing module is used for splicing the N frames of point cloud images into a target point cloud image;
the processing module is used for determining a point cloud intensity image corresponding to the target point cloud image;
and the detection module is used for determining a lane line from the point cloud intensity image corresponding to the target point cloud image.
Optionally, the splicing module is specifically configured to:
determine relative position information between the N frames of point cloud images according to the position information when the vehicle-mounted laser device emits the laser signals;
and sequentially stitch the N frames of point cloud images according to that relative position information to determine the target point cloud image.
Optionally, the splicing module is specifically configured to:
for any two frames of point cloud images, determine, through the vehicle-mounted inertial navigation device, first position information of the vehicle-mounted laser device when it emits the first laser signal corresponding to one of the frames;
determine, through the vehicle-mounted inertial navigation device, second position information of the vehicle-mounted laser device when it emits the second laser signal corresponding to the other frame;
determine, from the first position information and the second position information, the relative position information of the vehicle-mounted laser device between emitting the first laser signal and emitting the second laser signal;
and take that relative position information as the relative position information of the two frames of point cloud images.
Optionally, the splicing module is specifically configured to:
unify the N frames of point cloud images into the same coordinate system according to the relative position information between them;
and stitch the coordinate-unified N frames of point cloud images to determine the target point cloud image.
Optionally, the splicing module is specifically configured to:
stitch the first frame point cloud image and the second frame point cloud image to determine a first sub-stitched point cloud image;
and sequentially stitch the M-th sub-stitched point cloud image with the (M+2)-th frame point cloud image until the (M+2)-th frame is the N-th frame, determining the target point cloud image, where M is an integer greater than or equal to 1 and N is an integer greater than or equal to 3.
Optionally, the splicing module is specifically configured to:
a first stitching step: stitch the M-th frame point cloud image with the (M+1)-th frame point cloud image to determine N/2 frames of first-stage stitched point cloud images, where 1 ≤ M < N and M is odd;
a second stitching step: stitch the L-th frame first-stage stitched point cloud image with the (L+1)-th frame first-stage stitched point cloud image to determine N/4 frames of second-stage stitched point cloud images, where 1 ≤ L < N/2 and L is odd;
and execute the first stitching step and the second stitching step in turn until a single stitched point cloud image remains, forming the target point cloud image, where N is an even number greater than or equal to 4.
Optionally, the detection module is specifically configured to:
segment the point cloud intensity image according to its pixel gray values to determine the lane line.
In a third aspect, an embodiment of the present invention provides an automatic driving control method, comprising detecting a lane line using any one of the methods above and controlling the driving route according to the detected lane line.
In a fourth aspect, an embodiment of the present invention provides an automatic driving control system, which includes at least one processing unit and at least one storage unit, where the storage unit stores a computer program, and when the program is executed by the processing unit, the processing unit is caused to execute the steps of any one of the methods described above.
In a fifth aspect, the present invention provides a computer-readable medium, which stores a computer program, and when the program runs, executes the steps of any one of the above methods.
In the embodiments of the present invention, the vehicle-mounted laser device adapts well to factors such as light source and weather while acquiring point cloud images by emitting laser signals, so the detection precision of the lane line determined from those images is only weakly affected by light source and weather, improving the real-time performance and stability of lane line detection. Second, the stitched point cloud image contains more, and more pronounced, lane information, so stitching the multiple frames first and detecting the lane line from the stitched image improves detection accuracy. When stitching, because the vehicle-mounted laser device occupies a different position for each frame, the point cloud images it determines by emitting laser signals also differ; determining the relative position information between the N frames from the device's position at each emission, and stitching the N frames accordingly, improves the accuracy of the target point cloud image. In addition, converting the stitched point cloud image into a point cloud intensity image avoids analyzing the point cloud image directly in 3D space, which on the one hand simplifies the detection process and on the other improves detection efficiency.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is an application scenario diagram according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an apparatus for detecting lane lines according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for detecting lane lines according to an embodiment of the present invention;
fig. 4 is a schematic view of a lane line according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a process for determining relative position information of a point cloud image according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a position coordinate system provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of an angular coordinate system according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of point cloud image stitching according to an embodiment of the present invention;
fig. 9 is a schematic diagram of point cloud image stitching according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a point cloud image stitching method according to an embodiment of the present invention;
FIG. 11a is a schematic diagram of a point cloud intensity image according to an embodiment of the present invention;
FIG. 11b is a schematic diagram of a point cloud intensity image according to an embodiment of the present invention;
FIG. 12 is a schematic view of a lane line according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an apparatus for detecting lane lines according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a driving control system according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
For convenience of understanding, terms referred to in the embodiments of the present invention are explained below.
Laser radar (lidar): a radar system that detects characteristic quantities of a target, such as position and velocity, by emitting a laser beam. By transmitting a detection signal (the laser beam) toward the target and comparing the received echo signal reflected from the target with the transmitted signal, after appropriate processing, information about the target such as distance, orientation, height, speed, attitude and even shape can be obtained, enabling the target to be detected, tracked and identified.
Laser point cloud: the set of points sampled on a product's surface by a measuring instrument is called a point cloud. When a laser beam strikes the surface of an object, the reflected laser carries information such as direction and distance. When the beam is scanned along a certain trajectory, the reflected laser point information is recorded during scanning; because the scanning is extremely fine, a large number of laser points are obtained, forming a laser point cloud.
Stitching of laser point cloud images: at any specific moment, the point cloud image gathered by the lidar is an instantaneous snapshot. A positioning device, for example a vehicle-mounted inertial navigation unit, is usually fitted in the vehicle or in the vehicle-mounted laser device; using the position information it outputs, point cloud images from adjacent moments can be stitched together, yielding point cloud images covering different moments. The point cloud image stitched from multiple frames is richer and its features more pronounced, providing strong data support for environment perception.
Point cloud intensity information: the reflection intensity collected after the vehicle-mounted laser device emits laser signals. It is related to the surface material, roughness and incidence-angle direction of the target, as well as to the emission energy and wavelength of the laser device.
In practice, the inventors found that an unmanned or driver-assisted vehicle can locate itself from the position of the lane lines while driving. Existing methods mainly capture images with a front or rear camera of the vehicle and then detect the lane line in the captured images. However, under poor illumination conditions such as at night, the images collected by the camera are very blurred and sometimes the lane line cannot be identified in them at all, which affects both the accuracy and the real-time performance of lane line detection.
The inventors therefore considered that the laser signal emitted by a lidar suffers relatively little outside interference and adapts well to factors such as light source and weather. A lidar obtains information about a target by emitting a laser signal toward it and then comparing the received echo reflected back from the target with the emitted signal. In the embodiments of the present invention, the vehicle-mounted laser device emits laser signals to determine point cloud images, and the lane line is then detected from those images, improving the real-time performance of detection. Second, each point in the cloud obtained by the vehicle-mounted laser device carries at least three-dimensional coordinates and a reflection intensity, and one frame scanned by a device such as a Velodyne 64-line lidar contains more than 100,000 points, a huge data volume. Directly processing the raw point cloud data determined by the device can hardly meet real-time requirements. The inventors therefore considered that the lane line is generally white or yellow and the road surface black or gray, so the lane line and the road surface reflect the laser signal with different intensities and remain clearly distinguishable in the point cloud intensity image corresponding to the point cloud image. Converting the point cloud image directly into a point cloud intensity image and detecting the lane line from that image reduces the amount of data to be processed and improves the real-time performance of detection. In the intensity image, the pixel gray value of the lane line differs markedly from that of the road surface; in the embodiments of the present invention the image is therefore segmented by pixel gray value to detect the lane line, improving detection accuracy. In addition, the three-dimensional coordinates of the points correspond to their reflection intensities, so after the lane line is detected in the intensity image, the three-dimensional coordinates of the corresponding points can be recovered from that correspondence and the position of the lane line determined from them.
Furthermore, the inventors found that a single-frame point cloud intensity image contains little information and covers a limited detection distance: it cannot reveal whether the currently detected lane line is a solid or a dashed line, and the boundary between the lane line and the ground in a single frame is not very distinct, which affects detection accuracy. In the embodiments of the present invention, the vehicle-mounted laser device therefore emits laser signals to determine multiple point cloud frames, the frames are stitched into a target point cloud image, and the lane line is detected from the point cloud intensity image corresponding to that target image. Because the stitched target image contains more information and reaches farther, whether the lane line is solid or dashed can be judged reliably, and the precision of segmenting the intensity image improves, further improving the precision of lane line detection.
The method for detecting lane lines in the embodiment of the present invention may be applied to an application scenario as shown in fig. 1, where the application scenario includes a vehicle 101 and a vehicle-mounted laser device 102.
The vehicle 101 may be an unmanned vehicle or a driver-assisted smart vehicle. It contains an automatic driving control system that senses the environment around the vehicle with on-board sensors and controls the steering and speed of the vehicle based on the road, vehicle position and obstacle information obtained, so that the vehicle runs safely and reliably on the road. In a driver-assisted smart vehicle, the automatic driving control system senses the surroundings with the on-board sensors and, when a collision is unavoidable according to the sensed road, vehicle position and obstacle information, takes measures such as emergency braking to protect the driver.
The vehicle-mounted laser device 102 is a laser radar mounted on the vehicle, a mobile three-dimensional laser scanning system. It may be located on the top, nose or tail of the vehicle 101. The device consists of a laser transmitter, an optical receiver, a turntable and an information processing system, among other parts: the laser transmitter converts an electrical signal into a laser signal and emits it, and the optical receiver converts the laser signal reflected from the target back into an electrical signal and passes it to the information processing system. While the laser transmitter emits laser signals, the turntable rotates to realize laser scanning. The information processing system compares the reflected laser signal received by the optical receiver with the signal emitted by the laser transmitter and, after appropriate processing, obtains a point cloud image, which includes the three-dimensional coordinates and reflection intensities of the points.
The vehicle 101 further includes positioning devices such as a Global Positioning System (GPS) receiver and a vehicle-mounted inertial navigation device, which may be located inside the vehicle 101 or in the vehicle-mounted laser device 102. An inertial navigation device is an autonomous navigation system that neither depends on external information nor radiates energy outward; its working environment includes the air, the ground and even underwater. Its basic working principle is Newton's laws of mechanics: by measuring the carrier's acceleration in an inertial reference frame, integrating it over time and transforming the result into the navigation coordinate system, information such as speed, yaw angle and position in the navigation coordinate system is obtained.
Further, in the application scenario shown in fig. 1, the automatic driving control system in the vehicle 101 includes a device for detecting lane lines, shown schematically in fig. 2, comprising an obtaining module 1011, a stitching module 1012, a processing module 1013 and a detection module 1014. The obtaining module 1011 obtains N frames of point cloud images from the vehicle-mounted laser device 102 and determines, through the positioning device, the position of the laser device when it emits the laser signals corresponding to those frames. The stitching module 1012 determines the relative position information between the N frames from those positions, stitches the N frames into a target point cloud image accordingly, and passes the target point cloud image to the processing module 1013, which converts it into a point cloud intensity image. The detection module 1014 segments the point cloud intensity image according to its pixel gray values to determine the lane line.
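The cooperation of the four modules can be summarized with a hedged Python sketch. The names laser_device, inertial_nav and the helper callables are hypothetical stand-ins for the modules described above, not an API defined by the patent; the helpers are passed in as parameters so the sketch stays self-contained.

```python
def detect_lane_lines(laser_device, inertial_nav, n_frames,
                      stitch_frames, to_intensity_image, segment_lane_lines):
    # Obtaining module 1011: collect N point cloud frames and the pose
    # of the laser device at each emission time (from the positioning
    # device, e.g. the inertial navigation unit).
    frames = [laser_device.next_frame() for _ in range(n_frames)]
    poses = [inertial_nav.pose_at(frame.timestamp) for frame in frames]

    # Stitching module 1012: merge the N frames into one target point
    # cloud image using their relative poses.
    target_cloud = stitch_frames(frames, poses)

    # Processing module 1013: convert the target point cloud image
    # into a 2-D point cloud intensity image.
    intensity_image = to_intensity_image(target_cloud)

    # Detection module 1014: segment the intensity image by pixel
    # gray value to obtain the lane line.
    return segment_lane_lines(intensity_image)
```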
Based on the application scenario diagram shown in fig. 1 and the schematic structural diagram of the apparatus for detecting lane lines shown in fig. 2, an embodiment of the present invention provides a flow of a method for detecting lane lines, where the flow of the method may be executed by the apparatus for detecting lane lines, as shown in fig. 3, and includes the following steps:
step S301, obtaining N frames of point cloud images determined by vehicle-mounted laser equipment through transmitting laser signals, wherein N is a positive integer.
Lane markings indicate, at the entrance section of an intersection, the direction in which vehicles should travel. Such marking lines are generally painted at traffic intersections with heavy traffic flow, to clarify driving directions, keep vehicles in their own lanes and relieve traffic pressure. Lane lines include white dashed lines, white solid lines, guidance indication lines, deceleration indication lines and so on; as an example, the lane lines shown in fig. 4 are white solid lines and guidance indication lines.
The vehicle-mounted laser device emits laser signals by scanning, with the emission interval set according to the actual situation, for example every 0.1 s or 0.2 s. After emitting a laser signal, the device receives the echo returned by the road surface and determines the point cloud image from that echo. When detecting the lane line, either N consecutively determined frames or N non-consecutive frames may be obtained from the device. For example, suppose the device emits a laser signal every 0.1 s and determines 10 frames within 1 s. With N set to 5, the first five frames determined by the device, i.e. the first to the fifth frame, may be obtained; alternatively, every other frame may be selected from the 10 frames, i.e. the first, third, fifth, seventh and ninth frames. Other ways of selecting point cloud frames are also possible and are not detailed here.
A point cloud image consists of many points, each carrying information such as three-dimensional coordinates and reflection intensity.
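As an illustration of this data layout, a frame can be held as an n x 4 array, one row per point; the coordinate and intensity values below are invented for the example, and the every-other-frame selection from the 0.1 s example above reduces to a simple stride.

```python
import numpy as np

# One frame as an (n_points, 4) array: x, y, z coordinates plus the
# reflection intensity of each return (values invented for illustration).
frame = np.array([
    [12.30, -1.70, 0.02, 0.85],   # strong return, e.g. lane paint
    [11.90, -2.40, 0.01, 0.22],   # weak return, e.g. asphalt
])
xyz, intensity = frame[:, :3], frame[:, 3]

# Picking every other frame out of the 10 frames in the example above
# selects the 1st, 3rd, 5th, 7th and 9th frames:
selected_indices = list(range(0, 10, 2))   # 0-based: [0, 2, 4, 6, 8]
```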
Step S302, splicing the N frames of point cloud images into a target point cloud image.
Specifically, the relative position information between the N frames of point cloud images can be determined from the position of the vehicle-mounted laser device when it emits the laser signals corresponding to the frames, or alternatively from its position when it receives the corresponding echo signals. The N frames are then stitched sequentially according to that relative position information to determine the target point cloud image.
The process of determining the relative position information between the N frames is described below. For any two of the N frames, one frame is set as the reference point cloud image and the other is the point cloud image to be stitched. As shown in fig. 5, the process includes the following steps:
step S501, the vehicle-mounted laser device emits a first laser signal corresponding to the reference point cloud image.
Step S501, the vehicle-mounted inertial navigation device determines first position information when the vehicle-mounted laser device transmits the first laser information.
And S503, the vehicle-mounted laser equipment emits a second laser signal corresponding to the point cloud image to be spliced.
Step S504, the vehicle-mounted inertial navigation device determines second position information when the vehicle-mounted laser device emits the second laser information.
The position information comprises position coordinates and angle coordinates. The vehicle-mounted inertial navigation device determines the vehicle's position coordinates by detecting its acceleration while driving, integrating the acceleration to determine the speed, and integrating the speed again. Since the laser device is fixed to the vehicle, the vehicle's position coordinates can serve as the laser device's position coordinates. For example, as shown in fig. 6, a navigation coordinate system is established with the longitudinal direction of the road surface as the X axis, the transverse direction as the Y axis and the direction perpendicular to the road surface as the Z axis. The origin is set according to the actual situation: it may be the position coordinate of the laser device when it emits the laser signal corresponding to the reference point cloud image (the first position coordinate), the position coordinate when it emits the signal corresponding to the image to be stitched (the second position coordinate), or some other coordinate. For example, the vehicle starts from the first position coordinate (0, 0, 0) and reaches the second position 0.1 s later, where the laser device emits a laser signal. The inertial navigation device acquires the vehicle's acceleration along the X axis during that 0.1 s, integrates it twice and obtains an X coordinate of 1; likewise it obtains a Y coordinate of 1. Since the acceleration along the Z axis is 0, the Z coordinate is 0, and the second position coordinate (1, 1, 0) follows from the coordinates along the three axes.
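The double integration can be sketched with a simple Euler scheme; the sampling rate and the acceleration values below are invented so that the result approximately reproduces the (1, 1, 0) example, up to discretization error.

```python
import numpy as np

def integrate_position(accel: np.ndarray, dt: float,
                       v0: np.ndarray, p0: np.ndarray) -> np.ndarray:
    """Dead-reckon position from sampled acceleration: integrating
    acceleration once gives velocity, integrating velocity once more
    gives position (simple Euler scheme).

    accel: (n, 3) accelerations in the navigation frame, sampled every
    dt seconds; v0, p0: initial velocity and position.
    """
    velocity = v0 + np.cumsum(accel, axis=0) * dt
    position = p0 + np.cumsum(velocity, axis=0) * dt
    return position[-1]

# Constant acceleration along X and Y for 0.1 s chosen so that
# 0.5 * a * t**2 = 1 m per axis, i.e. roughly (1, 1, 0):
accel = np.tile([200.0, 200.0, 0.0], (100, 1))
print(integrate_position(accel, 0.001, np.zeros(3), np.zeros(3)))
```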
The angle coordinates are the angles through which the vehicle-mounted laser device has rotated about the X, Y and Z axes relative to a reference angle. Since the laser device is fixed to the vehicle, the vehicle's angle coordinates can serve as the laser device's. Before the device's angle coordinates can be determined, a reference angle coordinate must be set: it may be the angle coordinate of the laser device when it emits the laser signal corresponding to the reference point cloud image (the first angle coordinate), the angle coordinate when it emits the signal corresponding to the image to be stitched (the second angle coordinate), or some other angle coordinate. For example, as shown in fig. 7, the reference is set to the first angle coordinate (0°, 0°, 0°); 0.1 s later the device emits the laser signal corresponding to the image to be stitched, by which time the vehicle has rotated 30 degrees clockwise about the Z axis relative to the first angle coordinate, so the second angle coordinate is a 30-degree clockwise rotation about the Z axis, i.e. (0°, 0°, 30°).
It should be noted that the position information of the vehicle-mounted laser device may be determined not only with the vehicle-mounted inertial navigation device but also with the vehicle-mounted GPS.
Step S505: determining, from the first position information and the second position information, the relative position information of the vehicle-mounted laser device between emitting the first laser signal and the second laser signal.
The relative position information comprises a position offset and an angle offset. For example, if the first position coordinate is (0, 0, 0) and the second is (1, 1, 0), the position offset between them is 1 along the X axis and 1 along the Y axis. As another example, the first angle coordinate is (0°, 0°, 0°), i.e. zero rotation about the X, Y and Z axes, and the second angle coordinate is a 30-degree clockwise rotation about the Z axis, i.e. (0°, 0°, 30°); the angle offset between them is then a 30-degree clockwise rotation about the Z axis.
Step S506: taking the relative position information of the vehicle-mounted laser device between emitting the first and second laser signals as the relative position information of the reference point cloud image and the point cloud image to be stitched.
Optionally, after the relative position information between the N frames of point cloud images is determined, the N frames are unified into the same coordinate system according to that information, and the coordinate-unified frames are then stitched to determine the target point cloud image.
Unifying N frames into the same coordinate system is illustrated as follows. With the first position coordinate (0, 0, 0) and the second (1, 1, 0), the position offset between them is 1 along the X axis and 1 along the Y axis, so the point cloud image to be stitched is offset by 1 along each of those axes relative to the reference image. To convert the position coordinates of the points in the image to be stitched into the coordinate system of the reference image, 1 is added to the X coordinate and 1 to the Y coordinate of every point. For example, if point A in the image to be stitched has position coordinates (0, 1, 0), its coordinates in the reference image's system become (1, 2, 0).
Likewise, suppose the angle offset between the first and second angle coordinates is a 30-degree clockwise rotation about the Z axis; then the image to be stitched is rotated 30 degrees clockwise about the Z axis relative to the reference image. To convert the angle coordinates of the points in the image to be stitched into the reference image's coordinate system, each point's original angle coordinate is rotated 30 degrees clockwise about the Z axis. For example, if point A has angle coordinate (0°, 0°, 30°), after conversion it becomes (0°, 0°, 60°).
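Both conversions amount to one rigid transform per frame. A minimal sketch follows, assuming the rotation is a pure yaw about the Z axis; the sign of the angle depends on the chosen convention, and the function name is illustrative.

```python
import numpy as np

def to_reference_frame(points: np.ndarray, translation, yaw_deg: float) -> np.ndarray:
    """Express the points of a frame to be stitched in the reference
    frame's coordinate system: rotate about the Z axis, then translate.
    """
    yaw = np.radians(yaw_deg)
    rot_z = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                      [np.sin(yaw),  np.cos(yaw), 0.0],
                      [0.0,          0.0,         1.0]])
    return points @ rot_z.T + np.asarray(translation)

# The worked example above: offset (1, 1, 0) and no rotation, so point
# A at (0, 1, 0) lands at (1, 2, 0) in the reference system.
print(to_reference_frame(np.array([[0.0, 1.0, 0.0]]), [1.0, 1.0, 0.0], 0.0))
```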
When the vehicle-mounted laser device determines a point cloud image by emitting laser signals, the position of every point in the image is expressed relative to the position from which the laser was emitted. Whenever the device's position changes, the image's coordinate system changes with it, so different frames do not share a coordinate system and cannot be stitched directly. In the embodiments of the invention, the relative position of the device between emitting the first and second laser signals is determined from the first and second position information and taken as the relative position between the reference image and the image to be stitched; the two images are unified into one coordinate system and then stitched, which improves the accuracy of the stitched target point cloud image.
Stitching the coordinate-unified N frames of point cloud images into the target point cloud image can proceed in at least the following ways.
In one possible stitching mode, the first frame point cloud image and the second frame point cloud image are stitched to determine a first sub-stitched point cloud image; the M-th sub-stitched image is then stitched with the (M+2)-th frame in turn, until the (M+2)-th frame is the N-th frame and the target point cloud image is determined, where M is an integer greater than or equal to 1 and N is an integer greater than or equal to 3.
For example, as shown in fig. 8, with N set to 4, the first and second frames are stitched into the first sub-stitched image; that is stitched with the third frame into the second sub-stitched image, which is stitched with the fourth frame into the target point cloud image. Stitching this way takes the first frame as the reference and merges the other frames into it in turn, so only the relative position information between the first frame and each of the other N-1 frames needs to be determined, which simplifies the stitching process.
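The fig. 8 pattern is a left fold over the frames. In this sketch, merge is a placeholder for "unify coordinates, then combine the points"; with numpy point arrays it could be as simple as transforming the new frame into the mosaic's coordinate system and stacking the rows.

```python
def stitch_sequential(frames, merge):
    """fig. 8 pattern: keep the first frame as the reference and fold
    each later frame into the growing mosaic in turn."""
    mosaic = frames[0]
    for frame in frames[1:]:
        mosaic = merge(mosaic, frame)
    return mosaic

# Demonstrating the merge order with strings:
print(stitch_sequential(["f1", "f2", "f3", "f4"],
                        lambda a, b: f"({a}+{b})"))   # (((f1+f2)+f3)+f4)
```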
In one possible splicing manner, the first splicing step: and splicing the M frame point cloud image and the M +1 frame point cloud image to determine N/2 frames of first-stage spliced point cloud images, wherein M is more than or equal to 1 and less than N, M is an odd number, and N is an even number more than or equal to 2. A second splicing step: and splicing the L-th frame of first-stage spliced point cloud image with the L + 1-th frame of first-stage spliced point cloud image, determining N/4 frames of second-stage spliced point cloud images, wherein L is more than or equal to 1 and less than N/2, L is an odd number, sequentially executing the first splicing step and the second splicing step until the determined spliced point cloud image is one frame to form the target point cloud image, and N is an even number more than or equal to 2.
Exemplarily, as shown in fig. 9, setting N to 4, and stitching the first frame point cloud image and the second frame point cloud image to determine a first-stage stitched point cloud image 1. And splicing the third frame of point cloud image and the fourth frame of point cloud image to determine a first-stage spliced point cloud image 2. And splicing the first-stage spliced point cloud image 1 and the first-stage spliced point cloud image 2 to determine a target point cloud image.
In a possible splicing mode, sequentially splicing the M frame point cloud image and the M +1 frame point cloud image, and determining (N-1)/2 first-stage spliced point cloud images, wherein M is more than or equal to 1 and less than N, M is an odd number, and N is an odd number more than or equal to 3. And sequentially splicing the L-th frame of first-stage spliced point cloud image with the L + 1-th frame of first-stage spliced point cloud image, determining (N-1)/4 frames of second-stage spliced point cloud images, wherein L is more than or equal to 1 and less than (N-1)/2, L is an odd number, and the like until the determined spliced point cloud image is one frame, thereby forming the target point cloud image. Optionally, since N is an odd number, the nth frame point cloud image is not stitched with other point cloud images, and a first-level stitched point cloud image is determined. Therefore, when the point cloud images are spliced, the point cloud image of the nth frame can be directly discarded, the point cloud image of the nth frame can be directly used as a first-stage spliced point cloud image to be spliced with other first-stage spliced point cloud images, the point cloud image of the nth frame can be directly used as a second-stage spliced point cloud image to be spliced with the second-stage spliced point cloud image, and the like, and the embodiment of the invention is not specifically limited.
For example, as shown in fig. 10, with N set to 5, the first frame point cloud image and the second frame point cloud image are spliced to determine first-stage spliced point cloud image 1. The third frame point cloud image and the fourth frame point cloud image are spliced to determine first-stage spliced point cloud image 2. First-stage spliced point cloud image 1 and first-stage spliced point cloud image 2 are spliced to determine second-stage spliced point cloud image 1, and second-stage spliced point cloud image 1 is spliced with the fifth frame point cloud image to determine the target point cloud image. Because two adjacent frames of point cloud images are more similar than two non-adjacent frames, the splicing error between adjacent frames is smaller than that between non-adjacent frames. Splicing the N frames according to the relative position information between adjacent frames to determine several sub-spliced point cloud images, and then splicing adjacent sub-spliced point cloud images until the target point cloud image is determined, therefore reduces the splicing error and improves the accuracy of the target point cloud image. A minimal sketch of this hierarchical scheme follows.
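The sketch below assumes the frames have already been unified into one coordinate system as described earlier, so a pairwise splice reduces to concatenating two point sets; an unpaired frame at the end of a level is carried up unchanged, which is one of the options the text allows.

```python
import numpy as np

def stitch_hierarchical(frames):
    """Splice adjacent frames pairwise, level by level, until one
    spliced point cloud image (the target) remains."""
    level = list(frames)
    while len(level) > 1:
        nxt = [np.vstack([level[i], level[i + 1]])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:                # odd count: promote the last frame
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

With five frames this reproduces the order of fig. 10: frames 1+2 and 3+4 are spliced first, the two results are spliced next, and frame 5 is spliced last.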
It should be noted that the above splicing manners are described only as examples; the manner of splicing the point cloud images in the embodiment of the present invention is not limited to them.
Step S303, determining a point cloud intensity image corresponding to the target point cloud image.
The point cloud images are determined by the vehicle-mounted laser device emitting laser signals; each point in a point cloud contains at least a three-dimensional coordinate and a reflection intensity, and a mapping relation exists between the three-dimensional coordinate and the reflection intensity. Because each frame of point cloud image contains a large amount of point cloud data, directly detecting the lane line on the point cloud image is inefficient. To improve the efficiency of detecting the lane line from the target point cloud image, the target point cloud image can be mapped into a corresponding point cloud intensity image, and the lane line can then be detected from the point cloud intensity image, so that processing of the three-dimensional coordinates of the point cloud is omitted. A minimal sketch of such a mapping is shown below.
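The sketch assumes the stitched cloud is an (n, 4) array of [x, y, z, intensity] points in a common frame; the grid resolution of 0.1 m per pixel is an illustrative value, not one given by the patent.

```python
import numpy as np

def to_intensity_image(cloud, resolution=0.1):
    """Project [x, y, z, intensity] points onto a ground-plane grid,
    keeping the reflection intensity per pixel (last point wins)."""
    xy = cloud[:, :2]
    mins = xy.min(axis=0)
    idx = ((xy - mins) / resolution).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.float32)
    img[idx[:, 1], idx[:, 0]] = cloud[:, 3]
    return img
```

Keeping the pixel-to-point index alongside the image would preserve the mapping back to the three-dimensional coordinates that the text relies on for positioning.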
For example, as shown in fig. 11a, a single frame point cloud image is mapped to a point cloud intensity image, in which the black vertical line is a lane line. Because the range over which the vehicle-mounted laser device emits laser signals is short, a single-frame point cloud intensity image contains little information. With such a limited detection distance, it cannot be determined from a single-frame point cloud intensity image whether a detected lane line is solid or dashed; for example, in the point cloud intensity image shown in fig. 11a the lane line is detected, but its type cannot be determined, so a lane change of the vehicle cannot be further controlled. Moreover, because a single-frame point cloud intensity image contains little information, the accuracy of lane line detection on it is reduced. To solve these problems of single-frame point cloud images in lane line detection, in the embodiment of the invention N frames of point cloud images are spliced into a target point cloud image, and the target point cloud image is then converted into a point cloud intensity image.
Illustratively, three frames of point cloud images, namely point cloud image A, point cloud image B and point cloud image C, are acquired in succession by the vehicle-mounted laser device emitting laser signals. The position information of the vehicle-mounted laser device when it emits the laser signal corresponding to each of the three frames is acquired as first position information, second position information and third position information respectively.
First, point cloud image A and point cloud image B are spliced with the coordinate system of point cloud image A as the reference. Specifically, the relative position information of point cloud image A and point cloud image B is determined according to the first position information and the second position information. The coordinate system of point cloud image B is then converted into the coordinate system of point cloud image A according to this relative position information, after which point cloud image A and point cloud image B are spliced to determine a first sub-spliced point cloud image.
The first sub-spliced point cloud image is then spliced with point cloud image C. Specifically, since the coordinate system of point cloud image B was converted into the coordinate system of point cloud image A before splicing, the coordinate system of the first sub-spliced point cloud image is the coordinate system of point cloud image A, and the relative position information of the first sub-spliced point cloud image and point cloud image C can be determined according to the first position information and the third position information. The coordinate system of point cloud image C is then converted into the coordinate system of the first sub-spliced point cloud image according to this relative position information, after which the first sub-spliced point cloud image and point cloud image C are spliced to determine the target point cloud image. The point cloud intensity image corresponding to the target point cloud image is shown in fig. 11b, in which the black vertical lines are the lane lines. As can be seen from fig. 11b, the detected lane lines are dashed, so the automatic driving control system can control a lane change of the vehicle on this basis. After the multi-frame point cloud images are spliced, more information is available and a longer distance is covered, so whether a lane line is solid or dashed can be detected and the detection precision of the lane line is improved. Secondly, because the reflection intensity in the point cloud intensity image has a mapping relation with the three-dimensional coordinates of the point cloud, once the lane line is detected from the point cloud intensity image its position information can be determined directly from that mapping relation, without complex and unstable coordinate transformations, which improves the accuracy of positioning an autonomous vehicle by means of the lane line.
Step S304, determining a lane line from the point cloud intensity image corresponding to the target point cloud image.
Optionally, since the color of a lane line differs greatly from the color of the road surface, the reflection intensity of the lane line differs noticeably from that of the road surface in the point cloud intensity image obtained from the laser signals emitted by the vehicle-mounted laser device. The point cloud intensity image can therefore be segmented according to its pixel gray values to determine the lane line.
Segmenting the point cloud intensity image includes at least the following implementations:
in one possible implementation, a thresholding method is used to segment the point cloud intensity image and determine the lane line. The specific process is as follows: a threshold T is first set, which may be a global threshold, an adaptive threshold, an optimal threshold and so on. The threshold T is then compared one by one with the gray values of all pixels in the point cloud intensity image: pixels whose gray value is greater than the threshold are set to 1, and the remaining pixels are set to 0. The region formed by the pixels set to 1 is determined as the region where the lane line lies in the point cloud intensity image. A minimal sketch of the fixed-threshold variant follows.
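The value T = 0.5 below assumes intensities normalized to [0, 1] and is illustrative only.

```python
import numpy as np

def threshold_segment(img, T=0.5):
    """Mark pixels brighter than T as candidate lane-line pixels (1),
    the rest as background (0)."""
    return (img > T).astype(np.uint8)
```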
In one possible implementation, a region-growing method is used to segment the point cloud intensity image and determine the lane line. The specific process is as follows: seed pixels are first selected in the point cloud intensity image; a seed may be a single pixel of the lane-line region or a small region of several pixels. A similarity rule is then determined from the pixel gray values of the lane-line region, together with a condition or criterion for stopping growth. Pixels in the neighborhood of a seed pixel whose gray value is the same as or similar to that of the seed are merged into the seed's region, and the process continues with these new pixels as new seeds until no pixel satisfying the condition can be included, at which point region growing stops. The region grown from the seed pixels is the lane line in the point cloud intensity image. A sketch of this procedure follows.
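The sketch assumes 4-connected neighbourhoods and a similarity rule of "gray value within `tol` of the pixel being expanded"; `seeds` and `tol` are assumptions, since the patent fixes neither.

```python
import numpy as np
from collections import deque

def region_grow(img, seeds, tol=0.1):
    """Grow lane-line regions from seed pixels until no neighbouring
    pixel satisfies the similarity rule."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(img[nr, nc] - img[r, c]) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```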
In one possible implementation, an edge-based method is used to segment the point cloud intensity image and determine the lane line. The specific process is as follows: edge detection is performed on the point cloud intensity image to determine the edges of the lane line, and the position of the lane line in the image is then determined from those edges. Edge detection locates the places where the gray level or structure changes abruptly, indicating that one region ends and another begins. The gray values of edge pixels are discontinuous, and this discontinuity can be detected by taking derivatives: for a step edge, the position corresponds to an extremum of the first derivative and a zero crossing of the second derivative. Differential operators can therefore be used for edge detection; common first-order differential operators include the Roberts, Prewitt, Sobel and Kirsch operators, and second-order differential operators include the Laplacian operator and the like. A sketch using the Sobel operator follows.
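The sketch uses SciPy for the convolution (an assumption; any convolution routine would do), and the edge threshold is illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(img, thresh=0.3):
    """Return a boolean edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    gx = convolve(img, kx)   # horizontal gradient
    gy = convolve(img, ky)   # vertical gradient
    return np.hypot(gx, gy) > thresh
```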
In one possible implementation, multiple frames of point cloud intensity images with manually labeled lane lines are fed into a convolutional neural network as training samples, and a convolutional neural network model capable of detecting lane lines in point cloud intensity images is determined. During real-time detection, the point cloud intensity image corresponding to the target point cloud image is input to the convolutional neural network for feature extraction, and the lane line in the point cloud intensity image is determined (a minimal sketch of such a network is given below). It should be noted that the methods for segmenting the point cloud intensity image are not limited to the four above; a histogram method, a level-set method and the like may also be used, and the embodiment of the present invention is not specifically limited.
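The sketch is written in PyTorch (the patent names no framework, so this is an assumption); it maps a one-channel intensity image to a per-pixel lane/background logit.

```python
import torch.nn as nn

class LaneNet(nn.Module):
    """Tiny fully convolutional lane-line segmenter (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),           # per-pixel lane logit
        )

    def forward(self, x):                  # x: (B, 1, H, W)
        return self.net(x)
```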
In the embodiment of the invention, the vehicle-mounted laser device emits laser signals to obtain point cloud images, so that when the lane line is determined by processing the point cloud images, illumination and weather have little influence on the detection precision, which improves the real-time performance and stability of lane line detection. Secondly, because a spliced point cloud image contains more lane information than a single frame, splicing multiple frames of point cloud images first and then detecting the lane line on the spliced result improves the accuracy of lane line detection. In addition, when the lane line is detected from the spliced point cloud image, the point cloud image is converted into a point cloud intensity image, which avoids analyzing the point cloud directly in 3D space; this both simplifies the lane line detection process and improves its efficiency.
The embodiment of the invention further provides an automatic driving control method, which is as follows: the apparatus for detecting a lane line obtains N frames of point cloud images determined by the vehicle-mounted laser device emitting laser signals, splices the N frames of point cloud images into a target point cloud image, determines the point cloud intensity image corresponding to the target point cloud image, and then determines the lane line from that point cloud intensity image. After the lane line is detected by the apparatus for detecting a lane line, the automatic driving control system may determine the relative position of the vehicle and the lane line from the position of the lane line and the current position of the vehicle, and then further adjust the position of the vehicle on that basis. For example, with the detected lane lines and the vehicle positioned as shown in fig. 12, the vehicle is currently close to the lane line on its left, so while controlling the vehicle to travel forward the automatic driving control system can steer it toward the lane line on the right until the vehicle is positioned midway between the lane lines on both sides. After detecting the lane line through the apparatus, the automatic driving control system may also adjust the driving lane of the vehicle according to the type of the lane line. For example, as can be seen from fig. 12, the lane lines on both sides of the vehicle's current position are dashed, so the automatic driving control system may control the vehicle to change from the current lane to the left lane or the right lane. Detecting the lane line with the vehicle-mounted laser device provides high-precision lane line position information for automatic driving, effectively improving the positioning precision of automatic driving. A minimal sketch of the centring adjustment described above follows.
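The sketch assumes the detected lane lines and the vehicle are described by their lateral positions in metres; the proportional gain is illustrative, not a value from the patent.

```python
def lateral_correction(left_x, right_x, vehicle_x, gain=0.5):
    """Steering correction toward the lane centre: positive steers
    right, negative steers left."""
    lane_centre = (left_x + right_x) / 2.0
    return gain * (lane_centre - vehicle_x)
```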
Based on the same technical concept, an embodiment of the present invention provides an apparatus for detecting a lane line, as shown in fig. 13, the apparatus 1300 includes: an obtaining module 1301, a splicing module 1302, a processing module 1303, and a detecting module 1304.
The obtaining module 1301 is configured to obtain N frames of point cloud images determined by the vehicle-mounted laser device through emitting laser signals, where N is a positive integer;
a stitching module 1302, configured to stitch the N frames of point cloud images into a target point cloud image;
the processing module 1303 is used for determining a point cloud intensity image corresponding to the target point cloud image;
a detecting module 1304, configured to determine a lane line from the point cloud intensity image corresponding to the target point cloud image.
Optionally, the splicing module 1302 is specifically configured to:
determining relative position information between the N frames of point cloud images according to the position information when the vehicle-mounted laser equipment emits laser signals;
and sequentially splicing the N frames of point cloud images according to the relative position information among the N frames of point cloud images, and determining the target point cloud image.
Optionally, the splicing module 1302 is specifically configured to:
aiming at any two frames of point cloud images, when the vehicle-mounted laser equipment emits a first laser signal corresponding to one frame of point cloud image, determining first position information of the vehicle-mounted laser equipment through vehicle-mounted inertial navigation equipment;
when the vehicle-mounted laser equipment emits a second laser signal corresponding to another frame of point cloud image, determining second position information of the vehicle-mounted laser equipment through the vehicle-mounted inertial navigation equipment;
determining relative position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits the first laser signal and the second laser signal according to the first position information and the second position information;
and determining the relative position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits the first laser signal and the second laser signal as the relative position information of the two frames of point cloud images.
Optionally, the splicing module 1302 is specifically configured to:
unifying the N frames of point cloud images in the same coordinate system according to the relative position information among the N frames of point cloud images;
and splicing the N frames of point cloud images after unifying the coordinate system to determine the target point cloud image.
Optionally, the splicing module 1302 is specifically configured to:
splicing the first frame point cloud image and the second frame point cloud image to determine a first sub-spliced point cloud image;
sequentially splicing the Mth sub-spliced point cloud image with the (M+2)th frame point cloud image until the (M+2)th frame point cloud image is the Nth frame point cloud image, thereby determining the target point cloud image, wherein M is an integer greater than or equal to 1, and N is an integer greater than or equal to 3.
Optionally, the splicing module 1302 is specifically configured to:
a first splicing step: splicing the Mth frame point cloud image with the (M+1)th frame point cloud image to determine N/2 frames of first-stage spliced point cloud images, wherein 1 ≤ M < N and M is an odd number;
a second splicing step: splicing the Lth frame first-stage spliced point cloud image with the (L+1)th frame first-stage spliced point cloud image to determine N/4 frames of second-stage spliced point cloud images, wherein 1 ≤ L < N/2 and L is an odd number;
and sequentially executing the first splicing step and the second splicing step until the determined spliced point cloud image is one frame, forming the target point cloud image, wherein N is an even number greater than or equal to 4.
Optionally, the detecting module 1304 is specifically configured to:
and segmenting the point cloud intensity image according to the pixel gray value of the point cloud intensity image, and determining a lane line.
Based on the same technical concept, an embodiment of the present invention provides an automatic driving control system, as shown in fig. 14, comprising at least one processor 1401 and a memory 1402 connected to the at least one processor. The embodiment of the present invention does not limit the specific connection medium between the processor 1401 and the memory 1402; in fig. 14 they are connected through a bus as an example. The bus may be divided into an address bus, a data bus, a control bus and so on.
In the embodiment of the present invention, the memory 1402 stores instructions executable by the at least one processor 1401, and the at least one processor 1401 can execute the steps included in the method for detecting lane lines described above by executing the instructions stored in the memory 1402.
The processor 1401 is a control center of the automatic driving control system, and may connect various parts of the automatic driving control system by using various interfaces and lines, and detect a lane line by operating or executing instructions stored in the memory 1402 and calling data stored in the memory 1402. Alternatively, the processor 1401 may include one or more processing units, and the processor 1401 may integrate an application processor, which mainly handles an operating system, a user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into processor 1401. In some embodiments, processor 1401 and memory 1402 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 1401 may be a general-purpose processor such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in a processor.
Based on the same inventive concept, embodiments of the present invention provide a computer-readable medium, which stores a computer program that, when executed, performs the steps of the method for detecting lane lines as described above.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable image processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (13)
1. A method of detecting lane lines, the method being applied to an autonomous vehicle, and the method comprising:
obtaining N frames of point cloud images determined by vehicle-mounted laser equipment through transmitting laser signals, wherein N is a positive integer greater than 1;
determining relative position information between the N frames of point cloud images according to position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits laser signals corresponding to the N frames of point cloud images, wherein the position information comprises position coordinates and angle coordinates;
sequentially splicing the N frames of point cloud images according to the relative position information among the N frames of point cloud images to determine a target point cloud image;
mapping the target point cloud image into a corresponding point cloud intensity image;
determining a lane line from the point cloud intensity image corresponding to the target point cloud image;
and determining the position information of the lane line according to the mapping relation between the three-dimensional coordinates of the point cloud and the reflection intensity in the target point cloud image, wherein the position information of the lane line is used for positioning the automatic driving vehicle.
2. The method as claimed in claim 1, wherein said determining relative position information between the N frames of point cloud images from the position information of the vehicle-mounted laser device when emitting laser signals comprises:
aiming at any two frames of point cloud images, when the vehicle-mounted laser equipment emits a first laser signal corresponding to one frame of point cloud image, determining first position information of the vehicle-mounted laser equipment through vehicle-mounted inertial navigation equipment;
when the vehicle-mounted laser equipment emits a second laser signal corresponding to another frame of point cloud image, determining second position information of the vehicle-mounted laser equipment through the vehicle-mounted inertial navigation equipment;
determining relative position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits the first laser signal and the second laser signal according to the first position information and the second position information;
and determining the relative position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits the first laser signal and the second laser signal as the relative position information of the two frames of point cloud images.
3. The method of claim 2, wherein the sequentially stitching the N frames of point cloud images according to the relative location information between the N frames of point cloud images to determine the target point cloud image comprises:
unifying the N frames of point cloud images in the same coordinate system according to the relative position information among the N frames of point cloud images;
and splicing the N frames of point cloud images after unifying the coordinate system to determine the target point cloud image.
4. The method of claim 3, wherein the stitching the N frames of point cloud images after the unified coordinate system to determine the target point cloud image comprises:
splicing the first frame point cloud image and the second frame point cloud image to determine a first sub-spliced point cloud image;
sequentially splicing the Mth sub-spliced point cloud image with the (M+2)th frame point cloud image, and determining the target point cloud image when the (M+2)th frame point cloud image is the Nth frame point cloud image, wherein M is an integer greater than or equal to 1, and N is an integer greater than or equal to 3.
5. The method of claim 3, wherein the stitching the N frames of point cloud images after the unified coordinate system to determine the target point cloud image comprises:
a first splicing step: splicing the Mth frame point cloud image and the (M+1)th frame point cloud image to determine N/2 frames of first-stage spliced point cloud images, wherein 1 ≤ M < N, and M is an odd number;
a second splicing step: splicing the Lth frame of first-stage spliced point cloud image with the (L+1)th frame of first-stage spliced point cloud image to determine N/4 frames of second-stage spliced point cloud images, wherein 1 ≤ L < N/2, L is an odd number, and N is an even number greater than or equal to 4;
and sequentially executing the first splicing step and the second splicing step until the determined spliced point cloud image is one frame, forming the target point cloud image.
6. The method of any one of claims 1 to 5, wherein determining a lane line from the point cloud intensity image corresponding to the target point cloud image comprises:
and segmenting the point cloud intensity image according to the pixel gray value of the point cloud intensity image to determine a lane line.
7. An apparatus for detecting a lane line, the apparatus being applied to an autonomous vehicle, comprising:
the acquisition module is used for acquiring N frames of point cloud images determined by vehicle-mounted laser equipment through transmitting laser signals, wherein N is a positive integer greater than 1;
the splicing module is used for determining relative position information between the N frames of point cloud images according to position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits laser signals corresponding to the N frames of point cloud images, and the position information comprises position coordinates and angle coordinates; sequentially splicing the N frames of point cloud images according to the relative position information among the N frames of point cloud images to determine a target point cloud image;
the processing module is used for mapping the target point cloud image into a corresponding point cloud intensity image;
the detection module is used for determining a lane line from the point cloud intensity image corresponding to the target point cloud image;
and the determining module is used for determining the position information of the lane line according to the mapping relation between the three-dimensional coordinates of the point cloud and the reflection intensity in the target point cloud image, and the position information of the lane line is used for positioning the automatic driving vehicle.
8. The apparatus of claim 7, wherein the splicing module is specifically configured to:
aiming at any two frames of point cloud images, when the vehicle-mounted laser equipment emits a first laser signal corresponding to one frame of point cloud image, determining first position information of the vehicle-mounted laser equipment through vehicle-mounted inertial navigation equipment;
when the vehicle-mounted laser equipment emits a second laser signal corresponding to another frame of point cloud image, determining second position information of the vehicle-mounted laser equipment through the vehicle-mounted inertial navigation equipment;
determining relative position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits the first laser signal and the second laser signal according to the first position information and the second position information;
and determining the relative position information of the vehicle-mounted laser equipment when the vehicle-mounted laser equipment emits the first laser signal and the second laser signal as the relative position information of the two frames of point cloud images.
9. The apparatus of claim 8, wherein the splicing module is specifically configured to:
unifying the N frames of point cloud images in the same coordinate system according to the relative position information among the N frames of point cloud images;
and splicing the N frames of point cloud images after unifying the coordinate system to determine the target point cloud image.
10. The apparatus according to any one of claims 7 to 9, wherein the detection module is specifically configured to:
and segmenting the point cloud intensity image according to the pixel gray value of the point cloud intensity image to determine a lane line.
11. An automatic driving control method comprising detecting a lane line and controlling a driving route according to the lane line, characterized in that the lane line is detected by the method of any one of claims 1 to 6.
12. An autopilot control system, characterized in that it comprises at least one processing unit and at least one memory unit, wherein the memory unit stores a computer program which, when executed by the processing unit, causes the processing unit to carry out the steps of the method according to any one of claims 1 to 6.
13. A computer-readable medium, in which a computer program is stored which, when running, performs the steps of the method of any one of claims 1 to 6.