CN111209777A - Lane line detection method and device, electronic device and readable storage medium - Google Patents


Info

Publication number
CN111209777A
CN111209777A (application number CN201811392943.8A)
Authority
CN
China
Prior art keywords
lane line
road surface
probability
lane
surface image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811392943.8A
Other languages
Chinese (zh)
Inventor
孙鹏
程光亮
石建萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201811392943.8A (published as CN111209777A)
Priority to PCT/CN2019/119886 (published as WO2020103892A1)
Priority to KR1020217015000A (published as KR20210080459A)
Priority to JP2021525040A (published as JP2022506920A)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration


Abstract

Embodiments of the invention provide a lane line detection method and device, an electronic device and a readable storage medium. The method includes: acquiring a road surface image collected by vehicle-mounted equipment installed on a vehicle; inputting the road surface image into a neural network and outputting, via the neural network, M probability maps corresponding to the road surface image, where the M probability maps comprise N lane line probability maps and M-N non-lane line probability maps; the N lane line probability maps respectively correspond to N lane lines on the road surface and represent the probability that each pixel point in the road surface image belongs to the corresponding lane line, while the M-N non-lane line probability maps correspond to the non-lane-line part of the road surface and represent the probability that each pixel point belongs to it; and determining the lane lines in the road surface image according to the lane line probability maps. The method can obtain accurate lane line detection results in scenes of high complexity.

Description

Lane line detection method and device, electronic device and readable storage medium
Technical Field
The present invention relates to computer technologies, and in particular, to a lane line detection method and apparatus, an electronic device, and a readable storage medium.
Background
Assisted driving and automatic driving are two important technologies in the field of intelligent driving. Both can safely shorten the following distance between vehicles, reduce the occurrence of traffic accidents, and lighten the physical and mental burden on the driver, and they therefore play an important role in the field of intelligent driving.
Both driver assistance technology and automatic driving technology require lane line detection, that is, detecting the lane lines on the road surface on which the vehicle travels. In assisted driving, lane line detection can be used to warn of lane departure and of an impending collision with the vehicle ahead. In automatic driving, lane line detection provides the most basic information for operations such as automatic cruising, lane keeping and overtaking, thereby guaranteeing normal driving of the vehicle. How to detect lane lines accurately and efficiently is therefore an important issue worthy of research.
Disclosure of Invention
The embodiment of the invention provides a technical scheme for detecting lane lines.
A first aspect of an embodiment of the present invention provides a lane line detection method, including:
acquiring a road surface image acquired by vehicle-mounted equipment installed on a vehicle;
inputting the road surface image into a neural network, and outputting M probability maps corresponding to the road surface image through the neural network, wherein the M probability maps comprise N lane line probability maps and M-N non-lane line probability maps, and the N lane line probability maps respectively correspond to N lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the corresponding lane lines; the M-N non-lane line probability maps correspond to non-lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the non-lane lines, wherein N is a positive integer, and M is an integer larger than N;
and determining the lane lines in the road surface image according to the lane line probability map.
Further, the determining the lane line in the road surface image according to the lane line probability map includes:
in response to the L-th lane line probability map comprising a plurality of pixel points whose probability values are greater than or equal to a preset threshold, fitting the L-th lane line according to those pixel points, wherein the L-th lane line probability map is any one of the N lane line probability maps.
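This fitting step can be sketched as follows; the 0.5 threshold, the polynomial degree and the use of NumPy are illustrative assumptions, since the claim fixes only "greater than or equal to a preset threshold":

```python
import numpy as np

def fit_lane_line(prob_map, threshold=0.5, degree=2):
    """Fit one lane line from its single-lane probability map.

    Pixels with probability >= threshold are treated as lane-line
    evidence and a polynomial x = f(y) is fitted through them.
    Returns None when no pixel passes the threshold. The threshold
    and polynomial degree are illustrative choices.
    """
    ys, xs = np.nonzero(prob_map >= threshold)
    if xs.size == 0:
        return None
    # Fit x as a function of y: lane lines in a forward-facing road
    # image are closer to vertical, so x = f(y) is single-valued.
    return np.polyfit(ys, xs, degree)

# Toy 6x6 probability map with a vertical "lane" in column 2
prob = np.zeros((6, 6))
prob[:, 2] = 0.9
coeffs = fit_lane_line(prob, threshold=0.5, degree=1)
```

Evaluating the fitted polynomial at any row then gives the lane line's column position in that row.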
Further, the determining the lane line in the road surface image according to the lane line probability map includes:
and in response to the probability values of a first pixel point in a plurality of lane line probability maps all being greater than or equal to a preset threshold, using the first pixel point when fitting a first lane line, wherein the first lane line is the lane line corresponding to the probability map with the maximum of those probability values.
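A minimal sketch of this tie-breaking rule for one pixel, with an assumed 0.5 threshold:

```python
import numpy as np

def assign_pixel(pixel_probs, threshold=0.5):
    """Resolve a pixel that scores highly in several lane line maps.

    pixel_probs holds one pixel's probabilities across the N lane line
    probability maps. When several values reach the preset threshold,
    the pixel contributes to the lane line whose map gives the maximum
    probability; if none reaches it, the pixel is not used for any
    lane line. The threshold value is an illustrative assumption.
    """
    pixel_probs = np.asarray(pixel_probs, dtype=float)
    if not np.any(pixel_probs >= threshold):
        return None
    return int(np.argmax(pixel_probs))
```

For example, a pixel with probabilities (0.6, 0.8, 0.1) across three lane line maps would be used when fitting the second lane line.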
Further, the determining the lane line in the road surface image according to the lane line probability map includes:
and in response to the S-th non-lane line probability map comprising a plurality of pixel points whose probability values are greater than or equal to a preset threshold, determining the non-lane line according to those pixel points, wherein the S-th non-lane line probability map is any one of the M-N non-lane line probability maps.
Further, the method also comprises the following steps:
performing fusion processing on the M probability maps to obtain a target probability map;
adjusting the pixel value of a pixel point corresponding to a first lane line probability map in the target probability map to a preset pixel value corresponding to the first lane line probability map;
the first lane line probability map is any one of the N lane line probability maps, and the pixel points corresponding to the first lane line probability map are the pixel points of the lane line which forms a fit in the first lane line probability map.
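The fusion and pixel-value adjustment above can be sketched as follows; the per-lane preset values and the confidence threshold are illustration choices, not values fixed by the disclosure:

```python
import numpy as np

def fuse_probability_maps(maps, lane_values, threshold=0.5):
    """Fuse M probability maps into a single target map.

    maps has shape (M, H, W); the first len(lane_values) maps are lane
    line probability maps. Each pixel that confidently belongs to the
    i-th lane line is set to the preset value lane_values[i] (e.g. a
    distinct gray level per lane), so all fitted lane lines appear,
    distinguishable, in one image.
    """
    maps = np.asarray(maps, dtype=float)
    target = np.zeros(maps.shape[1:], dtype=float)
    winner = np.argmax(maps, axis=0)              # best category per pixel
    confident = np.max(maps, axis=0) >= threshold
    for i, value in enumerate(lane_values):       # lane line maps only
        target[(winner == i) & confident] = value
    return target

maps = np.zeros((3, 2, 2))
maps[0, 0, 0] = 0.9   # lane line 1 dominates pixel (0, 0)
maps[1, 1, 1] = 0.8   # lane line 2 dominates pixel (1, 1)
maps[2] = 0.6         # non-lane line map, moderate everywhere
target = fuse_probability_maps(maps, lane_values=[50.0, 100.0])
```

Pixels won by the non-lane line map keep the background value of the target map.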
Further, the outputting, via the neural network, M probability maps corresponding to the road surface image includes:
extracting low-layer characteristic information of M channels of the road surface image through at least one convolution layer of the neural network;
extracting high-level feature information of M channels of the road surface image based on the low-level feature information of the M channels through at least one residual extraction layer of the neural network;
and performing upsampling processing on the high-level feature information of the M channels through at least one upsampling layer of the neural network to obtain M probability maps which are as large as the road surface image.
Further, the at least one convolutional layer comprises 6 to 10 connected convolutional layers, the at least one residual extraction layer comprises 7 to 12 connected residual extraction layers, and the at least one upsampling layer comprises 1 to 4 connected upsampling layers.
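The size bookkeeping behind "M probability maps as large as the road surface image" can be sketched as follows; the specific strides and upsampling factors below are assumptions, since the disclosure fixes only the layer counts (6 to 10 convolutional layers, 7 to 12 residual extraction layers, 1 to 4 upsampling layers):

```python
def output_size(size, conv_strides, upsample_factors):
    """Track the spatial size of the feature map through the pipeline.

    conv_strides lists the stride of each convolutional layer (the
    residual extraction layers are assumed stride-1 here, leaving the
    size unchanged); upsample_factors lists the scale factor of each
    upsampling layer. Both lists are illustrative assumptions.
    """
    for s in conv_strides:
        size //= s        # a strided convolution shrinks the map
    for f in upsample_factors:
        size *= f         # an upsampling layer enlarges it again
    return size

# Eight convolutional layers, two of them stride-2, followed by two x2
# upsampling layers recover the original 200-pixel extent, so each of
# the M output probability maps is as large as the road surface image.
size = output_size(200, [1, 2, 1, 1, 2, 1, 1, 1], [2, 2])
```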
Further, the neural network is obtained by supervised training on a road surface training image set that includes lane line or non-lane line annotation information.
Further, the supervised training of the neural network with the road surface training image set that includes lane line or non-lane line annotation information comprises:
inputting training images included in the road surface training image set into the neural network, and acquiring a predicted lane line probability map of the training images;
fitting the predicted lane line of the training image according to a plurality of pixel points whose probability values in the predicted lane line probability map are greater than or equal to a preset threshold;
obtaining the loss between the predicted lane line of the image for training and the lane line in the lane line true value image of the image for training, wherein the lane line true value image is obtained based on the marking information of the lane line of the image for training;
adjusting network parameters of the neural network according to the loss.
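The loss in the training steps above could, for instance, be a per-pixel cross-entropy between the predicted probability maps and the truth map; the disclosure does not name a specific loss function, so the following is a sketch under that assumption:

```python
import numpy as np

def pixel_cross_entropy(pred_probs, truth_labels, eps=1e-12):
    """Loss between predicted probability maps and the truth map.

    pred_probs: (M, H, W) per-category probabilities from the network.
    truth_labels: (H, W) integers, where value k marks the pixel as
    category k (one of the N lane lines or a non-lane-line category).
    Cross-entropy is a common choice for this per-pixel supervision,
    but it is an assumption here, not a requirement of the disclosure.
    """
    m, h, w = pred_probs.shape
    rows, cols = np.indices((h, w))
    picked = pred_probs[truth_labels, rows, cols]  # prob of true class
    return float(-np.mean(np.log(picked + eps)))

# A perfect prediction gives (near) zero loss.
truth = np.array([[0, 1], [1, 0]])
pred = np.zeros((2, 2, 2))
rows, cols = np.indices((2, 2))
pred[truth, rows, cols] = 1.0
loss = pixel_cross_entropy(pred, truth)
```

The loss would then be backpropagated to adjust the network parameters, as in the last step above.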
Further, before the training of the neural network using the road surface training image set, the method further includes:
collecting road surface images under a plurality of scenes;
taking an image obtained after lane marking is carried out on the road surface images under the plurality of scenes as an image for training;
wherein the plurality of scenes comprise at least two scenes of a daytime scene, a rainy scene, a foggy scene, a straight road scene, a curve scene, a tunnel scene, a strong light scene and a night scene.
Further, before inputting the road surface image into the neural network, the method further includes:
and carrying out distortion removal processing on the road surface image.
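Distortion removal is normally done with the calibrated camera model via an imaging library (OpenCV's undistortion routines, for example). As a dependency-free illustration only, the sketch below inverts an assumed single-coefficient radial distortion model for normalized image points:

```python
import numpy as np

def undistort_points(pts, k1, iters=10):
    """Remove simple radial distortion from normalized image points.

    Assumed model: distorted = undistorted * (1 + k1 * r**2), where r
    is the radius of the undistorted point. Real cameras need the full
    calibrated model (several radial and tangential coefficients);
    this fixed-point iteration only illustrates the idea.
    """
    pts = np.asarray(pts, dtype=float)
    und = pts.copy()
    for _ in range(iters):
        r2 = np.sum(und ** 2, axis=-1, keepdims=True)
        und = pts / (1.0 + k1 * r2)  # invert the forward model
    return und

# Distort a known point with the forward model, then recover it.
true_pt = np.array([[0.3, 0.4]])
distorted = true_pt * (1.0 + 0.1 * np.sum(true_pt ** 2))
recovered = undistort_points(distorted, k1=0.1)
```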
Further, the method also comprises the following steps:
and mapping the lane lines in the road surface image to a world coordinate system to obtain the positions of the lane lines in the road surface image in the world coordinate system.
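For lane lines on a flat road plane, this mapping can be done with a homography from the image plane to the ground plane; the matrix in the example below is made up for illustration, whereas in practice it would come from camera calibration:

```python
import numpy as np

def image_to_world(points, H):
    """Map image pixel coordinates onto the road (world) plane.

    H is a 3x3 image-to-ground homography obtained from camera
    calibration; applying it to the pixels of a fitted lane line gives
    the lane line's position in the world coordinate system.
    """
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

H = np.array([[0.02, 0.0, -2.0],   # hypothetical calibration result
              [0.0, 0.05, 0.0],
              [0.0, 0.0, 1.0]])
world = image_to_world([[100.0, 40.0]], H)  # one lane line pixel
```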
A second aspect of an embodiment of the present invention provides a lane line detection apparatus, including:
the first acquisition module is used for acquiring a road surface image acquired by vehicle-mounted equipment installed on a vehicle;
the second acquisition module is used for inputting the road surface image into a neural network and outputting M probability maps corresponding to the road surface image through the neural network, wherein the M probability maps comprise N lane line probability maps and M-N non-lane line probability maps, and the N lane line probability maps respectively correspond to N lane lines on a road surface and are used for representing the probability that pixel points in the road surface image belong to the corresponding lane lines; the M-N non-lane line probability maps correspond to non-lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the non-lane lines, wherein N is a positive integer, and M is an integer larger than N;
and the first determining module is used for determining the lane lines in the road surface image according to the lane line probability map.
Further, the first determining module comprises:
the first determining unit is used for fitting an L-th lane line according to a plurality of pixel points with probability values larger than or equal to a preset threshold when the L-th lane line probability map comprises a plurality of pixel points with probability values larger than or equal to the preset threshold, wherein the L-th lane line probability map is any one of the N lane line probability maps.
Further, the first determining module further comprises:
and the second determining unit is used for using the first pixel point when fitting a first lane line when the probability values of the first pixel point in a plurality of lane line probability maps are all greater than or equal to a preset threshold, wherein the first lane line is the lane line corresponding to the probability map with the maximum of those probability values.
Further, the first determining module further comprises:
and the third determining unit is used for determining the non-lane line according to the plurality of pixel points with the probability values larger than or equal to the preset threshold when the S-th non-lane line probability map comprises the plurality of pixel points with the probability values larger than or equal to the preset threshold, wherein the S-th non-lane line probability map is any one of the M-N non-lane line probability maps.
Further, the method also comprises the following steps:
the fusion module is used for carrying out fusion processing on the M probability graphs to obtain a target probability graph;
the adjusting module is used for adjusting the pixel value of a pixel point corresponding to the first lane line probability map in the target probability map to a preset pixel value corresponding to the first lane line probability map;
the first lane line probability map is any one of the N lane line probability maps, and the pixel points corresponding to the first lane line probability map are the pixel points of the lane line which forms a fit in the first lane line probability map.
Further, the second obtaining module includes:
the first acquisition unit is used for extracting low-layer characteristic information of M channels of the road surface image through at least one convolution layer of the neural network;
a second obtaining unit, configured to extract, through at least one residual extraction layer of the neural network, high-level feature information of M channels of the road surface image based on the low-level feature information of the M channels;
and the third acquisition unit is used for performing upsampling processing on the high-level feature information of the M channels through at least one upsampling layer of the neural network to obtain M probability maps which are as large as the road surface image.
Further, the at least one convolutional layer comprises 6 to 10 connected convolutional layers, the at least one residual extraction layer comprises 7 to 12 connected residual extraction layers, and the at least one upsampling layer comprises 1 to 4 connected upsampling layers.
Further, the neural network is obtained by supervised training on a road surface training image set that includes lane line or non-lane line annotation information.
Further, the supervised training of the neural network with the road surface training image set that includes lane line or non-lane line annotation information comprises:
inputting training images included in the road surface training image set into the neural network, and acquiring a predicted lane line probability map of the training images;
fitting the predicted lane line of the training image according to a plurality of pixel points whose probability values in the predicted lane line probability map are greater than or equal to a preset threshold;
obtaining the loss between the predicted lane line of the image for training and the lane line in the lane line true value image of the image for training, wherein the lane line true value image is obtained based on the marking information of the lane line of the image for training;
adjusting network parameters of the neural network according to the loss.
Further, the method also comprises the following steps:
the system comprises an acquisition module, a training module and a display module, wherein the acquisition module is used for acquiring road surface images under a plurality of scenes and taking an image obtained by marking a lane line on the road surface images under the plurality of scenes as an image for training;
wherein the plurality of scenes comprise at least two scenes of a rainy scene, a foggy scene, a straight scene, a curved scene, a tunnel scene, a strong light scene and a night scene.
Further, the method also comprises the following steps:
and the preprocessing module is used for carrying out distortion removal processing on the road surface image.
Further, the method also comprises the following steps:
and the mapping module is used for mapping the lane lines in the road surface image to a world coordinate system to obtain the positions of the lane lines in the road surface image in the world coordinate system.
A third aspect of an embodiment of the present invention provides a driving control method, including:
the driving control device obtains a lane line detection result of the road surface image, wherein the lane line detection result of the road surface image is obtained by adopting the lane line detection method of the first aspect;
and the driving control device outputs prompt information and/or carries out intelligent driving control on the vehicle according to the lane line detection result.
A fourth aspect of the embodiments of the present invention provides a driving control apparatus including:
an obtaining module, configured to obtain a lane line detection result of a road surface image, where the lane line detection result of the road surface image is obtained by using the lane line detection method according to the first aspect;
and the driving control module is used for outputting prompt information and/or carrying out intelligent driving control on the vehicle according to the lane line detection result.
A fifth aspect of an embodiment of the present invention provides an electronic device, including:
a memory for storing program instructions;
a processor for calling and executing the program instructions in the memory to perform the method steps of the first aspect.
A sixth aspect of an embodiment of the present invention provides an intelligent driving system, including: a communicatively connected camera for acquiring road surface images, an electronic device as described in the fifth aspect and a driving control apparatus as described in the fourth aspect.
A seventh aspect of the embodiments of the present invention provides a readable storage medium, in which a computer program is stored, the computer program being configured to perform the method steps of the first aspect.
According to the lane line detection method and device, electronic device and readable storage medium of the embodiments, a neural network trained on road surface training images containing lane line or non-lane line annotation information outputs probability maps giving, for each pixel point in the road surface image, the probability of belonging to the corresponding lane line, and the lane lines in the road surface image are determined from these lane line probability maps; accurate lane line detection results can therefore be obtained even in scenes of high complexity. In addition, the M probability maps in this embodiment include a non-lane line probability map; that is, a non-lane line category is added on top of the lane line categories. This improves the accuracy of road surface image segmentation and, in turn, the accuracy of the lane line detection result.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the following briefly introduces the drawings needed to be used in the description of the embodiments or the prior art, and obviously, the drawings in the following description are some embodiments of the present invention, and those skilled in the art can obtain other drawings according to the drawings without inventive labor.
Fig. 1 is a scene schematic diagram of a lane line detection method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a lane line detection method according to a first embodiment of the present invention;
fig. 3 is a schematic flow chart of a lane line detection method according to a second embodiment of the present invention;
fig. 4 is a schematic flow chart of a lane line detection method according to a third embodiment of the present invention;
FIG. 5 is a schematic diagram of the structure of a convolutional neural network corresponding to this example;
fig. 6 is a schematic flowchart of a fourth embodiment of a lane line detection method according to an embodiment of the present invention;
fig. 7 is a schematic flowchart of a fifth embodiment of a lane line detection method according to an embodiment of the present invention;
fig. 8 is a block diagram of a lane line detection apparatus according to a first embodiment of the present invention;
fig. 9 is a block diagram of a second embodiment of a lane marking detection apparatus according to the present invention;
fig. 10 is a block diagram of a lane line detection apparatus according to a third embodiment of the present invention;
fig. 11 is a block diagram of a fourth embodiment of a lane line detection apparatus according to the present invention;
fig. 12 is a block diagram of a fifth exemplary embodiment of a lane marking detection apparatus according to the present invention;
fig. 13 is a block diagram of a lane line detection apparatus according to a sixth embodiment of the present invention;
fig. 14 is a block diagram of a seventh embodiment of a lane line detection apparatus according to an embodiment of the present invention;
fig. 15 is a block diagram of an eighth embodiment of a lane line detection apparatus according to an embodiment of the present invention;
fig. 16 is a block diagram of a lane line detection apparatus according to a ninth embodiment of the present invention;
fig. 17 is a block diagram of an electronic device according to an embodiment of the present invention;
FIG. 18 is a flow chart illustrating a driving control method according to an embodiment of the present invention;
fig. 19 is a schematic structural diagram of a driving control apparatus according to an embodiment of the present invention;
fig. 20 is a schematic diagram of an intelligent driving system provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a lane line detection method, which is characterized in that a neural network obtained through training of a large amount of labeled data is used for obtaining a probability map of each pixel point in a road surface image belonging to a lane line, and the lane line in the road surface image is determined according to the probability map of the lane line.
Fig. 1 is a scene schematic diagram of a lane line detection method according to an embodiment of the present invention. As shown in fig. 1, the method may be applied to a vehicle in which an in-vehicle device is installed. The vehicle-mounted device may be a camera or a vehicle event data recorder mounted on a vehicle, and the like having a shooting function. When the vehicle is positioned on the road surface, the road surface image is acquired through vehicle-mounted equipment on the vehicle, and the lane line on the road surface where the vehicle is positioned is detected based on the method of the embodiment of the invention, so that the detection result obtained by the vehicle can be applied to auxiliary driving or automatic driving.
Fig. 2 is a schematic flow chart of a lane line detection method according to a first embodiment of the present invention, and as shown in fig. 2, the method includes:
s201, acquiring a road surface image acquired by vehicle-mounted equipment installed on a vehicle.
Optionally, the vehicle-mounted device mounted on the vehicle may collect road images on a driving road of the vehicle in real time, and then continuously input the road images collected by the vehicle-mounted device into the neural network to obtain continuously updated lane line detection results.
S202, inputting the road surface image into a neural network, and outputting M probability maps corresponding to the road surface image through the neural network, wherein the M probability maps comprise N lane line probability maps and M-N non-lane line probability maps, the N lane line probability maps respectively correspond to N lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the corresponding lane lines, and the M-N non-lane line probability maps correspond to non-lane lines on the road surface and are used for representing the probability that the pixel points in the road surface image belong to the non-lane lines.
Wherein N is a positive integer, and M is an integer greater than N.
Optionally, the neural network may be, but is not limited to, a convolutional neural network.
Optionally, the neural network is obtained by supervised training of a road surface training image set including lane line or non-lane line marking information in advance. The road surface training image set comprises a large number of training images. Each training image is obtained through the process of acquiring an actual road surface image and labeling. Specifically, the method includes the steps of firstly collecting actual road surface images under various scenes such as day, night, rainy day and tunnels, and further labeling each actual road surface image at a pixel level, namely labeling the category of each pixel point in the actual road surface image as a lane line or a non-lane line, so as to obtain an image for training. The neural network is obtained through supervised training of training images acquired through rich scenes, so that the neural network after training can obtain accurate lane line detection results in simple scenes such as daytime scenes with good weather conditions and light conditions, and can also obtain accurate lane line detection results in scenes with high complexity such as rainy days, nights, tunnels and the like.
The training process of the neural network will be described in detail in the following embodiments.
Alternatively, the non-lane line may refer to a portion of the driving road surface of the vehicle other than the lane line, and may also be referred to as a road surface background. For example, a road surface other than the lane line, a car on the road surface, a plant on the side of the road surface, and the like belong to the category of the road surface background.
As an example, M may be equal to 5, and N may be equal to 4. That is, it can be considered that there are 4 lane lines on the road surface on which the vehicle travels, the neural network may output 5 probability maps, where there are 4 lane line probability maps in the 5 probability maps, and the 4 lane lines on the road surface correspond to the 4 lane lines respectively, that is, the 4 lane line probability maps correspond to the 4 lane lines on the road surface one to one. In addition, there are 1 non-lane line probability map in the 5 probability maps, corresponding to the non-lane lines on the road surface.
As another example, M may be equal to 3 and N may be equal to 2. I.e. there are 2 lane lines on the road on which the vehicle is traveling. Correspondingly, the neural network may output 3 probability maps, where 2 of the 3 probability maps correspond to 2 lane lines on the road surface, respectively, that is, the 2 lane line probability maps correspond to the 2 lane lines on the road surface one to one. In addition, there are 1 non-lane line probability map among the 3 probability maps, corresponding to the non-lane lines on the road surface.
Assuming that the 4 lane lines on the road surface are lane line 1, lane line 2, lane line 3 and lane line 4 in order from the left side to the right side of the vehicle, and that the 4 lane line probability maps among the 5 probability maps are probability map 1, probability map 2, probability map 3 and probability map 4, respectively, the correspondence between lane line probability maps and lane lines may be as shown in Table 1 below.
TABLE 1
Lane line probability map | Probability map 1 | Probability map 2 | Probability map 3 | Probability map 4
Lane line | Lane line 1 | Lane line 2 | Lane line 3 | Lane line 4
That is, the probability map 1 of the neural network output corresponds to the lane line 1, the probability map 2 corresponds to the lane line 2, and so on.
It should be noted that table 1 is only an example of the correspondence between the lane line probability map and the lane line, and in a specific implementation process, the correspondence between the lane line probability map and the lane line may be flexibly set as needed, which is not specifically limited in the embodiment of the present invention.
Further, based on the correspondence shown in Table 1 for example, probability map 1 identifies the probability that each pixel point in the road surface image belongs to lane line 1. Assuming the road surface image is represented by a matrix of size 200 × 200, after the matrix is input into the neural network, a matrix of size 200 × 200 may be output, in which the value of each element is the probability that the corresponding pixel belongs to lane line 1. For example, if the value of the element in the 1st row and 1st column of the output matrix is 0.4, it indicates that the probability that the pixel point in the 1st row and 1st column of the road surface image belongs to lane line 1 is 0.4. The matrix output by the neural network may then be represented in the form of a lane line probability map.
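The matrix view above can be sketched as follows; the 3 × 3 matrix stands in for the 200 × 200 one, and the probability values are invented for illustration:

```python
import numpy as np

# Toy stand-in for the 200 x 200 output matrix: entry (i, j) is the
# probability that pixel (i, j) of the road surface image belongs to
# lane line 1. Values here are invented for illustration.
prob_map_1 = np.array([
    [0.4, 0.1, 0.0],
    [0.7, 0.2, 0.1],
    [0.9, 0.3, 0.0],
])

# The element in the 1st row and 1st column: probability 0.4 that the
# corresponding pixel belongs to lane line 1, as in the text's example.
p_first = prob_map_1[0, 0]
```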
And S203, determining the lane lines in the road surface image according to the lane line probability map.
After the steps, the probability that each pixel point in the road surface image belongs to each lane line can be determined, and the lane lines in the road surface image can be determined based on the probabilities.
Optionally, since the N lane line probability maps output by the neural network correspond to the N lane lines on the road surface, for each lane line probability map a subset of pixel points may be selected according to a predetermined condition, and the lane line corresponding to that probability map fitted from those pixel points, thereby obtaining the N lane lines.
In this embodiment, a neural network trained on road surface training images containing lane line or non-lane line marking information is used to obtain, for each pixel point in the road surface image, the probability that it belongs to the corresponding lane line, and the lane lines in the road surface image are determined from the lane line probability maps, so that accurate lane line detection results can be obtained even in highly complex scenes. In addition, the M probability maps in this embodiment include a non-lane line probability map, that is, a non-lane line category is added alongside the lane line categories. This improves the accuracy of road surface image segmentation and, in turn, the accuracy of the lane line detection result.
On the basis of the above-described embodiments, the present embodiment relates to a method of determining a lane line in a road surface image from a lane line probability map.
Optionally, as described above, N of the M probability maps correspond to the N lane lines on the road surface. Optionally, the L-th lane line probability map of the N lane line probability maps corresponds to the L-th lane line, where L is any integer greater than or equal to 1 and less than or equal to N, that is, the L-th lane line probability map is any one of the N lane line probability maps.
For the above-mentioned lth lane line probability map, the lth lane line may be fitted based on a plurality of pixel points in the probability map whose probability values are greater than or equal to a preset threshold.
Optionally, in response to that the lth lane line probability map includes a plurality of pixel points with probability values greater than or equal to a preset threshold, the plurality of pixel points with probability values greater than or equal to the preset threshold are fitted to the lth lane line.
First, after the road surface image is input into the neural network, each pixel point has a probability value in the L-th lane line probability map output by the neural network; if that probability value is greater than or equal to the preset threshold, the pixel point is likely to belong to the L-th lane line.
Then, after a plurality of pixel points with probability values greater than or equal to the preset threshold are selected from the L-th lane line probability map, the maximum connected domain of the selected pixel points may be computed, and lane line fitting performed based on that connected domain, thereby obtaining the lane line in the road surface image.
Illustratively, the preset threshold may be, for example, 0.5.
In one example, assume the L-th lane line probability map includes probability values for three pixel points, where pixel point A has probability 0.5, pixel point B has probability 0.6, and pixel point C has probability 0.2. The probability values of pixel points A and B are greater than or equal to the preset threshold, so the L-th lane line can be fitted through pixel points A and B.
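The threshold-then-fit step can be sketched as below; the 6 × 6 probability map, the threshold of 0.5, and the choice of a 2nd-order polynomial fit are all illustrative assumptions, since the text does not fix a specific fitting method:

```python
import numpy as np

THRESHOLD = 0.5   # the preset threshold used in the text's examples

# Toy 6 x 6 L-th lane line probability map with invented values: high
# probabilities trace a roughly diagonal lane line.
prob_map = np.zeros((6, 6))
prob_map[0, 1] = 0.9
prob_map[1, 1] = 0.8
prob_map[2, 2] = 0.7
prob_map[3, 2] = 0.6
prob_map[4, 3] = 0.9
prob_map[5, 3] = 0.2   # below the threshold, so excluded from fitting

# Select pixel points whose probability is >= the threshold.
rows, cols = np.nonzero(prob_map >= THRESHOLD)

# Fit the lane line as a curve col = f(row); a 2nd-order polynomial is
# one plausible choice for gently curving lanes.
coeffs = np.polyfit(rows, cols, deg=2)
fitted_cols = np.polyval(coeffs, rows)
```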
In another case, if the L-th lane line probability map does not include any pixel point with a probability value greater than or equal to the preset threshold, the L-th lane line corresponding to the L-th lane line probability map does not exist in the current road surface image.
In a specific implementation, the probability values of the same pixel point in a plurality of probability maps may all be greater than or equal to the preset threshold; this situation may be handled as follows.
Optionally, in response to that a plurality of probability values corresponding to the first pixel point in the plurality of lane line probability maps are all greater than or equal to a preset threshold, the first pixel point is used as a pixel point when the first lane line is fitted, where the first lane line is a lane line corresponding to the lane line probability map corresponding to the maximum probability value in the plurality of probability values.
For example, assume the preset threshold is 0.5 and the neural network outputs 4 lane line probability maps in total. If the probability value of the first pixel point is 0.5 in the 1st lane line probability map, 0.6 in the 2nd, 0.7 in the 3rd, and 0.2 in the 4th, then its probabilities in the 1st, 2nd and 3rd lane line probability maps are all greater than or equal to the preset threshold. In this case, the first pixel point may be considered to belong to the lane line corresponding to the 3rd lane line probability map, that is, the first pixel point is used to fit that lane line.
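This rule can be sketched with the numbers from the example (indices are 0-based in code, so the 3rd probability map is index 2):

```python
import numpy as np

THRESHOLD = 0.5

# Probabilities of the first pixel point in the 4 lane line probability
# maps, matching the example values in the text.
pixel_probs = np.array([0.5, 0.6, 0.7, 0.2])

above = pixel_probs >= THRESHOLD   # maps 1, 2 and 3 qualify
lane_index = None
if above.any():
    # Assign the pixel to the lane line with the maximum probability.
    lane_index = int(np.argmax(pixel_probs))   # 2, i.e. the 3rd map
```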
Through this processing, noise can be effectively removed, avoiding the situation where one pixel point belongs to a plurality of lane lines.
In another embodiment, as described above, M-N of the M probability maps correspond to the non-lane line on the road surface. Optionally, the S-th non-lane line probability map of the M-N non-lane line probability maps corresponds to the non-lane line, where S is any integer greater than or equal to 1 and less than or equal to M-N, that is, the S-th non-lane line probability map is any one of the M-N non-lane line probability maps.
For the S-th non-lane line probability map, the non-lane line may be determined based on a plurality of pixel points in the probability map whose probability values are greater than or equal to a preset threshold.
Optionally, in response to that the S-th non-lane line probability map includes a plurality of pixel points whose probability values are greater than or equal to a preset threshold, the non-lane line is determined according to the plurality of pixel points whose probability values are greater than or equal to the preset threshold.
First, after the road surface image is input into the neural network, each pixel point has a probability value in the S-th non-lane line probability map output by the neural network; if that probability value is greater than or equal to the preset threshold, the pixel point is likely to belong to the non-lane line.
Further, after a plurality of pixel points with probability values greater than or equal to the preset threshold are selected from the S-th non-lane line probability map, the maximum connected domain of the selected pixel points may be computed, for example, to obtain the non-lane line region in the road surface image.
Illustratively, the preset threshold may be, for example, 0.5.
In one example, assume the S-th non-lane line probability map includes probability values for three pixel points, where pixel point A has probability 0.5, pixel point B has probability 0.6, and pixel point C has probability 0.2. The probability values of pixel points A and B are greater than or equal to the preset threshold, so the non-lane line in the road surface image may be determined through pixel points A and B.
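The "maximum connected domain" step mentioned above can be sketched with a plain breadth-first search; `largest_connected_region` is a hypothetical helper name, and real systems would normally use an optimized library routine instead:

```python
import numpy as np
from collections import deque

def largest_connected_region(mask):
    """Return the largest 4-connected region of True pixels as a boolean
    mask. A minimal stand-in for the maximum connected domain step."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask)
    best = np.zeros_like(mask)
    best_size = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                comp = np.zeros_like(mask)
                q = deque([(i, j)])
                visited[i, j] = True
                size = 0
                while q:
                    y, x = q.popleft()
                    comp[y, x] = True
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if size > best_size:
                    best_size, best = size, comp
    return best

# Threshold a toy non-lane line probability map, then keep the largest
# connected region as the non-lane line region.
prob_map = np.array([
    [0.9, 0.8, 0.1],
    [0.7, 0.1, 0.1],
    [0.1, 0.1, 0.6],
])
region = largest_connected_region(prob_map >= 0.5)
```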
Further, after the lane line in the road surface image is determined through the above embodiment, optionally, the color of the pixel point in the road surface image may be adjusted to the color corresponding to the lane line according to the lane line to which the pixel point in the road surface image belongs, so as to improve the visual effect.
Fig. 3 is a schematic flow chart of a second embodiment of the lane line detection method provided in the embodiment of the present invention, and as shown in fig. 3, the method further includes:
s301, performing fusion processing on the M probability maps to obtain a target probability map.
The M probability maps respectively correspond to a lane line or a non-lane line, and after each lane line is fitted and determined by using the M probability maps, the M probability maps can be fused into a target probability map. The target probability map includes information of each lane line and information of a non-lane line.
S302, adjusting the pixel value of the pixel point corresponding to the first lane line probability map in the target probability map to be a preset pixel value corresponding to the first lane line probability map.
The first lane line probability map is any one of the N lane line probability maps, and the pixel points corresponding to the first lane line probability map are pixel points forming a fitted lane line in the first lane line probability map.
Optionally, after the lane line corresponding to the first lane line probability map has been fitted in the above embodiments, the pixel points of that lane line are determined; in this step, the pixel value of each such pixel point in the fused probability map is set to the color corresponding to that lane line.
For example, a color may be preset for each lane line. If there are 4 lane lines on the road surface, their colors may be set to red, yellow, blue and purple, respectively. After the target probability map is obtained through the above process, the pixel values of the pixel points constituting each lane line are set to the corresponding color, yielding 4 lane lines displayed in red, yellow, blue and purple.
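The coloring step can be sketched as follows; the mask representation and the `colorize` helper are hypothetical, and the RGB values merely follow the red/yellow/blue/purple example above:

```python
import numpy as np

# Preset per-lane-line colors (RGB), following the text's example.
LANE_COLORS = [
    (255, 0, 0),    # lane line 1: red
    (255, 255, 0),  # lane line 2: yellow
    (0, 0, 255),    # lane line 3: blue
    (128, 0, 128),  # lane line 4: purple
]

def colorize(fitted_masks, height, width):
    """Paint each fitted lane line's pixels with its preset color.

    fitted_masks: one boolean mask per lane line probability map, marking
    the pixels of the fitted lane line (a hypothetical representation).
    """
    target = np.zeros((height, width, 3), dtype=np.uint8)
    for color, mask in zip(LANE_COLORS, fitted_masks):
        target[mask] = color
    return target

masks = [np.zeros((4, 4), dtype=bool) for _ in range(4)]
masks[0][:, 0] = True   # lane line 1 runs along the first column
masks[2][:, 3] = True   # lane line 3 runs along the last column
image = colorize(masks, 4, 4)
```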
In this embodiment, the pixel value of the pixel point corresponding to the first lane line probability map is adjusted to the preset pixel value corresponding to the first lane line probability map, so that a user in a vehicle can view lane lines on a road surface more intuitively and clearly, and user experience is improved.
On the basis of the above embodiments, the present embodiment relates to the process of obtaining the lane line probability maps through the neural network.
Fig. 4 is a schematic flowchart of a third embodiment of the lane line detection method according to the embodiment of the present invention, and as shown in fig. 4, the step S202 includes:
s401, extracting low-layer characteristic information of M channels of the road surface image through at least one convolution layer of the neural network.
Optionally, the resolution of the road surface image may be reduced by the convolutional layer, and the low-layer features of the road surface image are retained.
For example, the low-layer feature information of the road surface image may include edge information, straight line information, curve information, and the like in the image.
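The edge information named above is the kind of low-level structure a small convolution kernel responds to; a minimal hand-rolled sketch (not the patent's actual layers, which are learned) is:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Minimal 2-D 'valid' correlation, enough to show that an early
    convolutional layer can respond to low-level structure such as edges."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge, like the boundary of a bright lane marking.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

response = conv2d_valid(img, sobel_x)   # strong values near the edge
```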
Optionally, the M channels of the road surface image correspond to the categories to be predicted; for example, assuming there are 4 lane lines on the road surface, there are 5 categories, namely lane line 1, lane line 2, lane line 3, lane line 4, and non-lane line.
And S402, extracting the high-level feature information of the M channels of the road surface image based on the low-level feature information of the M channels through at least one residual extraction layer of the neural network.
Optionally, the high-level feature information of the M channels of the road surface image extracted by the residual extraction layer includes semantic features, contours, an overall structure, and the like.
And S403, performing upsampling processing on the high-level feature information of the M channels through at least one upsampling layer of the neural network to obtain M probability maps of the same size as the road surface image.
Alternatively, the image may be restored to the original size of the image input to the neural network by the up-sampling process of the up-sampling layer.
In this step, after the up-sampling processing is performed on the high-level feature information of the M channels, M probability maps having a size equal to that of the road surface image input to the neural network can be obtained.
Further, optionally, a normalization layer may be further included in the neural network after the upsampling layer, and a result after the upsampling process is normalized by the normalization layer, and the lane line probability map is output.
Illustratively, a feature map of the road surface image is obtained after upsampling, and the value of each pixel point in the feature map is normalized so that it lies in the range 0 to 1, yielding a lane line probability map.
Illustratively, one normalization method is: first determine the maximum pixel value in the feature map, then divide the value of each pixel by that maximum, so that every value in the feature map lies in the range 0 to 1.
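The divide-by-maximum normalization described here, in a short sketch with toy feature values (note that this simple scheme assumes non-negative activations; many networks instead use softmax, which the text does not mandate either way):

```python
import numpy as np

# Toy upsampled feature map with invented non-negative activations.
feature_map = np.array([
    [2.0, 4.0],
    [6.0, 8.0],
])

# Normalization as described: divide every value by the maximum so that
# all values fall in the range 0 to 1.
prob_map = feature_map / feature_map.max()
```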
It should be noted that, since step S402 takes as input the low-layer feature information extracted in step S401, S401 is executed before S402.
On the basis of the above embodiments, the present embodiment relates to the training process of the above neural network.
Optionally, as can be seen from the foregoing embodiments, the neural network according to the embodiments of the present invention may be a convolutional neural network, which may include convolutional layers, residual extraction layers, upsampling layers and a normalization layer. The order of the convolutional layers and residual extraction layers can be set flexibly as needed, and the number of each kind of layer can likewise be set flexibly.
In an alternative mode, the convolutional neural network may include 6 to 10 connected convolutional layers, 7 to 12 connected residual extraction layers, and 1 to 4 connected upsampling layers.
When a convolutional neural network with this specific structure is used for lane line detection, the requirements of lane line detection in multiple or complex scenes can be met, so the detection result is more robust.
In one example, the convolutional neural network may include 8 connected convolutional layers, 9 connected residual extraction layers, and 2 connected upsampling layers.
Fig. 5 is a schematic structural diagram of the convolutional neural network corresponding to this example. As shown in Fig. 5, the input road surface image first passes through 8 consecutive convolutional layers, followed by 9 consecutive residual extraction layers, then 2 consecutive upsampling layers, and finally a normalization layer, which outputs the lane line probability maps.
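One way to see why the output probability maps match the input image size is to trace the spatial resolution through the network. The sketch below hypothetically assumes 2 of the convolutional layers use stride 2 (halving the resolution) and each of the 2 upsampling layers doubles it; the actual strides are not specified in the text:

```python
def trace_resolution(size, downsampling_convs=2, upsampling_layers=2):
    """Trace the spatial resolution through the network under the stated
    (hypothetical) stride assumptions."""
    for _ in range(downsampling_convs):
        size //= 2   # a stride-2 convolution halves the resolution
    for _ in range(upsampling_layers):
        size *= 2    # each upsampling layer doubles it back
    return size

# A 200 x 200 road surface image comes out as 200 x 200 probability maps:
# 200 -> 100 -> 50 -> 100 -> 200.
restored = trace_resolution(200)
```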
Illustratively, each of the above residual extraction layers may include 256 filters: 128 filters of size 3 × 3 and 128 filters of size 1 × 1.
Optionally, before determining the lane line probability map corresponding to the road surface image by using the neural network, the neural network may be trained by using the road surface training image set.
Fig. 6 is a schematic flow chart of a fourth embodiment of the lane line detection method according to the embodiment of the present invention, and as shown in fig. 6, the training process of the neural network may be:
s601, inputting the training images included in the road surface training image set into the neural network, and acquiring a predicted lane line probability map of the training images.
The predicted lane line probability map is a lane line probability map currently output by the neural network.
And S602, fitting the predicted lane line of the training image according to a plurality of pixel points, included in the predicted lane line probability map, whose probability values are greater than or equal to a preset threshold.
The specific process may refer to the above-mentioned portion for determining the lane line in the road surface image according to the lane line probability map, and details are not repeated here.
And S603, acquiring the loss between the predicted lane line of the training image and the lane line in the lane line true value image of the training image.
Wherein the lane line true value graph is obtained based on the marking information of the lane line of the training image.
Alternatively, the loss between the predicted lane line and the lane line in the lane line true value map may be calculated by using a loss function.
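As one concrete (but not mandated) choice of loss function, a per-pixel binary cross-entropy between the predicted probability map and the 0/1 true value map can be computed; `pixelwise_cross_entropy` is an illustrative helper, not the patent's specified loss:

```python
import numpy as np

def pixelwise_cross_entropy(pred, truth, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a predicted lane line
    probability map and a 0/1 lane line true value map. One common choice;
    the text does not fix a specific loss function."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(truth * np.log(pred)
                           + (1.0 - truth) * np.log(1.0 - pred))))

pred = np.array([[0.9, 0.1],
                 [0.8, 0.2]])
truth = np.array([[1.0, 0.0],
                  [1.0, 0.0]])
loss = pixelwise_cross_entropy(pred, truth)  # small: pred matches truth
```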
And S604, adjusting the network parameters of the neural network according to the loss.
Optionally, the network parameters of the neural network may include a convolution kernel size, weight information, and the like.
In this step, the loss may be propagated backward through the neural network by gradient back-propagation, and the network parameters of the neural network adjusted accordingly.
After the step, a training process is completed to obtain a new neural network.
Further, steps S601 to S604 are repeated on the basis of the new neural network until the loss between the predicted lane line and the lane line in the lane line true value map falls within a preset loss range, at which point the trained neural network is obtained.
Illustratively, the neural network may be trained with one training image at a time, or may be trained with multiple training images at a time.
On the basis of the above embodiment, the present embodiment relates to a process of generating the above training image.
Fig. 7 is a schematic flow chart of a fifth embodiment of the lane line detection method according to the embodiment of the present invention, and as shown in fig. 7, before training the neural network, the method further includes:
and S701, collecting road surface images under a plurality of scenes.
S702 sets an image obtained by labeling a lane line on the road surface images in the plurality of scenes as the training image.
The plurality of scenes comprise at least two scenes of a daytime scene, a rainy scene, a foggy scene, a straight road scene, a curve scene, a tunnel scene, a strong light scene and a night scene.
Optionally, vehicle-mounted devices such as a camera on the vehicle may be used in advance to collect road surface images in the above scenes, and then lane lines on the collected road surface images may be marked in manners such as manual marking, so as to obtain training images in the scenes.
The training images obtained through the process cover various actual scenes, so that the neural network trained by using the training images has good robustness for lane line detection in various scenes, the detection time is short, and the detection result is high in accuracy.
As an alternative implementation, before the road surface image is input into the neural network in step S202, the road surface image may be first subjected to a distortion removal process to further improve the accuracy of the output result of the neural network.
In addition to the above embodiments, after the lane lines in the road surface image are determined, the lane lines in the road surface image may be mapped into a world coordinate system, so as to obtain the positions of the lane lines in the road surface image in the world coordinate system.
Optionally, each pixel point belonging to the lane line in the road image may be subjected to coordinate mapping, so as to obtain lane line information in the world coordinate system, and assist driving or automatic driving is performed based on the obtained lane line information in the world coordinate system.
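When the road surface is treated as a ground plane, the pixel-to-world mapping can be sketched with a homography; the matrix `H` below is made up for illustration, whereas in practice it would come from camera calibration:

```python
import numpy as np

# Hypothetical homography H mapping image pixel coordinates (u, v) to
# ground-plane world coordinates (x, y); these numbers are invented,
# not a real calibration result.
H = np.array([
    [0.02, 0.00, -2.0],
    [0.00, 0.05, -1.0],
    [0.00, 0.00,  1.0],
])

def pixel_to_world(u, v, H):
    """Map an image pixel to world coordinates via the homography."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

x, y = pixel_to_world(100, 40, H)   # -> (0.0, 1.0) with this H
```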
Fig. 8 is a block diagram of a first embodiment of a lane line detection apparatus according to an embodiment of the present invention, and as shown in fig. 8, the apparatus includes:
the first acquiring module 801 is configured to acquire a road surface image acquired by an onboard device installed on a vehicle.
A second obtaining module 802, configured to input the road surface image into a neural network, and output M probability maps corresponding to the road surface image through the neural network, where the M probability maps include N lane line probability maps and M-N non-lane line probability maps, and the N lane line probability maps respectively correspond to N lane lines on a road surface and are used to represent probabilities that pixel points in the road surface image belong to corresponding lane lines; the M-N non-lane line probability maps correspond to non-lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the non-lane lines, wherein N is a positive integer, and M is an integer larger than N.
The first determining module 803 is configured to determine a lane line in the road surface image according to the lane line probability map.
The device is used for realizing the method embodiments, the realization principle and the technical effect are similar, and the details are not repeated here.
Fig. 9 is a block diagram of a second embodiment of the lane line detection apparatus according to the embodiment of the present invention, and as shown in fig. 9, the first determining module 803 includes:
the first determining unit 8031 is configured to, when the lth lane line probability map includes a plurality of pixel points having probability values greater than or equal to a preset threshold, fit the lth lane line according to the plurality of pixel points having probability values greater than or equal to the preset threshold, where the lth lane line probability map is any one of the N lane line probability maps.
Fig. 10 is a block configuration diagram of a third embodiment of the lane line detection apparatus according to the embodiment of the present invention, and as shown in fig. 10, the first determining module 803 further includes:
the first determining unit 8032 is configured to, when all probability values of a first pixel point in multiple lane line probability maps are greater than or equal to a preset threshold, use the first pixel point as a pixel point when a first lane line is fitted, where the first lane line is a lane line corresponding to a lane line probability map corresponding to a maximum probability value in the multiple probability values.
Fig. 11 is a block diagram of a fourth embodiment of the lane line detection apparatus according to the embodiment of the present invention, and as shown in fig. 11, the first determining module 803 further includes:
a third determining unit 8033, configured to determine a non-lane line according to a plurality of pixel points having probability values greater than or equal to a preset threshold when the S-th non-lane line probability map includes a plurality of pixel points having probability values greater than or equal to the preset threshold, where the S-th non-lane line probability map is any one of the M-N non-lane line probability maps.
Fig. 12 is a block diagram of a fifth exemplary embodiment of a lane line detection apparatus according to an exemplary embodiment of the present invention, as shown in fig. 12, further including:
and the fusion module 804 is configured to perform fusion processing on the M probability maps to obtain a target probability map.
The adjusting module 805 is configured to adjust a pixel value of a pixel point in the target probability map corresponding to the first lane line probability map to a preset pixel value corresponding to the first lane line probability map.
The first lane line probability map is any one of the N lane line probability maps, and the pixel points corresponding to the first lane line probability map are the pixel points forming the fitted lane line in the first lane line probability map.
Fig. 13 is a block diagram of a sixth embodiment of the lane line detection apparatus according to the embodiment of the present invention, and as shown in fig. 13, the second obtaining module 802 includes:
a first obtaining unit 8021, configured to extract low-layer feature information of M channels of the road surface image through at least one convolutional layer of the neural network.
A second obtaining unit 8022, configured to extract, by at least one residual extraction layer of the neural network, high-level feature information of M channels of the road surface image based on the low-level feature information of the M channels.
A third obtaining unit 8023, configured to perform upsampling processing on the high-level feature information of the M channels through at least one upsampling layer of the neural network, so as to obtain M probability maps of the same size as the road surface image.
In another embodiment, the at least one convolutional layer comprises 6 to 10 connected convolutional layers, the at least one residual extraction layer comprises 7 to 12 connected residual extraction layers, and the at least one upsampling layer comprises 1 to 4 connected upsampling layers.
In another embodiment, the neural network is obtained by supervised training of a road surface training image set including lane line or non-lane line marking information.
In another embodiment, the neural network is obtained by supervised training of a road surface training image set including lane line or non-lane line marking information, and includes:
inputting training images included in the road surface training image set into the neural network, and acquiring a predicted lane line probability map of the training images;
fitting the predicted lane line of the training image according to a plurality of pixel points, included in the predicted lane line probability map, whose probability values are greater than or equal to a preset threshold;
obtaining the loss between the predicted lane line of the training image and the lane line in the lane line true value map of the training image, wherein the lane line true value map is obtained based on the marking information of the lane line of the training image;
adjusting network parameters of the neural network according to the loss.
Fig. 14 is a block diagram of a seventh embodiment of the lane marking detection apparatus according to the embodiment of the present invention, as shown in fig. 14, further including:
the acquisition module 806 is configured to acquire road surface images in multiple scenes, and use an image obtained by performing lane marking on the road surface images in the multiple scenes as a training image.
Wherein the plurality of scenes comprise at least two of a daytime scene, a rainy scene, a foggy scene, a straight road scene, a curve scene, a tunnel scene, a strong light scene and a night scene.
Fig. 15 is a block diagram of an eighth embodiment of the lane line detection apparatus according to the embodiment of the present invention, as shown in fig. 15, further including:
a preprocessing module 807 for performing distortion removal processing on the road surface image.
Fig. 16 is a block diagram of a ninth embodiment of the lane line detection apparatus according to the embodiment of the present invention, as shown in fig. 16, further including:
and the mapping module 808 is configured to map the lane line in the road surface image to a world coordinate system, so as to obtain a position of the lane line in the road surface image in the world coordinate system.
Fig. 17 is a block diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 17, the electronic device 1700 includes:
the memory 1701 is used to store program instructions.
The processor 1702, which is adapted to call and execute program instructions in the memory 1701, performs the method steps described above with respect to the first aspect.
Fig. 18 is a schematic flow chart of a driving control method according to an embodiment of the present invention, and on the basis of the foregoing embodiment, the embodiment of the present invention further provides a driving control method, including:
s1801, the driving control device obtains a lane line detection result of the road surface image.
And S1802, outputting prompt information and/or carrying out intelligent driving control on the vehicle by the driving control device according to the lane line detection result.
The execution subject of the present embodiment is a driving control device, and the driving control device of the present embodiment and the electronic device described in the above embodiments may be located in the same device, or may be located in different devices separately. The driving control device of the present embodiment is in communication connection with the electronic device.
The lane line detection result of the road surface image is obtained by the lane line detection method in the above embodiment, and the specific process refers to the description of the above embodiment and is not described herein again.
Specifically, the electronic device executes the lane line detection method to obtain the lane line detection result of the road surface image and outputs that result. The driving control device acquires the lane line detection result of the road surface image and, according to it, outputs prompt information and/or performs intelligent driving control on the vehicle.
The prompt information may include a lane departure warning, a lane keeping prompt, and the like.
The intelligent driving of this embodiment includes assisted driving and/or automatic driving.
The above-mentioned intelligent driving control may include: braking, changing the driving speed, changing the driving direction, lane keeping, changing a light state, driving mode switching, and the like, where the driving mode switching may be switching between assisted driving and automatic driving, for example, switching from assisted driving to automatic driving.
According to the driving control method provided by this embodiment, the driving control device acquires the lane line detection result of the road surface image and, according to that result, outputs prompt information and/or performs intelligent driving control on the vehicle, thereby improving the safety and reliability of intelligent driving.
Fig. 19 is a schematic structural diagram of a driving control apparatus according to an embodiment of the present invention. On the basis of the above embodiments, the driving control apparatus 1900 of this embodiment includes:
an obtaining module 1901, configured to obtain a lane line detection result of a road surface image, where the lane line detection result of the road surface image is obtained by the above-mentioned lane line detection method; and
a driving control module 1902, configured to output prompt information and/or perform intelligent driving control on the vehicle according to the lane line detection result.
The driving control apparatus of this embodiment may be used to implement the technical solutions of the above method embodiments; the implementation principles and technical effects are similar and are not repeated here.
Fig. 20 is a schematic diagram of an intelligent driving system according to an embodiment of the present invention. As shown in Fig. 20, the intelligent driving system 2000 of this embodiment includes a camera 2001, an electronic device 1700, and a driving control device 1900 that are communicatively connected, where the electronic device 1700 is as shown in Fig. 17, the driving control device 1900 is as shown in Fig. 19, and the camera 2001 is used to capture road surface images.
Specifically, as shown in Fig. 20, in actual use the camera 2001 captures a road surface image and transmits it to the electronic device 1700. The electronic device 1700 receives the road surface image and processes it according to the above lane line detection method to obtain the lane line detection result. The electronic device 1700 then transmits the obtained lane line detection result to the driving control device 1900, and the driving control device 1900 outputs prompt information and/or performs intelligent driving control on the vehicle according to the lane line detection result of the road surface image.
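The camera-to-detector-to-controller flow described above can be sketched schematically as follows. Both `detect_lane_lines` (standing in for the neural-network method of the embodiments) and the control policy are hypothetical placeholders, not the patented implementation.

```python
def detect_lane_lines(road_image):
    # Placeholder for the electronic device 1700: in the embodiments this
    # would run the neural network and return the lane line detection result.
    return {"lane_lines": [], "departing": False}

def driving_control(detection_result):
    # Placeholder for the driving control device 1900: output prompt
    # information and/or intelligent driving control actions.
    if detection_result["departing"]:
        return ["lane_departure_warning", "lane_keeping"]
    return ["keep_course"]

frame = object()  # stands in for a road surface image captured by camera 2001
actions = driving_control(detect_lane_lines(frame))
```

The point of the separation is the one the embodiment makes explicit: detection and control may live in one device or two, communicating only through the detection result.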
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A lane line detection method, characterized by comprising the following steps:
acquiring a road surface image acquired by vehicle-mounted equipment installed on a vehicle;
inputting the road surface image into a neural network, and outputting M probability maps corresponding to the road surface image through the neural network, wherein the M probability maps comprise N lane line probability maps and M-N non-lane line probability maps, and the N lane line probability maps respectively correspond to N lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the corresponding lane lines; the M-N non-lane line probability maps correspond to non-lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the non-lane lines, wherein N is a positive integer, and M is an integer larger than N;
and determining the lane lines in the road surface image according to the lane line probability map.
2. The method according to claim 1, wherein the determining the lane lines in the road surface image according to the lane line probability map comprises:
in response to an L-th lane line probability map including a plurality of pixel points whose probability values are greater than or equal to a preset threshold, fitting the L-th lane line according to the plurality of pixel points whose probability values are greater than or equal to the preset threshold, where the L-th lane line probability map is any one of the N lane line probability maps.
3. The method according to claim 1 or 2, wherein the determining the lane lines in the road surface image according to the lane line probability map comprises:
in response to a plurality of probability values corresponding to a first pixel point in a plurality of lane line probability maps all being greater than or equal to a preset threshold, using the first pixel point as a pixel point when fitting a first lane line, where the first lane line is the lane line corresponding to the lane line probability map corresponding to the maximum probability value among the plurality of probability values.
4. The method according to any one of claims 1-3, wherein the determining the lane line in the road surface image according to the lane line probability map comprises:
in response to an S-th non-lane line probability map including a plurality of pixel points whose probability values are greater than or equal to a preset threshold, determining a non-lane line according to the plurality of pixel points whose probability values are greater than or equal to the preset threshold, where the S-th non-lane line probability map is any one of the M-N non-lane line probability maps.
5. A lane line detection apparatus, comprising:
a first acquisition module, configured to acquire a road surface image acquired by vehicle-mounted equipment installed on a vehicle;
a second acquisition module, configured to input the road surface image into a neural network and output, through the neural network, M probability maps corresponding to the road surface image, wherein the M probability maps comprise N lane line probability maps and M-N non-lane line probability maps, the N lane line probability maps respectively correspond to N lane lines on a road surface and are used for representing the probability that pixel points in the road surface image belong to the corresponding lane lines, and the M-N non-lane line probability maps correspond to non-lane lines on the road surface and are used for representing the probability that pixel points in the road surface image belong to the non-lane lines, wherein N is a positive integer and M is an integer greater than N; and
a first determining module, configured to determine the lane lines in the road surface image according to the lane line probability map.
6. A driving control method, characterized by comprising:
acquiring, by a driving control device, a lane line detection result of a road surface image, the lane line detection result being obtained by the lane line detection method according to any one of claims 1-4; and
outputting, by the driving control device, prompt information and/or performing intelligent driving control on a vehicle according to the lane line detection result.
7. A driving control apparatus, characterized by comprising:
an obtaining module, configured to obtain a lane line detection result of a road surface image, where the lane line detection result of the road surface image is obtained by the lane line detection method according to any one of claims 1-4; and
a driving control module, configured to output prompt information and/or perform intelligent driving control on a vehicle according to the lane line detection result.
8. An electronic device, comprising:
a memory for storing program instructions;
a processor for invoking and executing program instructions in said memory for performing the method steps of any of claims 1-4.
9. An intelligent driving system, characterized by comprising: a camera for capturing road surface images, the electronic device according to claim 8, and the driving control apparatus according to claim 7, which are communicatively connected.
10. A readable storage medium, characterized in that a computer program is stored in the readable storage medium for performing the method steps of any of claims 1-4.
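The threshold-and-fit procedure of claims 1, 2, and 4 can be sketched as follows. The threshold value, the choice of a polynomial fit, and the function name are all illustrative assumptions; the claims specify only that pixel points at or above a preset threshold are selected and a lane line is fitted through them, not any particular fitting method.

```python
import numpy as np

def fit_lane_from_probability_map(prob_map, threshold=0.5, degree=2):
    """Select pixel points whose probability value is >= the preset threshold
    and fit a lane line through them, here as a polynomial x = f(y).
    Returns None when too few pixels exceed the threshold (no lane line)."""
    ys, xs = np.nonzero(prob_map >= threshold)
    if len(xs) <= degree:
        return None
    return np.polyfit(ys, xs, degree)  # coefficients of the fitted lane line
```

The same routine applies per probability map: once per lane line probability map (claim 2) and once per non-lane line probability map (claim 4), with claim 3's tie-breaking handled beforehand by assigning each pixel to the map with the maximum probability value.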
CN201811392943.8A 2018-11-21 2018-11-21 Lane line detection method and device, electronic device and readable storage medium Pending CN111209777A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201811392943.8A CN111209777A (en) 2018-11-21 2018-11-21 Lane line detection method and device, electronic device and readable storage medium
PCT/CN2019/119886 WO2020103892A1 (en) 2018-11-21 2019-11-21 Lane line detection method and apparatus, electronic device, and readable storage medium
KR1020217015000A KR20210080459A (en) 2018-11-21 2019-11-21 Lane detection method, apparatus, electronic device and readable storage medium
JP2021525040A JP2022506920A (en) 2018-11-21 2019-11-21 Compartment line detection methods, devices, electronic devices and readable storage media

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811392943.8A CN111209777A (en) 2018-11-21 2018-11-21 Lane line detection method and device, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
CN111209777A true CN111209777A (en) 2020-05-29

Family

ID=70773344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811392943.8A Pending CN111209777A (en) 2018-11-21 2018-11-21 Lane line detection method and device, electronic device and readable storage medium

Country Status (4)

Country Link
JP (1) JP2022506920A (en)
KR (1) KR20210080459A (en)
CN (1) CN111209777A (en)
WO (1) WO2020103892A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539403A (en) * 2020-07-13 2020-08-14 航天宏图信息技术股份有限公司 Agricultural greenhouse identification method and device and electronic equipment
CN111860255A (en) * 2020-07-10 2020-10-30 东莞正扬电子机械有限公司 Training and using method, device, equipment and medium of driving detection model
CN112446344A (en) * 2020-12-08 2021-03-05 北京深睿博联科技有限责任公司 Road condition prompting method and device, electronic equipment and computer readable storage medium
CN112560684A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line detection method, lane line detection device, electronic apparatus, storage medium, and vehicle
CN112633151A (en) * 2020-12-22 2021-04-09 浙江大华技术股份有限公司 Method, device, equipment and medium for determining zebra crossing in monitored image
CN113739811A (en) * 2021-09-03 2021-12-03 阿波罗智能技术(北京)有限公司 Method and device for training key point detection model and generating high-precision map lane line

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178215B (en) * 2019-12-23 2024-03-08 深圳成谷科技有限公司 Sensor data fusion processing method and device
CN112373474B (en) * 2020-11-23 2022-05-17 重庆长安汽车股份有限公司 Lane line fusion and transverse control method, system, vehicle and storage medium
CN112464742B (en) * 2021-01-29 2024-05-24 福建农林大学 Method and device for automatically identifying red tide image
KR102487408B1 (en) * 2021-09-07 2023-01-12 포티투닷 주식회사 Apparatus and method for determining a routing route for mobility and medium recording it
CN114004809A (en) * 2021-10-29 2022-02-01 北京百度网讯科技有限公司 Skin image processing method, device, electronic equipment and medium
CN115131759A (en) * 2022-07-01 2022-09-30 上海商汤临港智能科技有限公司 Traffic marking recognition method, device, computer equipment and storage medium
CN116863429B (en) * 2023-07-26 2024-05-31 小米汽车科技有限公司 Training method of detection model, and determination method and device of exercisable area

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654064A (en) * 2016-01-25 2016-06-08 北京中科慧眼科技有限公司 Lane line detection method and device as well as advanced driver assistance system
CN108216229A (en) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 The vehicles, road detection and driving control method and device
CN108280450A (en) * 2017-12-29 2018-07-13 安徽农业大学 A kind of express highway pavement detection method based on lane line
CN108846328A (en) * 2018-05-29 2018-11-20 上海交通大学 Lane detection method based on geometry regularization constraint

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4659631B2 (en) * 2005-04-26 2011-03-30 富士重工業株式会社 Lane recognition device
CN108052904B (en) * 2017-12-13 2021-11-30 辽宁工业大学 Method and device for acquiring lane line
CN108875603B (en) * 2018-05-31 2021-06-04 上海商汤智能科技有限公司 Intelligent driving control method and device based on lane line and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654064A (en) * 2016-01-25 2016-06-08 北京中科慧眼科技有限公司 Lane line detection method and device as well as advanced driver assistance system
CN108216229A (en) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 The vehicles, road detection and driving control method and device
CN108280450A (en) * 2017-12-29 2018-07-13 安徽农业大学 A kind of express highway pavement detection method based on lane line
CN108846328A (en) * 2018-05-29 2018-11-20 上海交通大学 Lane detection method based on geometry regularization constraint

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860255A (en) * 2020-07-10 2020-10-30 东莞正扬电子机械有限公司 Training and using method, device, equipment and medium of driving detection model
CN111539403A (en) * 2020-07-13 2020-08-14 航天宏图信息技术股份有限公司 Agricultural greenhouse identification method and device and electronic equipment
CN112446344A (en) * 2020-12-08 2021-03-05 北京深睿博联科技有限责任公司 Road condition prompting method and device, electronic equipment and computer readable storage medium
CN112560684A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line detection method, lane line detection device, electronic apparatus, storage medium, and vehicle
CN112560684B (en) * 2020-12-16 2023-10-24 阿波罗智联(北京)科技有限公司 Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
US11967132B2 (en) 2020-12-16 2024-04-23 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Lane marking detecting method, apparatus, electronic device, storage medium, and vehicle
CN112633151A (en) * 2020-12-22 2021-04-09 浙江大华技术股份有限公司 Method, device, equipment and medium for determining zebra crossing in monitored image
CN112633151B (en) * 2020-12-22 2024-04-12 浙江大华技术股份有限公司 Method, device, equipment and medium for determining zebra stripes in monitoring images
CN113739811A (en) * 2021-09-03 2021-12-03 阿波罗智能技术(北京)有限公司 Method and device for training key point detection model and generating high-precision map lane line
CN113739811B (en) * 2021-09-03 2024-06-11 阿波罗智能技术(北京)有限公司 Method and equipment for training key point detection model and generating high-precision map lane line

Also Published As

Publication number Publication date
WO2020103892A1 (en) 2020-05-28
JP2022506920A (en) 2022-01-17
KR20210080459A (en) 2021-06-30

Similar Documents

Publication Publication Date Title
CN111209777A (en) Lane line detection method and device, electronic device and readable storage medium
CN111209780A (en) Lane line attribute detection method and device, electronic device and readable storage medium
US11694430B2 (en) Brake light detection
CN105512623B (en) Based on multisensor travelling in fog day vision enhancement and visibility early warning system and method
CN103714538B (en) Road edge detection method and device and vehicle
CN110689724B (en) Automatic motor vehicle zebra crossing present pedestrian auditing method based on deep learning
CN111222522B (en) Neural network training, road surface detection and intelligent driving control method and device
CN108399403A (en) A kind of vehicle distance detecting method calculated based on car plate size
CN103902985A (en) High-robustness real-time lane detection algorithm based on ROI
CN110348273B (en) Neural network model training method and system and lane line identification method and system
CN114418895A (en) Driving assistance method and device, vehicle-mounted device and storage medium
CN106650730A (en) Turn signal lamp detection method and system in car lane change process
CN105426863A (en) Method and device for detecting lane line
US20210117700A1 (en) Lane line attribute detection
CN111209779A (en) Method, device and system for detecting drivable area and controlling intelligent driving
CN107346547A (en) Real-time foreground extracting method and device based on monocular platform
CN110310485B (en) Surrounding information acquisition and display system
CN103886609A (en) Vehicle tracking method based on particle filtering and LBP features
CN117495847B (en) Intersection detection method, readable storage medium and intelligent device
CN110458029A (en) Vehicle checking method and device in a kind of foggy environment
CN104268859A (en) Image preprocessing method for night lane line detection
CN107452230B (en) Obstacle detection method and device, terminal equipment and storage medium
CN115100618A (en) Multi-source heterogeneous perception information multi-level fusion representation and target identification method
CN111824164B (en) Surrounding information acquisition and display method
CN114037969A (en) Automatic driving lane information detection method based on radar point cloud and image fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200529