CN110688971A - Method, device and equipment for detecting dotted lane line


Info

Publication number
CN110688971A
Authority
CN
China
Prior art keywords
endpoint
lane line
road image
determining
pixel point
Prior art date
Legal status
Granted
Application number
CN201910944245.2A
Other languages
Chinese (zh)
Other versions
CN110688971B (en)
Inventor
王哲
林逸群
石建萍
Current Assignee
Shanghai Lingang Jueying Intelligent Technology Co ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN201910944245.2A
Publication of CN110688971A
Priority to JP2021571821A
Priority to KR1020217031171A
Priority to PCT/CN2020/117188
Application granted
Publication of CN110688971B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road (scene context exterior to a vehicle, using sensors mounted on the vehicle)
    • B60W 40/06: Road conditions (estimation of non-directly measurable driving parameters related to ambient conditions)
    • G06N 3/08: Learning methods (computing arrangements based on biological models; neural networks)
    • G06T 7/11: Region-based segmentation (image analysis; segmentation, edge detection)
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI] (image preprocessing)
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/993: Evaluation of the quality of the acquired pattern
    • B60W 2420/403: Image sensing, e.g. optical camera (indexing codes relating to the type of sensors)
    • G06T 2207/20081: Training; learning (indexing scheme for image analysis or image enhancement; special algorithmic details)
    • G06T 2207/30256: Lane; road marking (subject of image: vehicle exterior; vicinity of vehicle)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Transportation (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present disclosure provide a method, an apparatus and a device for detecting a dashed lane line. The method for detecting a dashed lane line includes: performing feature extraction on a road image to be detected to obtain a feature map of the road image; determining, according to the feature map, a lane line region in the road image and endpoint pixel points in the road image, where the endpoint pixel points are pixel points that may be endpoints of a dashed lane line in the road image; and determining a dashed lane line in the road image based on the lane line region and the endpoint pixel points. The embodiments of the present disclosure thereby realize segment-wise detection of the dashed lane line.

Description

Method, device and equipment for detecting dotted lane line
Technical Field
The present disclosure relates to machine learning technologies, and in particular, to a method, an apparatus and a device for detecting a dashed lane line.
Background
Detection of lane information on a road greatly assists positioning, decision-making and the like in automatic driving. In the related art, lane lines are detected by traditional detection algorithms such as the Hough transform, and the lane lines in an image are extracted by means of manually designed features. However, in conventional lane line detection based on machine learning, a dashed lane line is detected as one continuous lane line.
Disclosure of Invention
In view of the above, the present disclosure at least provides a method, an apparatus and a device for detecting a dashed lane line.
In a first aspect, a method for detecting a dashed lane line is provided. The method includes: performing feature extraction on a road image to be detected to obtain a feature map of the road image; determining, according to the feature map, a lane line region in the road image and endpoint pixel points in the road image, where the endpoint pixel points are pixel points that may be endpoints of a dashed lane line in the road image; and determining a dashed lane line in the road image based on the lane line region and the endpoint pixel points.
In some optional embodiments, the determining a lane line region in the road image according to the feature map includes: determining the region confidence of each pixel point in the road image according to the feature map, wherein the region confidence is the confidence that each pixel point in the road image belongs to a lane line region; and determining the region including the pixel points with the region confidence coefficient reaching the region threshold value as the lane line region.
In some optional embodiments, the determining, according to the feature map, endpoint pixel points in the road image includes: determining an endpoint confidence of each pixel point in the road image according to the feature map, where the endpoint confidence is the confidence that the pixel point belongs to an endpoint of a dashed lane line; determining whether the endpoint confidence of each pixel point reaches an endpoint threshold; and determining a pixel point whose endpoint confidence reaches the endpoint threshold as an endpoint pixel point.
In some optional embodiments, after determining whether the endpoint confidence of each pixel point reaches the endpoint threshold and before determining a pixel point whose endpoint confidence reaches the endpoint threshold as an endpoint pixel point, the method further includes: determining that, in the point set formed by the adjacent pixel points of the pixel point whose endpoint confidence reaches the endpoint threshold, there is at least one adjacent pixel point whose endpoint confidence is higher than the endpoint threshold.
In some optional embodiments, after determining whether the endpoint confidence of each pixel point reaches the endpoint threshold, the method further includes: determining that, in the point set formed by the adjacent pixel points of a pixel point whose endpoint confidence reaches the endpoint threshold, there is no adjacent pixel point whose endpoint confidence is higher than the endpoint threshold; and determining that the pixel point whose endpoint confidence reaches the endpoint threshold is not an endpoint pixel point.
In some optional embodiments, the endpoint pixel points within a preset region range form an endpoint pixel point set; and determining a dashed lane line in the road image based on the lane line region and the endpoint pixel points includes: determining endpoint coordinates in the road image according to the endpoint pixel points, in each endpoint pixel point set, that are located in the lane line region; and determining a dashed lane line in the road image according to the endpoint coordinates in the road image.
In some optional embodiments, the determining the endpoint coordinates in the road image according to the endpoint pixel points located in the lane line region in each endpoint pixel point set includes: and carrying out weighted average on the coordinates of the endpoint pixel points in the lane line area in an endpoint pixel point set to obtain the endpoint coordinate of an endpoint in the road image.
In some optional embodiments, after determining the endpoint coordinates in the road image according to the endpoint pixel points in each endpoint pixel point set and located in the lane line region, the method further includes: determining the confidence coefficient of an endpoint in the road image according to the endpoint confidence coefficient of the endpoint pixel point in the endpoint pixel point set and in the lane line area; and removing the end points of which the confidence degrees are lower than a preset threshold value from the determined end points in the road image.
In some optional embodiments, after determining the endpoint coordinates in the road image according to the endpoint pixel points in each endpoint pixel point set located in the lane line region, and before determining the dashed lane line in the road image according to the endpoint coordinates in the road image, the method further includes: determining a near end point and a far end point in the end points in the road image according to the end point coordinates in the road image; determining a dotted lane line in the road image according to the endpoint coordinates in the road image, including: and determining a dotted lane line in the road image according to the lane line area and a near end point and a far end point of the end points in the road image.
In some optional embodiments, the feature extraction performed on the road image to be detected to obtain the feature map of the road image is executed by a feature extraction network; the determining of the lane line region in the road image according to the feature map is executed by a region prediction network; and the determining of the endpoint pixel points in the road image according to the feature map is executed by an endpoint prediction network.
In some optional embodiments, the feature extraction network, the area prediction network, and the endpoint prediction network are trained by: carrying out feature extraction on the road sample image by using a feature extraction network to obtain a feature map of the road sample image; the road sample image comprises a dotted lane line; determining a lane line area in the road sample image according to the characteristic diagram of the road sample image by using an area prediction network; determining endpoint pixel points in the road sample image according to the feature map of the road sample image by utilizing an endpoint prediction network; determining a first network loss according to the difference between the determined lane line region in the road sample image and the marked lane line region in the road sample image; adjusting network parameters of the feature extraction network and network parameters of the regional prediction network according to the first network loss; determining a second network loss according to the difference between the determined endpoint pixel point in the road sample image and the marked endpoint pixel point in the road sample image; and adjusting the network parameters of the endpoint prediction network and the network parameters of the feature extraction network according to the second network loss.
In some optional embodiments, the marked endpoint pixel points in the road sample image include: the pixel points of the actual endpoints of the dashed lane lines in the road sample image, and the pixel points adjacent to those actual endpoints.
In some optional embodiments, after the determining the dashed lane line in the road image, the method further comprises: and correcting the positioning information of the intelligent vehicle in the road shown by the road image according to the detected end point of the dotted lane line.
In some optional embodiments, the correcting, according to the detected end point of the dashed lane line, the positioning information of the intelligent vehicle in the road shown in the road image includes: determining a first distance by an image ranging method according to the detected end point of the dotted lane line, wherein the first distance represents the distance between the detected target end point of the dotted lane line and the intelligent vehicle; determining a second distance according to the positioning longitude and latitude of the intelligent vehicle and the endpoint longitude and latitude of the target endpoint in a driving assistance map used by the intelligent vehicle, wherein the second distance represents the distance between the target endpoint determined according to the driving assistance map and the intelligent vehicle; and correcting the positioning longitude and latitude of the intelligent vehicle according to the error between the first distance and the second distance.
In a second aspect, an apparatus for detecting a dashed lane line is provided. The apparatus includes: a feature extraction module, configured to perform feature extraction on a road image to be detected to obtain a feature map of the road image; a feature processing module, configured to determine, according to the feature map, a lane line region in the road image and endpoint pixel points in the road image, where the endpoint pixel points are pixel points that may be endpoints of a dashed lane line in the road image; and a lane line determining module, configured to determine a dashed lane line in the road image based on the lane line region and the endpoint pixel points.
In some optional embodiments, the feature processing module comprises: the region determining submodule is used for determining the region confidence coefficient of each pixel point in the road image according to the characteristic map, and the region confidence coefficient is the confidence coefficient that each pixel point in the road image belongs to the lane line region; and determining the region including the pixel points with the region confidence coefficient reaching the region threshold value as the lane line region.
In some optional embodiments, the feature processing module includes: an endpoint pixel submodule, configured to determine an endpoint confidence of each pixel point in the road image according to the feature map, where the endpoint confidence is the confidence that the pixel point belongs to an endpoint of a dashed lane line; determine whether the endpoint confidence of each pixel point reaches an endpoint threshold; and determine a pixel point whose endpoint confidence reaches the endpoint threshold as an endpoint pixel point.
In some optional embodiments, the endpoint pixel submodule is further configured to: after determining whether the endpoint confidence of each pixel point reaches the endpoint threshold and before determining a pixel point whose endpoint confidence reaches the endpoint threshold as an endpoint pixel point, determine that, in the point set formed by the adjacent pixel points of that pixel point, there is at least one adjacent pixel point whose endpoint confidence is higher than the endpoint threshold.
In some optional embodiments, the endpoint pixel submodule is further configured to: after determining whether the endpoint confidence of each pixel point reaches the endpoint threshold, determine that, in the point set formed by the adjacent pixel points of a pixel point whose endpoint confidence reaches the endpoint threshold, there is no adjacent pixel point whose endpoint confidence is higher than the endpoint threshold; and determine that the pixel point whose endpoint confidence reaches the endpoint threshold is not an endpoint pixel point.
In some optional embodiments, the lane line determining module is specifically configured to: determining an endpoint coordinate in the road image according to the endpoint pixel point in each endpoint pixel point set and located in the lane line area; and determining a dotted lane line in the road image according to the endpoint coordinates in the road image.
In some optional embodiments, the lane line determining module, configured to, when determining the endpoint coordinate in the road image according to the endpoint pixel point located in the lane line region in each endpoint pixel point set, include: and carrying out weighted average on the coordinates of the endpoint pixel points in the lane line area in an endpoint pixel point set to obtain the endpoint coordinate of an endpoint in the road image.
In some optional embodiments, the lane line determining module is further configured to: after determining the endpoint coordinates in the road image according to the endpoint pixel points in each endpoint pixel point set and in the lane line region, determining the confidence of an endpoint in the road image according to the endpoint confidence of the endpoint pixel points in one endpoint pixel point set and in the lane line region; and removing the end points of which the confidence degrees are lower than a preset threshold value from the determined end points in the road image.
In some optional embodiments, the lane line determining module is further configured to: after determining the endpoint coordinates in the road image according to the endpoint pixel points, in each endpoint pixel point set, that are located in the lane line region, and before determining a dashed lane line in the road image according to the endpoint coordinates in the road image, determine a near end point and a far end point among the endpoints in the road image according to the endpoint coordinates in the road image; and determine a dashed lane line in the road image according to the lane line region and the near end point and far end point among the endpoints in the road image.
In some optional embodiments, the feature extraction module is specifically configured to perform feature extraction on a road image to be detected through a feature extraction network to obtain a feature map of the road image; the feature processing module is specifically configured to: and determining a lane line region in the road image according to the feature map through a region prediction network, and determining an endpoint pixel point in the road image according to the feature map through an endpoint prediction network.
In some optional embodiments, the apparatus further comprises: a network training module for training the feature extraction network, the area prediction network, and the endpoint prediction network by: carrying out feature extraction on the road sample image by using a feature extraction network to obtain a feature map of the road sample image; determining a lane line area in the road sample image according to the characteristic diagram of the road sample image by using an area prediction network; the road sample image comprises a dotted lane line; determining endpoint pixel points in the road sample image according to the feature map of the road sample image by utilizing an endpoint prediction network; determining a first network loss according to the difference between the determined lane line region in the road sample image and the marked lane line region in the road sample image; adjusting network parameters of the feature extraction network and network parameters of the regional prediction network according to the first network loss; determining a second network loss according to the difference between the determined endpoint pixel point in the road sample image and the marked endpoint pixel point in the road sample image; and adjusting the network parameters of the endpoint prediction network and the network parameters of the feature extraction network according to the second network loss.
In some optional embodiments, the marked endpoint pixel points in the road sample image include: and the pixel points of the actual end points of the dotted line lane lines in the road sample image and the adjacent pixel points of the actual end points.
In some optional embodiments, the apparatus further comprises: and the positioning correction module is used for correcting the positioning information of the intelligent vehicle in the road shown by the road image according to the detected end point of the dotted lane line.
In some optional embodiments, the positioning correction module is specifically configured to: determining a first distance by an image ranging method according to the detected end point of the dotted lane line, wherein the first distance represents the distance between the detected target end point of the dotted lane line and the intelligent vehicle; determining a second distance according to the positioning longitude and latitude of the intelligent vehicle and the endpoint longitude and latitude of the target endpoint in a driving assistance map used by the intelligent vehicle, wherein the second distance represents the distance between the target endpoint determined according to the driving assistance map and the intelligent vehicle; and correcting the positioning longitude and latitude of the intelligent vehicle according to the error between the first distance and the second distance.
In a third aspect, an electronic device is provided, the device comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method according to any of the embodiments of the present application when executing the computer instructions.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any of the embodiments of the application.
According to the method, apparatus and device for detecting a dashed lane line provided by the embodiments of the present disclosure, the lane line region and the endpoint pixel points are detected from the road image, and each segment of the dashed lane line is determined by combining the lane line region with the endpoint pixel points, thereby realizing segment-wise detection of the dashed lane line.
Drawings
In order to more clearly illustrate one or more embodiments of the present disclosure or the technical solutions in the related art, the drawings used in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments described in one or more embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 illustrates a method for detecting a dashed lane line according to at least one embodiment of the present disclosure;
fig. 2 illustrates a method for detecting a dashed lane line according to at least one embodiment of the present disclosure;
fig. 3 illustrates a schematic diagram of an endpoint pixel point set provided by at least one embodiment of the present disclosure;
fig. 4 illustrates a detection network of a dashed lane line provided by at least one embodiment of the present disclosure;
fig. 5 illustrates a training method of a detection network of a dashed lane line provided in at least one embodiment of the present disclosure;
fig. 6 illustrates an image processing process provided by at least one embodiment of the present disclosure;
fig. 7 illustrates a method for detecting a dashed lane line according to at least one embodiment of the present disclosure;
fig. 8 illustrates a detection apparatus for a dashed lane line provided in at least one embodiment of the present disclosure;
fig. 9 illustrates another dashed lane line detection apparatus provided in at least one embodiment of the present disclosure;
fig. 10 illustrates a detection apparatus for a dashed lane line provided in at least one embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. All other embodiments obtained by those of ordinary skill in the art based on one or more embodiments of the present disclosure without inventive effort shall fall within the protection scope of the present disclosure.
In automatic driving, some feature points on the road need to be detected to assist positioning; for example, the current position of a vehicle can be located more accurately by detecting such feature points. The end points of the dashed lane lines on the road are also usable road feature points. However, in current lane line detection, a dashed lane line is detected as a continuous lane line, and the end points of the dashed lane line are neither identified nor fully utilized. Therefore, a method capable of detecting the end points of dashed lane lines needs to be developed.
In view of this, at least one embodiment of the present disclosure provides a method for detecting a dashed lane line, which can accurately detect each segment of a dashed lane line, so that the end points of the dashed lane line can be detected and the feature points available for automatic driving can be increased.
Fig. 1 provides a method for detecting a dashed lane line, which may include the following processes:
in step 100, feature extraction is performed on a road image to be detected to obtain a feature map of the road image.
In this step, the road image to be detected includes a dashed lane line.
The embodiment does not limit the manner of obtaining the feature map of the road image; for example, the feature map may be extracted by a neural network, or may be obtained in other manners.
In step 102, a lane line region in the road image and an endpoint pixel point in the road image are determined according to the feature map.
The endpoint pixel points are pixel points that may be endpoints of a dashed lane line in the road image.
For example, the region confidence of each pixel point in the road image may be determined according to a feature map, where the region confidence is the confidence that each pixel point in the road image belongs to a lane line region; and determining the region including the pixel points with the region confidence coefficient reaching the region threshold value as the lane line region.
For example, according to the feature map, determining an endpoint confidence of each pixel point in the road image, where the endpoint confidence is a confidence that each pixel point in the road image belongs to an endpoint of a dashed lane line; determining whether the endpoint confidence of each pixel point reaches an endpoint threshold value; and determining the pixel point with the endpoint confidence coefficient reaching the endpoint threshold value as the endpoint pixel point.
In step 104, a dashed lane line in the road image is determined based on the lane line region and the end point pixel points.
For example, the dotted lane line is in the lane line region, and therefore, the endpoint pixel points that are not in the lane line region may be removed, so that the endpoint of the dotted lane line may be determined only according to the endpoint pixel points located in the lane line region, and a section of the dotted lane line may be obtained according to the endpoint.
In the method for detecting the dashed line lane line according to the embodiment, the region of the lane line and the endpoint pixel points are detected according to the road image, and each segment in the dashed line lane line is determined by combining the region of the lane line and the endpoint pixel points, so that the segmented detection of the dashed line lane line is realized.
Fig. 2 is a method for detecting a dashed lane line, which is provided in at least one embodiment of the present disclosure and describes a detection process of the dashed lane line in more detail. As shown in fig. 2, the method may include the following processing, where it should be noted that the execution order of each step is not limited by the present embodiment.
In step 200, feature extraction is performed on the road image to be detected to obtain a feature map of the road image.
In this step, the road image may be, for example, a road image acquired by a vehicle-mounted camera, a road reflectance image based on a laser radar, or a high-definition road image captured by a satellite, which may be used for high-precision mapping. For example, the road image may be an image captured by the intelligent driving device on the road on which the intelligent driving device is driving, and various types of lane lines, such as a solid lane line, a dashed lane line, and the like, may be included in the road image.
In step 202, a lane line region in the road image is determined according to the feature map.
In this step, the confidence that each pixel point in the road image belongs to the lane line region can be determined according to the feature map, and the lane line region is determined by the pixel point with the confidence higher than the region threshold.
For example, a threshold is set, which may be referred to as a region threshold, and if the confidence of a pixel belonging to a lane line region is higher than the region threshold, the pixel is reserved as the lane line region; otherwise, if the confidence of a pixel point belonging to the lane line region is lower than the region threshold, the pixel point is considered not to belong to the lane line region.
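As a non-limiting illustration, the thresholding described above can be sketched as follows; the confidence map name, the NumPy representation and the example threshold of 0.5 are assumptions for illustration only.

```python
import numpy as np

def lane_region_mask(region_conf: np.ndarray, region_threshold: float = 0.5) -> np.ndarray:
    """Retain pixel points whose lane line region confidence reaches the region threshold.

    region_conf: an H x W map of per-pixel confidences output by the region
    prediction branch (illustrative representation).
    """
    return region_conf >= region_threshold
```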
In step 204, the end point confidence of each pixel point in the road image is determined according to the feature map. In this step, the confidence that each pixel point in the road image belongs to the endpoint can be determined according to the feature map.
In step 206, the pixel point with the endpoint confidence reaching the endpoint threshold is selected.
In one example, a threshold may be set, which may be referred to as an endpoint threshold. If the confidence of a pixel point belonging to the endpoint is lower than the endpoint threshold, the pixel point can be considered not to belong to the endpoint, that is, the pixel point with the confidence lower than the endpoint threshold is deleted from the prediction result of the endpoint. If the confidence of a pixel point belonging to an endpoint is higher than the endpoint threshold, the pixel point can be considered to belong to the endpoint of the dashed lane line.
In step 208, for each pixel point whose endpoint confidence reaches the endpoint threshold, it is determined whether at least one neighboring pixel point with an endpoint confidence higher than the endpoint threshold exists in the point set formed by the neighboring pixel points of that pixel point.
Optionally, in order to make the endpoint prediction result more accurate, the predicted endpoint pixel points may be further screened. If at least one adjacent pixel point in the adjacent pixel point set of an endpoint pixel point has a confidence of belonging to an endpoint that is higher than the endpoint threshold, the endpoint pixel point is retained. Otherwise, if the confidences of belonging to an endpoint of all adjacent pixel points of the endpoint pixel point are lower than the endpoint threshold, the endpoint pixel point is an isolated point. In an actual prediction result, an endpoint of a dashed lane line comprises a plurality of adjacent pixel points, so an isolated point is unlikely to be an endpoint and can be eliminated.
In a specific implementation, an optional implementation manner of the above screening of isolated points may be: acquiring the endpoint confidence of all adjacent pixels of the endpoint pixel, wherein the endpoint confidence is the confidence that the adjacent pixels belong to the endpoint, and if the adjacent pixels with the endpoint confidence larger than the endpoint threshold exist in all the adjacent pixels of the endpoint pixel, determining that the endpoint pixel is not an isolated point; otherwise, if the confidences of all adjacent pixels of an endpoint pixel are lower than the endpoint threshold, the endpoint pixel is an isolated point.
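A minimal sketch of this isolated-point screening is given below; the use of an 8-neighborhood and of SciPy's maximum filter is an implementation assumption rather than a requirement of the embodiments.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def screen_isolated_endpoints(endpoint_conf: np.ndarray, endpoint_threshold: float = 0.5) -> np.ndarray:
    """Drop endpoint candidates none of whose 8 neighbors also exceeds the endpoint threshold."""
    candidates = endpoint_conf >= endpoint_threshold
    # Maximum confidence among the 8 neighbors of every pixel; the footprint
    # excludes the center pixel so a point cannot support itself.
    footprint = np.ones((3, 3), dtype=bool)
    footprint[1, 1] = False
    neighbor_max = maximum_filter(endpoint_conf, footprint=footprint)
    return candidates & (neighbor_max > endpoint_threshold)
```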
If the determination result in step 208 is yes, step 210 is executed.
If the determination result in step 208 is negative, that is, the endpoint pixel is an isolated point, step 212 is executed.
In step 210, it is determined that the pixel whose endpoint confidence reaches the endpoint threshold is an endpoint pixel. Execution continues with step 214.
In step 212, it is determined that the pixel whose endpoint confidence reaches the endpoint threshold is not an endpoint pixel.
In step 214, the endpoint coordinates in the road image are determined according to the endpoint pixel points in the lane line region in each endpoint pixel point set.
For example, an endpoint of the dashed lane line may be considered to be formed by a plurality of pixels, and these pixels may be the predicted endpoint pixels. The coordinates of the endpoint pixel points in the lane line region in an endpoint pixel point set can be weighted and averaged to obtain the endpoint coordinate of an endpoint in the road image.
The endpoint pixel point set in this step is a set formed by at least one endpoint pixel point within a preset region range. Because the endpoint pixel points at and around an endpoint of one segment of the dashed line, i.e., within its neighborhood range, can be taken as one endpoint pixel point set, an endpoint pixel point set can be regarded as the pixel points corresponding to the endpoint of one segment of a dashed lane line together with the pixel points in its neighborhood.
As shown in fig. 3, at least one endpoint pixel, for example, the endpoint pixel 31, is included in the preset region L, and these endpoint pixels form an endpoint pixel set. From these endpoint pixels, the endpoint coordinates of a corresponding one of endpoints 32 may be determined.
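The embodiments do not prescribe how endpoint pixel points within a preset region range are grouped into one set; one straightforward realization, shown below as an assumed sketch, is connected-component labeling over the retained endpoint mask.

```python
import numpy as np
from scipy.ndimage import label

def group_endpoint_pixels(endpoint_mask: np.ndarray):
    """Group retained endpoint pixel points into endpoint pixel point sets.

    endpoint_mask: boolean H x W mask of endpoint pixel points (illustrative).
    Returns a list of (row, col) coordinate arrays, one per set.
    """
    structure = np.ones((3, 3), dtype=int)  # 8-connectivity
    labels, num_sets = label(endpoint_mask, structure=structure)
    return [np.argwhere(labels == i) for i in range(1, num_sets + 1)]
```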
For example, if all the endpoint pixel points within the preset region range L are located in the lane line region, the coordinates of the at least one endpoint pixel point may be weighted. Assuming that the coordinates of each endpoint pixel point are denoted (x, y), the x coordinates of all the endpoint pixel points are weighted and averaged to obtain the x0 coordinate of the endpoint 32, and the y coordinates of all the endpoint pixel points are weighted and averaged to obtain the y0 coordinate of the endpoint 32. Of course, the above manner is merely an example, and the actual implementation is not limited thereto.
The (x0, y0) above may be referred to as the endpoint coordinates of the endpoint 32, where the endpoint 32 is an endpoint of one dashed lane segment in the road image. A dashed lane line may include multiple segments, each of which may be referred to as a dashed lane segment. A dashed lane segment includes two endpoints. As described above, the endpoint coordinates of each endpoint are obtained by weighted averaging of the coordinates of a plurality of endpoint pixel points.
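The weighted averaging of an endpoint pixel point set can be sketched as follows; using the endpoint confidences as the weights is only one possible choice and is an assumption here. With uniform weights this reduces to the plain average of the set's coordinates.

```python
import numpy as np

def endpoint_coordinate(pixel_coords: np.ndarray, pixel_confs: np.ndarray):
    """Weighted average of one endpoint pixel point set.

    pixel_coords: N x 2 array of (x, y) coordinates of the set's pixel points
    that lie inside the lane line region; pixel_confs: their endpoint
    confidences, used here as weights (an illustrative assumption).
    """
    weights = pixel_confs / pixel_confs.sum()
    x0, y0 = (pixel_coords * weights[:, None]).sum(axis=0)
    return x0, y0
```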
Further, in one embodiment, after determining the endpoint coordinates in the road image, the method further comprises: determining the confidence coefficient of an endpoint in the road image according to the endpoint confidence coefficient of the endpoint pixel point in the endpoint pixel point set and in the lane line area; and removing the end points of which the confidence degrees are lower than a preset threshold value from the determined end points in the road image.
In step 216, a dashed lane line in the road image is determined according to the endpoint coordinates in the road image.
In this step, after the endpoint coordinates in the road image are determined, a near end point and a far end point among the endpoints in the road image are determined according to the endpoint coordinates in the road image. For one dashed lane segment, the near end point can be understood as the endpoint closer to the image acquisition device of the intelligent driving device, and the far end point as the endpoint farther away from it.
After the near-end point and the far-end point are determined, the broken-line lane line in the road image is determined according to the lane line area and the near-end point and the far-end point in the end points in the road image. For example, a segment of the dashed lane line is obtained by connecting the near end point with the far end point and combining the lane line region.
In another embodiment, a plurality of endpoints located in one lane line area may be subjected to coordinate sorting to determine a starting endpoint and an ending endpoint of each dashed lane line. For example, the image height direction of the road image may be taken as the y direction, and the near end point and the far end point may be determined by sorting according to the y coordinates of the respective end points. The near end point is an end point with a smaller y coordinate, and the far end point is an end point with a larger y coordinate.
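Following the convention just described (the near end point has the smaller y coordinate), a sketch of this sorting step might look as follows; the coordinate convention depends on the image coordinate system actually used and is an assumption here.

```python
def near_and_far_endpoints(endpoints):
    """Sort the endpoints of one dashed lane segment by their y coordinate.

    endpoints: list of (x, y) tuples located in the same lane line region.
    Returns (near end point, far end point), where the near end point is the
    one with the smaller y coordinate, as described above.
    """
    ordered = sorted(endpoints, key=lambda point: point[1])
    return ordered[0], ordered[-1]
```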
In addition, the method may further include: determining the confidence of each endpoint in the road image according to the endpoint confidences of the endpoint pixel points, located in the lane line region, in each endpoint pixel point set; this may likewise be done by a weighted average. Endpoints whose confidence is lower than a preset threshold are then removed from the determined endpoints. In this way, blurred endpoints at greater distances in some road images can be removed.
In the method for detecting a dashed lane line provided in this embodiment, after the lane line region and the endpoint pixel point detection results are determined, the endpoint pixel points are screened and only those located in the lane line region are retained. The lane line endpoints are then determined from the screened endpoint pixel points, and blurred endpoints far away in the lane image are eliminated according to the confidence of each endpoint. This improves the accuracy of endpoint detection and, in turn, the accuracy of the dashed lane line determined from the lane line region detection result and the endpoint detection result.
In an implementation manner, the method for detecting a dashed lane line provided in the embodiments of the present application is performed by a detection network for dashed lane lines. The detection network may include a feature extraction network, a region prediction network and an endpoint prediction network, where the feature extraction network is configured to perform feature extraction on the road image to obtain the feature map of the road image, the region prediction network is configured to determine the lane line region in the road image according to the feature map, and the endpoint prediction network is configured to determine the endpoint pixel points in the road image according to the feature map.
The structure and training of the detection network of the dashed lane lines will be described first, and then how the detection network is used to detect the dashed lane lines will be described.
Fig. 4 illustrates a detection network of a dashed lane line. The detection network may include: a feature extraction network 41, a regional prediction network 42, and an endpoint prediction network 43.
The feature extraction network 41 is configured to extract image features from an input road image, and obtain a feature map of the road image.
The regional prediction network 42 is configured to predict a lane line region according to the feature map of the road image, that is, predict the probability that each pixel in the road image belongs to a pixel in the lane line region through the regional prediction network 42. When the detection network is not trained yet, there may be a certain prediction error, for example, a pixel point not located on the lane line region is also predicted as a pixel point on the lane line region.
And the endpoint prediction network 43 is configured to predict, according to the feature map of the road image, output endpoint pixel points, that is, predict the probability that each pixel point in the road image is an endpoint pixel point.
In practical implementation, the network prediction may output a confidence that a pixel belongs to a certain category. For example, the area prediction network may predict the confidence that each pixel point in the output road image belongs to the lane line area. And the endpoint prediction network can predict the confidence that each pixel point in the output road sample image belongs to the endpoint pixel point.
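For illustration only, the three-part structure of fig. 4 can be sketched roughly as follows; the layer sizes, the PyTorch framework and the use of 1x1 convolution heads with sigmoid outputs are assumptions and do not represent the actual architecture of the embodiments.

```python
import torch
import torch.nn as nn

class DashedLaneDetector(nn.Module):
    """Rough sketch of feature extraction network + region prediction network
    + endpoint prediction network (illustrative layer sizes)."""

    def __init__(self, in_ch: int = 3, feat_ch: int = 64):
        super().__init__()
        # Feature extraction network: stride-2 convolutions down-sample the road
        # image; a transposed convolution up-samples the high-dimensional map.
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=4),
        )
        # Region prediction network: per-pixel confidence of belonging to a lane line region.
        self.region_head = nn.Sequential(nn.Conv2d(feat_ch, 1, 1), nn.Sigmoid())
        # Endpoint prediction network: per-pixel confidence of being an endpoint pixel point.
        self.endpoint_head = nn.Sequential(nn.Conv2d(feat_ch, 1, 1), nn.Sigmoid())

    def forward(self, road_image: torch.Tensor):
        feature_map = self.feature_extractor(road_image)
        return self.region_head(feature_map), self.endpoint_head(feature_map)
```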
Fig. 5 illustrates a training method of a detection network for dashed lane lines, that is, a method for training a neural network having the structure shown in fig. 4, according to at least one embodiment of the present disclosure. The embodiments of the present disclosure do not limit the network structure of the feature extraction network; in the following description, an FCN (fully convolutional network) is taken as an example.
In the process of training the detection network illustrated in fig. 4, the image received by the feature extraction network 41 is a road sample image. The road sample image includes a dashed lane line, and the road sample image is further provided with the following two types of label information:

One type of label information is the lane line region label information of the dashed lane line in the road sample image, which identifies which pixel points in the road sample image belong to the lane line region.

The other type of label information is the endpoint pixel point label information of the dashed lane line in the road sample image. The endpoint pixel point label information identifies the pixel points at the two endpoints of each segment of the dashed lane line as endpoint pixel points. For example, one segment of the dashed lane line has two endpoints; a preset region range is marked around each of them, and the pixel points within that range are labeled as endpoint pixel points.
As shown in fig. 5, the method may include the following processes:
in step 500, an input road sample image is acquired, where the road sample image includes a to-be-detected dashed lane line.
In this step, in the training stage of the detection network, the image input into the detection network may be referred to as a road sample image, and the detection network is trained by using a training set including a large number of road sample images.
Wherein each road sample image includes a dashed lane line.
In step 502, image features of the road sample image are extracted through a feature extraction network to obtain a feature map.
Fig. 6 shows a process of processing an input road sample image by the detection network when the feature extraction network is the FCN.
For example, after down-sampling is performed by several convolutions, a high-dimensional feature map of the road sample image is obtained, which may be denoted conv1. The high-dimensional feature map conv1 is then deconvolved and up-sampled to obtain the image feature us_conv1. The image feature us_conv1 is then input to the region prediction network and the endpoint prediction network.
In step 504, the feature map (e.g., the image feature us_conv1) is input into the region prediction network and the endpoint prediction network respectively; the lane line region in the road sample image is predicted and output by the region prediction network, and the endpoint pixel points in the road sample image are predicted and output by the endpoint prediction network.
For example, the confidence that each pixel point in the output road sample image belongs to the lane line region can be predicted through a region prediction network; and predicting the confidence coefficient of each pixel point in the output road sample image belonging to the endpoint pixel point through an endpoint prediction network.
In step 506, network parameters of the feature extraction network, the area prediction network, and the endpoint prediction network are adjusted.
In this step, a first network loss may be determined according to a difference between the determined lane line region in the road sample image and the marked lane line region in the road sample image; and adjusting the network parameters of the feature extraction network and the network parameters of the regional prediction network according to the first network loss. Determining a second network loss according to the difference between the determined endpoint pixel point in the road sample image and the marked endpoint pixel point in the road sample image; and adjusting the network parameters of the endpoint prediction network and the network parameters of the feature extraction network according to the second network loss.
Specifically, the network parameters of the detection network can be adjusted through back propagation. Network training ends when an iteration ending condition is reached; the ending condition may be that the number of iterations reaches a certain value or that the loss value falls below a certain threshold.
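A hedged sketch of one training iteration is given below; binary cross-entropy is assumed for both the first network loss and the second network loss, since the embodiments do not name a specific loss function, and the `DashedLaneDetector` sketch above is assumed as the model.

```python
import torch.nn.functional as F

def training_step(detector, optimizer, road_sample, region_label, endpoint_label):
    """One illustrative training iteration over a road sample image."""
    region_conf, endpoint_conf = detector(road_sample)

    first_loss = F.binary_cross_entropy(region_conf, region_label)        # lane line region branch
    second_loss = F.binary_cross_entropy(endpoint_conf, endpoint_label)   # endpoint branch

    optimizer.zero_grad()
    # Back-propagating the summed losses adjusts the shared feature extraction
    # network together with both prediction branches.
    (first_loss + second_loss).backward()
    optimizer.step()
    return first_loss.item(), second_loss.item()
```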
In another embodiment, the number of visible segments of the dashed lane line in one road sample image is small, that is, the proportion of positive samples in the image is low. To improve the accuracy of detection network training, the proportion of positive samples can be increased. For example, the endpoint pixel points labeled in the road sample image include not only the pixel points of the actual endpoints of the dashed lane lines in the road sample image but also the pixel points adjacent to those actual endpoints, so that the endpoint range of the dashed lane line is expanded, more pixel points are marked as endpoints, and the proportion of positive samples is increased.
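The expansion of the labeled endpoint range can be sketched as a simple morphological dilation of the actual endpoint mask; the dilation radius of 2 pixels below is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def expand_endpoint_labels(actual_endpoint_mask: np.ndarray, radius: int = 2) -> np.ndarray:
    """Also mark the neighbors of each actual endpoint as positive endpoint
    pixel points, raising the proportion of positive samples as described above."""
    structure = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
    return binary_dilation(actual_endpoint_mask, structure=structure)
```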
After the detection network is trained, the detection network can be used to detect the dashed lane lines.
Detecting dashed lane lines by applying the detection network
Fig. 7 illustrates a method for detecting a dashed lane line according to an embodiment of the present application. As shown in fig. 7, this embodiment describes the detection of a dashed lane line using a trained detection network as an example, and the method may include the following steps:
in step 700, receiving a road image to be detected;
in this step, the road image is an image of a road on which the vehicle is traveling, which is acquired by the intelligent driving device.
In step 702, the image features of the road image are extracted through a feature extraction network, so as to obtain a feature map of the road image.
In this embodiment, the detection network including the feature extraction network, the area prediction network, and the endpoint prediction network is obtained by training according to the training method of any of the embodiments described above in the present application.
The feature extraction network can extract the image features of the road image through operations such as multiple convolution, deconvolution and the like.
In step 704, the image features are respectively input into a region prediction network and an endpoint prediction network, a lane line region in the road image is predicted and output through the region prediction network, and endpoint pixel points in the road image are predicted and output through the endpoint prediction network.
The image features obtained in step 702 may be input into two parallel branch networks, namely, a regional prediction network and an endpoint prediction network. Through the regional prediction network, the prediction result of the lane line region in the road image can be predicted and output, and the prediction result comprises the confidence coefficient of each pixel point belonging to the lane line region. And predicting the prediction result of the endpoint pixel points in the output road image through the endpoint prediction network, wherein the prediction result comprises the confidence coefficient of each pixel point belonging to the endpoint pixel point.
In one example, on the basis of the prediction result, the lane line region may be determined from the pixel points whose confidence is higher than a region threshold. For example, a threshold, which may be referred to as the region threshold, is set; if the confidence of a pixel point belonging to the lane line region is higher than the region threshold, the pixel point is retained as part of the lane line region; otherwise, if the confidence of a pixel point belonging to the lane line region is lower than the region threshold, the pixel point is considered not to belong to the lane line region. When the confidence of a pixel point is exactly equal to the region threshold, whether it belongs to the lane line region may be decided either way; for example, a pixel point whose confidence equals the region threshold may be treated as not belonging to the lane line region.
In one example, a threshold, which may be referred to as the endpoint threshold, may also be set. On the basis of the prediction result, if the confidence of a pixel point belonging to an endpoint is lower than the endpoint threshold, the pixel point may be considered not to belong to an endpoint; that is, pixel points whose confidence is lower than the endpoint threshold are removed from the endpoint prediction result. If the confidence of a pixel point belonging to an endpoint is higher than the endpoint threshold, the pixel point may be considered an endpoint pixel point. Similarly, a pixel point whose confidence is exactly equal to the endpoint threshold may be decided either way; for example, it may be treated as belonging to an endpoint.
In step 706, at least one endpoint pixel point located in the lane line region is obtained according to the lane line region.
In this step, after the lane line region and the endpoint pixel points are predicted and output through the region prediction network and the endpoint prediction network, the two prediction results may be combined, and only the endpoint pixel points located in the lane line region are reserved. Of course, if a predicted endpoint pixel is not within the lane line region at all, it is unlikely to be the endpoint of the dashed lane line.
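Combining the two branch outputs amounts to a simple intersection of masks; a minimal sketch (with assumed boolean mask inputs) follows.

```python
import numpy as np

def keep_endpoints_in_region(endpoint_mask: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Retain only endpoint pixel points that also fall inside the predicted
    lane line region (both inputs are boolean H x W masks, illustrative names)."""
    return endpoint_mask & region_mask
```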
Optionally, in order to make the prediction result of the endpoint pixel point more accurate, the predicted endpoint pixel point may be further screened. If the confidence coefficient of at least one adjacent pixel belonging to the endpoint in the adjacent pixel set of the endpoint pixel is higher than the endpoint threshold, the endpoint pixel is reserved. Otherwise, if the confidence belonging to the endpoint is lower than the endpoint threshold in all the adjacent pixels of the endpoint pixel, the endpoint pixel is an isolated point, and in the actual prediction result, the endpoint of a dashed lane line comprises a plurality of adjacent pixels, and the isolated point is unlikely to be the endpoint and can be eliminated. For the screening and exclusion of the isolated points, reference may be made to the foregoing embodiments of the present application, which are not described herein again.
In step 708, endpoint coordinates of the dashed lane lines are determined based on the at least one endpoint pixel.
In step 710, a dashed lane line is obtained based on the endpoint coordinates located in the same lane line region.
The method for determining the endpoint coordinates of the dashed lane lines according to the endpoint pixel points, and screening and rejecting the endpoints may refer to the foregoing embodiments, and the specific process is not described herein again.
After the endpoints of the dashed lane line are detected, they can be used to position an intelligent driving device. The intelligent driving device includes various intelligent vehicles, such as an autonomous vehicle or a vehicle equipped with a driver assistance system. In addition, the detected dashed lane line and its endpoint coordinates can be used to produce a high-precision map.
Therefore, after detecting the dashed lane line, the method for detecting a dashed lane line according to the embodiments of the present application may correct the positioning information of the intelligent vehicle on the road shown in the road image according to the endpoints of the detected dashed lane line. For example:
on one hand, a first distance is determined by an image ranging method according to the endpoints of the detected dashed lane line, where the first distance represents the distance between a target endpoint of the detected dashed lane line and the intelligent vehicle. The intelligent vehicle may include an autonomous vehicle or a vehicle equipped with a driver assistance system.
In one example, assuming the intelligent vehicle is traveling, the target endpoint may be the nearest dashed-lane-line endpoint ahead of the intelligent vehicle; for example, if the intelligent vehicle would reach that endpoint after traveling another 10 meters, the first distance is 10 meters.
On the other hand, a second distance is determined according to the positioning longitude and latitude of the intelligent vehicle and the longitude and latitude of the target endpoint in the driving assistance map used by the intelligent vehicle, where the second distance represents the distance between the target endpoint, as determined from the driving assistance map, and the intelligent vehicle.
Finally, the positioning longitude and latitude of the intelligent vehicle are corrected according to the error between the first distance and the second distance. For example, if the first distance is 10 meters and the second distance is 8 meters, there is an error of 2 meters between the two distances, and the positioning longitude and latitude of the intelligent vehicle can be corrected accordingly.
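The correction idea can be sketched as below. The haversine computation of the second distance and the along-heading shift by the distance error are illustrative assumptions; the present application only specifies that the positioning longitude and latitude are corrected according to the error between the first distance and the second distance.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def correct_position(vehicle_lat, vehicle_lon, heading_deg,
                     endpoint_lat, endpoint_lon, first_distance_m):
    """Shift the vehicle's positioning along its heading by the ranging error (assumed scheme)."""
    # Second distance: map-based distance between the positioning and the target endpoint.
    second_distance_m = haversine_m(vehicle_lat, vehicle_lon, endpoint_lat, endpoint_lon)
    error_m = second_distance_m - first_distance_m   # e.g. 8 m - 10 m = -2 m
    # Move the positioning forward (or backward) along the heading by the error.
    d_lat = (error_m * math.cos(math.radians(heading_deg))) / EARTH_RADIUS_M
    d_lon = (error_m * math.sin(math.radians(heading_deg))) / (
        EARTH_RADIUS_M * math.cos(math.radians(vehicle_lat)))
    return vehicle_lat + math.degrees(d_lat), vehicle_lon + math.degrees(d_lon)
```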
Fig. 8 provides a detection apparatus for a dashed lane line, which, as shown in fig. 8, may include: a feature extraction module 81, a feature processing module 82 and a lane line determination module 83.
The feature extraction module 81 is configured to perform feature extraction on a road image to be detected to obtain a feature map of the road image;
the feature processing module 82 is configured to determine a lane line region in the road image and endpoint pixel points in the road image according to the feature map; the endpoint pixel points are pixel points that may be endpoints of dashed lane lines in the road image;
and a lane line determining module 83, configured to determine a dashed lane line in the road image based on the lane line region and the endpoint pixel point.
In one example, as shown in fig. 9, the feature processing module 82 includes:
the region determining submodule 821 is configured to determine, according to the feature map, a region confidence of each pixel point in the road image, where the region confidence is a confidence that each pixel point in the road image belongs to a lane line region; and determining the region including the pixel points with the region confidence coefficient reaching the region threshold value as the lane line region.
The endpoint pixel submodule 822 is configured to: determine an endpoint confidence of each pixel point in the road image according to the feature map, where the endpoint confidence is the confidence that a pixel point in the road image belongs to an endpoint of a dashed lane line; determine whether the endpoint confidence of each pixel point reaches an endpoint threshold; and determine the pixel points whose endpoint confidence reaches the endpoint threshold as the endpoint pixel points.
In one example, the endpoint pixel submodule 822 is further configured to: after determining whether the endpoint confidence of each pixel point reaches the endpoint threshold, determine that, in the point set formed by the adjacent pixel points of a pixel point whose endpoint confidence reaches the endpoint threshold, there exists at least one adjacent pixel point whose endpoint confidence is higher than the endpoint threshold, and determine that pixel point as an endpoint pixel point.
In one example, the endpoint pixel submodule 822 is further configured to: after determining whether the endpoint confidence of each pixel point reaches the endpoint threshold, determine that, in the point set formed by the adjacent pixel points of a pixel point whose endpoint confidence reaches the endpoint threshold, there is no adjacent pixel point whose endpoint confidence is higher than the endpoint threshold, and determine that this pixel point is not an endpoint pixel point.
In one example, the lane line determining module 83 is specifically configured to: determine an endpoint coordinate in the road image according to the endpoint pixel points that are in each endpoint pixel point set and located within the lane line region; and determine a dashed lane line in the road image according to the endpoint coordinates in the road image.
In one example, when determining an endpoint coordinate in the road image according to the endpoint pixel points that are in each endpoint pixel point set and located within the lane line region, the lane line determining module 83 is configured to: perform a weighted average of the coordinates of the endpoint pixel points in one endpoint pixel point set that are located within the lane line region, to obtain the endpoint coordinate of one endpoint in the road image.
In one example, the lane line determining module 83 is further configured to: after determining the endpoint coordinates in the road image according to the endpoint pixel points that are in each endpoint pixel point set and located within the lane line region, determine the confidence of an endpoint in the road image according to the endpoint confidences of the endpoint pixel points in one endpoint pixel point set that are located within the lane line region; and remove, from the determined endpoints in the road image, the endpoints whose confidence is lower than a preset threshold.
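A minimal sketch of the weighted-average and confidence computations just described follows. Taking the endpoint confidence as the mean of the pixel confidences, and using the pixel confidences themselves as weights, are assumptions; the present application only requires a weighted average of the coordinates and a confidence derived from the pixel confidences.

```python
import numpy as np

def endpoint_from_pixel_set(pixel_coords: np.ndarray,    # (N, 2) array of (row, col) coordinates
                            pixel_conf: np.ndarray,       # (N,) endpoint confidences of the same pixels
                            confidence_threshold: float = 0.5):
    """Confidence-weighted average of an endpoint pixel set located within the lane line region."""
    weights = pixel_conf / pixel_conf.sum()
    endpoint_coord = (pixel_coords * weights[:, None]).sum(axis=0)   # weighted-average coordinate
    endpoint_confidence = float(pixel_conf.mean())                   # assumed aggregation of confidences
    keep = endpoint_confidence >= confidence_threshold               # drop low-confidence endpoints
    return endpoint_coord, endpoint_confidence, keep
```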
In one example, the lane line determining module 83 is further configured to: after determining the endpoint coordinates in the road image according to the endpoint pixel points that are in each endpoint pixel point set and located within the lane line region, and before determining a dashed lane line in the road image according to the endpoint coordinates in the road image, determine a near endpoint and a far endpoint among the endpoints in the road image according to the endpoint coordinates in the road image; and determine a dashed lane line in the road image according to the lane line region and the near endpoint and far endpoint among the endpoints in the road image.
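One way the endpoints in the same lane line region could be paired into the near and far endpoints of a dash is sketched below. Grouping by connected component with scipy.ndimage and taking the endpoint with the larger image row as the near endpoint are assumptions for illustration, not the prescribed procedure.

```python
import numpy as np
from scipy import ndimage

def dashes_from_endpoints(lane_region_mask: np.ndarray,
                          endpoint_coords):
    """Return (near_endpoint, far_endpoint) pairs, one per lane line region containing two endpoints."""
    labels, num = ndimage.label(lane_region_mask)          # connected lane line regions
    per_region = {}
    for y, x in endpoint_coords:
        region_id = labels[int(round(y)), int(round(x))]
        if region_id > 0:                                   # endpoint lies inside a region
            per_region.setdefault(region_id, []).append((y, x))
    dashes = []
    for pts in per_region.values():
        if len(pts) >= 2:
            pts = sorted(pts, key=lambda p: p[0])           # sort by image row
            far, near = pts[0], pts[-1]                     # smaller row = farther from the camera
            dashes.append((near, far))
    return dashes
```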
In an example, the feature extraction module 81 is specifically configured to perform feature extraction on a road image to be detected through a feature extraction network to obtain a feature map of the road image;
the feature processing module 82 is specifically configured to: determine a lane line region in the road image according to the feature map through a region prediction network, and determine the endpoint pixel points in the road image according to the feature map through an endpoint prediction network.
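A hedged PyTorch sketch of the layout implied by these modules is given below: a shared feature extraction network feeds a region prediction head and an endpoint prediction head, each producing a per-pixel confidence map. The backbone depth, channel counts, and sigmoid outputs are placeholders, not the patented architecture.

```python
import torch
import torch.nn as nn

class DashedLaneLineNet(nn.Module):
    def __init__(self, in_channels: int = 3, feat_channels: int = 64):
        super().__init__()
        # Feature extraction network (tiny stand-in for a real backbone).
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Region prediction network: per-pixel lane-line-region confidence.
        self.region_head = nn.Conv2d(feat_channels, 1, 1)
        # Endpoint prediction network: per-pixel endpoint confidence.
        self.endpoint_head = nn.Conv2d(feat_channels, 1, 1)

    def forward(self, road_image: torch.Tensor):
        feature_map = self.feature_extractor(road_image)
        region_conf = torch.sigmoid(self.region_head(feature_map))
        endpoint_conf = torch.sigmoid(self.endpoint_head(feature_map))
        return region_conf, endpoint_conf
```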
In one example, the apparatus further comprises a network training module for training the feature extraction network, the region prediction network, and the endpoint prediction network by: performing feature extraction on a road sample image by using the feature extraction network to obtain a feature map of the road sample image, where the road sample image includes a dashed lane line; determining a lane line region in the road sample image according to the feature map of the road sample image by using the region prediction network; determining endpoint pixel points in the road sample image according to the feature map of the road sample image by using the endpoint prediction network; determining a first network loss according to the difference between the determined lane line region in the road sample image and the labeled lane line region in the road sample image; adjusting the network parameters of the feature extraction network and the network parameters of the region prediction network according to the first network loss; determining a second network loss according to the difference between the determined endpoint pixel points in the road sample image and the labeled endpoint pixel points in the road sample image; and adjusting the network parameters of the endpoint prediction network and the network parameters of the feature extraction network according to the second network loss.
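A training-step sketch under this two-loss scheme, reusing the DashedLaneLineNet sketch above, is shown below. Binary cross-entropy, the choice of optimiser, and summing the two losses into a single backward pass are assumptions; the present application only requires that the first network loss adjust the feature extraction and region prediction networks and the second network loss adjust the feature extraction and endpoint prediction networks.

```python
import torch
import torch.nn as nn

def train_step(model: "DashedLaneLineNet",
               optimizer: torch.optim.Optimizer,
               road_sample: torch.Tensor,        # B x 3 x H x W road sample image batch
               region_label: torch.Tensor,       # B x 1 x H x W labeled lane line region (0/1 floats)
               endpoint_label: torch.Tensor):    # B x 1 x H x W labeled endpoint pixels (0/1 floats)
    criterion = nn.BCELoss()
    region_conf, endpoint_conf = model(road_sample)

    first_network_loss = criterion(region_conf, region_label)        # region prediction vs. label
    second_network_loss = criterion(endpoint_conf, endpoint_label)   # endpoint prediction vs. label

    optimizer.zero_grad()
    # Both losses back-propagate into the shared feature extraction network;
    # each also updates its own prediction head. Summing them is one possible
    # realisation of the adjustments described above.
    (first_network_loss + second_network_loss).backward()
    optimizer.step()
    return first_network_loss.item(), second_network_loss.item()
```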
In one example, the labeled endpoint pixel points in the road sample image include: the pixel points of the actual endpoints of the dashed lane lines in the road sample image and the adjacent pixel points of those actual endpoints.
In one example, as shown in fig. 10, the apparatus further comprises a positioning correction module 84, configured to correct the positioning information of the intelligent vehicle on the road shown in the road image according to the detected endpoints of the dashed lane line.
In one example, the positioning correction module 84 is specifically configured to: determine a first distance by an image ranging method according to the detected endpoints of the dashed lane line, where the first distance represents the distance between a target endpoint of the detected dashed lane line and the intelligent vehicle; determine a second distance according to the positioning longitude and latitude of the intelligent vehicle and the longitude and latitude of the target endpoint in the driving assistance map used by the intelligent vehicle, where the second distance represents the distance between the target endpoint determined from the driving assistance map and the intelligent vehicle; and correct the positioning longitude and latitude of the intelligent vehicle according to the error between the first distance and the second distance.
The present disclosure also provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for detecting a dashed lane line according to any of the embodiments of the present disclosure.
The present disclosure also provides an electronic device, which includes a memory and a processor, where the memory is used to store computer instructions executable on the processor, and the processor is used to implement the method for detecting a dashed lane line according to any embodiment of the present disclosure when executing the computer instructions.
One skilled in the art will appreciate that one or more embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program may be stored, where the computer program, when executed by a processor, implements the steps of the method for detecting a dashed lane line described in any of the embodiments of the present disclosure. Herein, "and/or" means having at least one of the two items it connects; for example, "A and/or B" includes three schemes: A, B, and "A and B".
The embodiments in the disclosure are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the data processing apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment.
The foregoing description of specific embodiments of the present disclosure has been described. Other embodiments are within the scope of the following claims. In some cases, the acts or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Embodiments of the subject matter and functional operations described in this disclosure may be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this disclosure and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this disclosure can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this disclosure can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Further, the computer may be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Although this disclosure contains many specific implementation details, these should not be construed as limiting the scope of any disclosure or of what may be claimed, but rather as merely describing features of particular embodiments of the disclosure. Certain features that are described in this disclosure in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure, and is not intended to limit the scope of the present disclosure, which is to be construed as being limited by the appended claims.

Claims (10)

1. A method for detecting a dashed lane line, the method comprising:
carrying out feature extraction on a road image to be detected to obtain a feature map of the road image;
determining a lane line region in the road image and endpoint pixel points in the road image according to the feature map; the endpoint pixel points are pixel points that may be endpoints of dashed lane lines in the road image;
and determining a dashed lane line in the road image based on the lane line region and the endpoint pixel points.
2. The method of claim 1, wherein determining a lane line region in the road image from the feature map comprises:
determining the region confidence of each pixel point in the road image according to the feature map, wherein the region confidence is the confidence that each pixel point in the road image belongs to a lane line region;
and determining a region comprising the pixel points whose region confidence reaches a region threshold as the lane line region.
3. The method according to claim 1 or 2, wherein the determining an endpoint pixel point in the road image according to the feature map comprises:
determining an endpoint confidence of each pixel point in the road image according to the feature map, wherein the endpoint confidence is the confidence that a pixel point in the road image belongs to an endpoint of a dashed lane line;
determining whether the endpoint confidence of each pixel point reaches an endpoint threshold value;
and determining the pixel point with the endpoint confidence coefficient reaching the endpoint threshold value as the endpoint pixel point.
4. The method of claim 1, wherein the endpoint pixel points within a predetermined area form an endpoint pixel point set;
determining a dashed lane line in the road image based on the lane line region and the endpoint pixel points includes:
determining an endpoint coordinate in the road image according to the endpoint pixel points that are in each endpoint pixel point set and located within the lane line region;
and determining a dashed lane line in the road image according to the endpoint coordinates in the road image.
5. The method of claim 1,
the feature extraction performed on the road image to be detected to obtain the feature map of the road image is performed by a feature extraction network;
the determining of the lane line region in the road image according to the feature map is performed by a region prediction network;
and the determining of the endpoint pixel points in the road image according to the feature map is performed by an endpoint prediction network.
6. The method of claim 1, wherein after determining the dashed lane line in the road image, the method further comprises:
correcting the positioning information of the intelligent vehicle on the road shown by the road image according to the detected endpoints of the dashed lane line.
7. A detection apparatus for a dashed lane line, characterized by comprising:
a feature extraction module, used for performing feature extraction on a road image to be detected to obtain a feature map of the road image;
a feature processing module, used for determining a lane line region in the road image and endpoint pixel points in the road image according to the feature map, wherein the endpoint pixel points are pixel points that may be endpoints of dashed lane lines in the road image;
and a lane line determining module, used for determining a dashed lane line in the road image based on the lane line region and the endpoint pixel points.
8. The apparatus of claim 7,
the feature extraction module is specifically used for extracting features of a road image to be detected through a feature extraction network to obtain a feature map of the road image;
the feature processing module is specifically configured to: and determining a lane line region in the road image according to the feature map through a region prediction network, and determining an endpoint pixel point in the road image according to the feature map through an endpoint prediction network.
9. An electronic device, comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of any one of claims 1 to 6 when executing the computer instructions.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 6.
CN201910944245.2A 2019-09-30 2019-09-30 Method, device and equipment for detecting dotted lane line Active CN110688971B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201910944245.2A CN110688971B (en) 2019-09-30 2019-09-30 Method, device and equipment for detecting dotted lane line
JP2021571821A JP2022535839A (en) 2019-09-30 2020-09-23 Broken lane detection method, device and electronic device
KR1020217031171A KR20210130222A (en) 2019-09-30 2020-09-23 Dotted line detection method, apparatus and electronic device
PCT/CN2020/117188 WO2021063228A1 (en) 2019-09-30 2020-09-23 Dashed lane line detection method and device, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910944245.2A CN110688971B (en) 2019-09-30 2019-09-30 Method, device and equipment for detecting dotted lane line

Publications (2)

Publication Number Publication Date
CN110688971A true CN110688971A (en) 2020-01-14
CN110688971B CN110688971B (en) 2022-06-24

Family

ID=69111427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910944245.2A Active CN110688971B (en) 2019-09-30 2019-09-30 Method, device and equipment for detecting dotted lane line

Country Status (4)

Country Link
JP (1) JP2022535839A (en)
KR (1) KR20210130222A (en)
CN (1) CN110688971B (en)
WO (1) WO2021063228A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291681A (en) * 2020-02-07 2020-06-16 北京百度网讯科技有限公司 Method, device and equipment for detecting lane line change information
CN111460073A (en) * 2020-04-01 2020-07-28 北京百度网讯科技有限公司 Lane line detection method, apparatus, device, and storage medium
CN111707277A (en) * 2020-05-22 2020-09-25 上海商汤临港智能科技有限公司 Method, device and medium for acquiring road semantic information
CN112434591A (en) * 2020-11-19 2021-03-02 腾讯科技(深圳)有限公司 Lane line determination method and device
CN112528864A (en) * 2020-12-14 2021-03-19 北京百度网讯科技有限公司 Model generation method and device, electronic equipment and storage medium
WO2021063228A1 (en) * 2019-09-30 2021-04-08 上海商汤临港智能科技有限公司 Dashed lane line detection method and device, and electronic apparatus
CN113739811A (en) * 2021-09-03 2021-12-03 阿波罗智能技术(北京)有限公司 Method and device for training key point detection model and generating high-precision map lane line
CN116994145A (en) * 2023-09-05 2023-11-03 腾讯科技(深圳)有限公司 Lane change point identification method and device, storage medium and computer equipment

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656529B (en) * 2021-09-16 2023-01-17 北京百度网讯科技有限公司 Road precision determination method and device and electronic equipment
CN114136327B (en) * 2021-11-22 2023-08-01 武汉中海庭数据技术有限公司 Automatic checking method and system for recall ratio of broken line segment
CN114782549B (en) * 2022-04-22 2023-11-24 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification
CN115082888B (en) * 2022-08-18 2022-10-25 北京轻舟智航智能技术有限公司 Lane line detection method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583393A (en) * 2018-12-05 2019-04-05 宽凳(北京)科技有限公司 A kind of lane line endpoints recognition methods and device, equipment, medium
CN109960959A (en) * 2017-12-14 2019-07-02 百度在线网络技术(北京)有限公司 Method and apparatus for handling image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6193819B2 (en) * 2014-07-11 2017-09-06 株式会社Soken Traveling line recognition device
CN108090401B (en) * 2016-11-23 2021-12-14 株式会社理光 Line detection method and line detection apparatus
CN110688971B (en) * 2019-09-30 2022-06-24 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960959A (en) * 2017-12-14 2019-07-02 百度在线网络技术(北京)有限公司 Method and apparatus for handling image
CN109583393A (en) * 2018-12-05 2019-04-05 宽凳(北京)科技有限公司 A kind of lane line endpoints recognition methods and device, equipment, medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
EUN SEOK JANG等: "Lane Endpoint Detection and Position Accuracy Evaluation for Sensor Fusion-Based Vehicle Localization on Highways", 《SENSORS 2018》 *
GUOXIANG QU 等: "StripNet: Towards Topology Consistent Strip Structure Segmentation", 《MM’18》 *
TOAN MINH HOANG等: "Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor", 《SENSORS 2016》 *
TOAN MINH HOANG等: "Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor", 《SENSORS 2017》 *
XINGANG PAN 等: "Spatial as Deep: Spatial CNN for Traffic Scene Understanding", 《HTTPS://ARXIV.ORG/ABS/1712.06080》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021063228A1 (en) * 2019-09-30 2021-04-08 上海商汤临港智能科技有限公司 Dashed lane line detection method and device, and electronic apparatus
CN111291681A (en) * 2020-02-07 2020-06-16 北京百度网讯科技有限公司 Method, device and equipment for detecting lane line change information
CN111291681B (en) * 2020-02-07 2023-10-20 北京百度网讯科技有限公司 Method, device and equipment for detecting lane change information
CN111460073A (en) * 2020-04-01 2020-07-28 北京百度网讯科技有限公司 Lane line detection method, apparatus, device, and storage medium
CN111460073B (en) * 2020-04-01 2023-10-20 北京百度网讯科技有限公司 Lane line detection method, device, equipment and storage medium
CN111707277A (en) * 2020-05-22 2020-09-25 上海商汤临港智能科技有限公司 Method, device and medium for acquiring road semantic information
CN112434591A (en) * 2020-11-19 2021-03-02 腾讯科技(深圳)有限公司 Lane line determination method and device
CN112528864A (en) * 2020-12-14 2021-03-19 北京百度网讯科技有限公司 Model generation method and device, electronic equipment and storage medium
CN113739811A (en) * 2021-09-03 2021-12-03 阿波罗智能技术(北京)有限公司 Method and device for training key point detection model and generating high-precision map lane line
CN113739811B (en) * 2021-09-03 2024-06-11 阿波罗智能技术(北京)有限公司 Method and equipment for training key point detection model and generating high-precision map lane line
CN116994145A (en) * 2023-09-05 2023-11-03 腾讯科技(深圳)有限公司 Lane change point identification method and device, storage medium and computer equipment
CN116994145B (en) * 2023-09-05 2024-06-11 腾讯科技(深圳)有限公司 Lane change point identification method and device, storage medium and computer equipment

Also Published As

Publication number Publication date
JP2022535839A (en) 2022-08-10
WO2021063228A1 (en) 2021-04-08
KR20210130222A (en) 2021-10-29
CN110688971B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN110688971B (en) Method, device and equipment for detecting dotted lane line
CN109099929B (en) Intelligent vehicle positioning device and method based on scene fingerprints
CN111539907B (en) Image processing method and device for target detection
CN114419165B (en) Camera external parameter correction method, camera external parameter correction device, electronic equipment and storage medium
JP6456682B2 (en) Traveling line recognition device
KR20150112656A (en) Method to calibrate camera and apparatus therefor
CN113221750A (en) Vehicle tracking method, device, equipment and storage medium
CN111950498A (en) Lane line detection method and device based on end-to-end instance segmentation
US20200340816A1 (en) Hybrid positioning system with scene detection
CN112001403A (en) Image contour detection method and system
CN114842449A (en) Target detection method, electronic device, medium, and vehicle
CN115900682A (en) Method for improving road topology through sequence estimation and anchor point detection
KR20200057513A (en) Vehicle location estimation apparatus and method
US11669998B2 (en) Method and system for learning a neural network to determine a pose of a vehicle in an environment
US11512969B2 (en) Method for ascertaining in a backend, and providing for a vehicle, a data record, describing a landmark, for the vehicle to determine its own position
CN114220087A (en) License plate detection method, license plate detector and related equipment
CN108256510B (en) Road edge line detection method and device and terminal
US11815362B2 (en) Map data generation apparatus
KR20200084943A (en) Apparatus and method for estimating self-location of a vehicle
CN111460854A (en) Remote target detection method, device and system
Sukumar et al. A Robust Vision-based Lane Detection using RANSAC Algorithm
JP6354186B2 (en) Information processing apparatus, blur condition calculation method, and program
CN111488771B (en) OCR hooking method, device and equipment
CN114373081A (en) Image processing method and device, electronic device and storage medium
US20240085210A1 (en) Hill climbing algorithm for constructing a lane line map

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 200232 room 01, 2nd floor, No. 29 and 30, Lane 1775, Qiushan Road, Nicheng Town, Pudong New Area, Shanghai

Patentee after: Shanghai Lingang Jueying Intelligent Technology Co.,Ltd.

Address before: 200232 room 01, 2nd floor, No. 29 and 30, Lane 1775, Qiushan Road, Nicheng Town, Pudong New Area, Shanghai

Patentee before: Shanghai Shangtang Lingang Intelligent Technology Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Detection methods, devices, and equipment for dashed lane lines

Effective date of registration: 20230914

Granted publication date: 20220624

Pledgee: Bank of Shanghai Limited by Share Ltd. Pudong branch

Pledgor: Shanghai Lingang Jueying Intelligent Technology Co.,Ltd.

Registration number: Y2023310000549