WO2021063228A1 - Dashed lane line detection method, apparatus, and electronic device - Google Patents
Dashed lane line detection method, apparatus, and electronic device
- Publication number
- WO2021063228A1 (PCT/CN2020/117188)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- endpoint
- lane line
- pixel
- road image
- road
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo or light sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Definitions
- The present disclosure relates to machine learning technology, and in particular to methods, apparatuses, and electronic devices for detecting dashed lane lines.
- In related approaches, the lane lines in an image can be extracted using hand-crafted features and detection algorithms such as the Hough transform.
- With such approaches, a dashed lane line may be incorrectly detected as a single continuous lane line.
- In view of this, the present disclosure provides a method, apparatus, and electronic device for detecting dashed lane lines.
- A method for detecting dashed lane lines includes: performing feature extraction on a road image to be detected to obtain a feature map of the road image; determining, according to the feature map, the lane line area in the road image and the endpoint pixels in the road image, where the endpoint pixels are pixels that may belong to the endpoints of a dashed lane line in the road image; and determining the dashed lane line in the road image based on the lane line area and the endpoint pixels.
- Determining the lane line area in the road image according to the feature map includes: determining, according to the feature map, a region confidence for each pixel in the road image, where the region confidence indicates the confidence that the pixel belongs to the lane line area; and determining the area containing the pixels whose region confidence is not lower than an area threshold as the lane line area.
- Determining the endpoint pixels in the road image according to the feature map includes: determining, according to the feature map, an endpoint confidence for each pixel in the road image, where the endpoint confidence represents the confidence that the pixel belongs to an endpoint of the dashed lane line; determining whether the endpoint confidence of each pixel is not lower than an endpoint threshold; and determining at least one pixel whose endpoint confidence is not lower than the endpoint threshold as an endpoint pixel.
- Determining the at least one pixel whose endpoint confidence is not lower than the endpoint threshold as an endpoint pixel further includes: for each such pixel, if at least one of its neighboring pixels also has an endpoint confidence not lower than the endpoint threshold, determining that pixel to be an endpoint pixel.
- Determining the endpoint pixels in the road image according to the feature map further includes: for each pixel whose endpoint confidence is not lower than the endpoint threshold, if none of its neighboring pixels has an endpoint confidence not lower than the endpoint threshold, determining that the pixel is not an endpoint pixel.
- The endpoint pixels located within each preset area range constitute a corresponding endpoint pixel set. Determining the dashed lane line in the road image based on the lane line area and the endpoint pixels includes: determining the endpoint coordinates in the road image according to the endpoint pixels in each endpoint pixel set that are located in the lane line area; and determining the dashed lane line in the road image according to the endpoint coordinates.
- Determining the endpoint coordinates in the road image according to the endpoint pixels in each endpoint pixel set that are located in the lane line area includes: taking a weighted average of the coordinates of the endpoint pixels in a set that are located in the lane line area to obtain the coordinates of one endpoint in the road image.
- Determining the dashed lane line in the road image based on the lane line area and the endpoint pixels further includes: determining the confidence of an endpoint in the road image according to the endpoint confidences of the endpoint pixels in the corresponding set that are located in the lane line area; and removing the determined endpoint if its confidence is lower than a preset threshold.
- Determining the dashed lane line in the road image according to the endpoint coordinates includes: determining, from the endpoint coordinates, the near-end endpoints and far-end endpoints among the endpoints in the road image; and determining the dashed lane line in the road image according to the lane line area and the near-end and far-end endpoints.
- The feature extraction performed on the road image to be detected is executed by a feature extraction network; the determination of the lane line area according to the feature map is executed by a region prediction network; and the determination of the endpoint pixels according to the feature map is executed by an endpoint prediction network.
- The feature extraction network, the region prediction network, and the endpoint prediction network are trained by the following operations: using the feature extraction network to perform feature extraction on a road sample image to obtain a feature map of the road sample image, where the road sample image includes a sample dashed lane line and carries first label information marking the lane line area in the road sample image and second label information marking the endpoint pixels of the sample dashed lane line; using the region prediction network to predict the lane line area in the road sample image based on the feature map to obtain lane line area prediction information; using the endpoint prediction network to predict the endpoint pixels in the road sample image based on the feature map to obtain endpoint pixel prediction information; determining a first network loss according to the difference between the lane line area prediction information and the first label information, and adjusting the network parameters of the feature extraction network and the region prediction network according to the first network loss; and determining a second network loss according to the difference between the endpoint pixel prediction information and the second label information, and adjusting the network parameters of the endpoint prediction network and the feature extraction network according to the second network loss.
- The endpoint pixels marked by the second label information in the road sample image include the pixels of the actual endpoints of the sample dashed lane line and the pixels adjacent to those actual endpoints.
- The method further includes: correcting the positioning information of a smart vehicle on the road shown in the road image according to the determined endpoints of the dashed lane line.
- Correcting the positioning information of the smart vehicle according to the determined endpoint coordinates of the dashed lane line includes: determining a first distance by image ranging according to the determined endpoint coordinates, where the first distance represents the distance between a target endpoint of the dashed lane line and the smart vehicle; determining a second distance according to the positioning information of the smart vehicle and the latitude and longitude of the target endpoint in a driving assistance map used by the smart vehicle, where the second distance represents the distance between the target endpoint and the smart vehicle as determined from the driving assistance map; and correcting the positioning information of the smart vehicle according to the error between the first distance and the second distance.
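The correction step described above can be sketched as follows. This is a minimal one-dimensional illustration, not the patent's implementation: it assumes the vehicle's position error is along the road, and the function name `correct_longitudinal_position` is hypothetical.

```python
def correct_longitudinal_position(pos_along_road, first_distance, second_distance):
    """Correct the vehicle's along-road position estimate.

    first_distance: distance to the target endpoint measured from the
    image (image ranging). second_distance: distance to the same endpoint
    implied by the driving-assistance map and the current positioning.
    If the map-implied distance is larger, the vehicle is actually
    farther along the road than its estimate, so the estimate is
    shifted forward by the error.
    """
    error = second_distance - first_distance
    return pos_along_road + error

# Example: the map implies the endpoint is 15 m ahead, but image
# ranging measures only 12 m, so the estimate shifts 3 m forward.
corrected = correct_longitudinal_position(100.0, 12.0, 15.0)
```

In practice the correction would be applied in the map's coordinate frame rather than as a scalar offset; the scalar form only illustrates how the error between the two distances drives the update.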
- A detection apparatus for dashed lane lines includes: a feature extraction module for performing feature extraction on a road image to be detected to obtain a feature map of the road image; a feature processing module for determining, according to the feature map, the lane line area in the road image and the endpoint pixels in the road image, where the endpoint pixels are pixels that may belong to the endpoints of a dashed lane line in the road image; and a lane line determination module for determining the dashed lane line in the road image based on the lane line area and the endpoint pixels.
- The feature processing module includes an area determination sub-module configured to: determine, according to the feature map, the region confidence of each pixel in the road image, where the region confidence indicates the confidence that the pixel belongs to the lane line area; and determine the area containing the pixels whose region confidence is not lower than an area threshold as the lane line area.
- The feature processing module includes an endpoint pixel sub-module configured to: determine, according to the feature map, the endpoint confidence of each pixel in the road image, where the endpoint confidence represents the confidence that the pixel belongs to an endpoint of the dashed lane line; determine whether the endpoint confidence of each pixel is not lower than an endpoint threshold; and determine at least one pixel whose endpoint confidence is not lower than the endpoint threshold as an endpoint pixel.
- The endpoint pixel sub-module is further configured to: for each pixel whose endpoint confidence is not lower than the endpoint threshold, determine the pixel to be an endpoint pixel if at least one of its neighboring pixels also has an endpoint confidence not lower than the endpoint threshold.
- The endpoint pixel sub-module is further configured to: for each pixel whose endpoint confidence is not lower than the endpoint threshold, determine that the pixel is not an endpoint pixel if none of its neighboring pixels has an endpoint confidence not lower than the endpoint threshold.
- The lane line determination module is configured to: determine the endpoint coordinates in the road image according to the endpoint pixels in each endpoint pixel set that are located in the lane line area; and determine the dashed lane line in the road image according to the endpoint coordinates.
- The lane line determination module is configured to take a weighted average of the coordinates of the endpoint pixels in a set that are located in the lane line area to obtain the coordinates of one endpoint in the road image.
- The lane line determination module is further configured to: determine the confidence of an endpoint in the road image according to the endpoint confidences of the endpoint pixels in the corresponding set that are located in the lane line area; and remove the determined endpoint if its confidence is lower than a preset threshold.
- The lane line determination module is further configured to: determine the near-end endpoints and far-end endpoints among the endpoints in the road image according to the endpoint coordinates; and determine the dashed lane line in the road image according to the lane line area and the near-end and far-end endpoints.
- The feature extraction module is configured to perform feature extraction on the road image to be detected through a feature extraction network to obtain the feature map of the road image; the feature processing module is configured to determine the lane line area in the road image according to the feature map through a region prediction network, and to determine the endpoint pixels in the road image according to the feature map through an endpoint prediction network.
- The apparatus further includes a network training module configured to train the feature extraction network, the region prediction network, and the endpoint prediction network through the following operations: using the feature extraction network to perform feature extraction on a road sample image to obtain a feature map of the road sample image, where the road sample image includes a sample dashed lane line and carries first label information marking the lane line area in the road sample image and second label information marking the endpoint pixels of the sample dashed lane line; using the region prediction network to predict the lane line area in the road sample image according to the feature map to obtain lane line area prediction information; using the endpoint prediction network to predict the endpoint pixels in the road sample image according to the feature map to obtain endpoint pixel prediction information; determining a first network loss according to the difference between the lane line area prediction information and the first label information, and adjusting the network parameters of the feature extraction network and the region prediction network according to the first network loss; and determining a second network loss according to the difference between the endpoint pixel prediction information and the second label information, and adjusting the network parameters of the endpoint prediction network and the feature extraction network according to the second network loss.
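The two-loss training signal described above can be sketched as follows. The patent does not specify the loss functions, so per-pixel binary cross-entropy is used here purely as an illustrative stand-in; the toy prediction and label maps are likewise invented for the example.

```python
import numpy as np

def bce(pred, label, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a predicted
    confidence map and a binary label map."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(label * np.log(pred) + (1.0 - label) * np.log(1.0 - pred)).mean())

# Toy confidence maps from the two prediction heads and their labels
# (first label info = lane line area, second = endpoint pixels).
region_pred = np.array([[0.9, 0.1], [0.8, 0.2]])
region_label = np.array([[1.0, 0.0], [1.0, 0.0]])
endpoint_pred = np.array([[0.7, 0.1], [0.1, 0.1]])
endpoint_label = np.array([[1.0, 0.0], [0.0, 0.0]])

# The first loss adjusts the feature extraction + region prediction
# networks; the second adjusts the feature extraction + endpoint
# prediction networks, so the shared backbone learns from both tasks.
first_loss = bce(region_pred, region_label)
second_loss = bce(endpoint_pred, endpoint_label)
```

Because both losses back-propagate into the shared feature extraction network, the backbone's features serve both the region and endpoint heads.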
- The endpoint pixels marked by the second label information in the road sample image include the pixels of the actual endpoints of the sample dashed lane line and the pixels adjacent to those actual endpoints.
- The apparatus further includes a positioning correction module configured to correct the positioning information of the smart vehicle on the road corresponding to the road image according to the determined endpoints of the dashed lane line.
- The positioning correction module is configured to: determine a first distance by image ranging according to the determined endpoint coordinates of the dashed lane line, where the first distance represents the distance between a target endpoint of the dashed lane line and the smart vehicle; determine a second distance according to the positioning information of the smart vehicle and the latitude and longitude of the target endpoint in the driving assistance map used by the smart vehicle, where the second distance represents the distance between the target endpoint and the smart vehicle as determined from the driving assistance map; and correct the positioning information of the smart vehicle according to the error between the first distance and the second distance.
- An electronic device includes a processor and a memory configured to store instructions executable by the processor to implement the method according to any embodiment of the present disclosure.
- a computer-readable storage medium having a computer program stored thereon, and the computer program can be executed by a processor to implement the method according to any embodiment of the present disclosure.
- With the above solutions, the lane line area and the endpoint pixels can be detected from the road image, and each segment of the dashed lane line can be determined based on them, thereby realizing segment-by-segment detection of the dashed lane line.
- FIG. 1 shows a flowchart of a method for detecting dashed lane lines according to at least one embodiment of the present disclosure.
- FIG. 2 shows a flowchart of another method for detecting dashed lane lines according to at least one embodiment of the present disclosure.
- FIG. 3 shows a schematic diagram of a set of endpoint pixels provided by at least one embodiment of the present disclosure.
- FIG. 4 shows a block diagram of a detection network for dashed lane lines provided by at least one embodiment of the present disclosure.
- FIG. 5 shows a flowchart of a method for training a dashed lane line detection network provided by at least one embodiment of the present disclosure.
- FIG. 6 shows a flowchart of an image processing process provided by at least one embodiment of the present disclosure.
- FIG. 7 shows a flowchart of a method for detecting dashed lane lines according to at least one embodiment of the present disclosure.
- FIG. 8 shows a block diagram of a detection apparatus for dashed lane lines provided by at least one embodiment of the present disclosure.
- FIG. 9 shows a block diagram of another apparatus for detecting dashed lane lines according to at least one embodiment of the present disclosure.
- FIG. 10 shows a block diagram of yet another apparatus for detecting dashed lane lines provided by at least one embodiment of the present disclosure.
- Each dashed lane line on a road generally includes multiple dashed lane line segments, and each segment has two endpoints, which can also serve as road feature points. It is therefore desirable to provide a method that can detect the endpoints of a dashed lane line.
- At least one embodiment of the present disclosure provides a method for detecting dashed lane lines. The method can accurately detect a lane line segment by segment and can also detect the endpoints of the dashed lane line, thereby increasing the number of feature points available for automatic driving.
- FIG. 1 shows a flowchart of a method for detecting dashed lane lines according to at least one embodiment of the present disclosure. The method may include the following steps.
- In step 100, feature extraction is performed on the road image to be detected to obtain a feature map of the road image.
- The road image to be detected contains dashed lane lines.
- In step 102, the lane line area in the road image and the endpoint pixels in the road image are determined according to the feature map.
- The endpoint pixels are pixels that may belong to the endpoints of a dashed lane line in the road image.
- The region confidence of each pixel in the road image can be determined according to the feature map, where the region confidence is the confidence that the pixel belongs to the lane line area; the area containing the pixels whose region confidence is not lower than an area threshold is determined as the lane line area.
- The endpoint confidence of each pixel in the road image may be determined according to the feature map, where the endpoint confidence is the confidence that the pixel belongs to an endpoint of a dashed lane line; it is determined whether the endpoint confidence of each pixel is not lower than an endpoint threshold, and the pixels whose endpoint confidence is not lower than the endpoint threshold are determined as endpoint pixels.
- In step 104, a dashed lane line in the road image is determined based on the lane line area and the endpoint pixels.
- The dashed lane line should lie within the lane line area. Therefore, endpoint pixels that are not in the lane line area can be removed, so that the endpoints of the dashed lane line are determined only from the endpoint pixels located in the lane line area. Each segment of the dashed lane line is then obtained from its endpoints.
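The filtering of endpoint candidates by the lane line area can be sketched as follows; the function name and the (x, y) coordinate convention are illustrative assumptions, not from the patent.

```python
import numpy as np

def endpoints_in_lane_area(endpoint_pixels, lane_mask):
    """Keep only candidate endpoint pixels that fall inside the detected
    lane line area, since the endpoints of a dashed line must lie in it.

    endpoint_pixels: list of (x, y) pixel coordinates.
    lane_mask: HxW boolean array, True where the lane line area is.
    """
    return [(x, y) for (x, y) in endpoint_pixels if lane_mask[y, x]]

lane_mask = np.zeros((4, 4), dtype=bool)
lane_mask[1:3, 1:3] = True  # a small lane-line region
# (3, 0) lies outside the region and is discarded.
kept = endpoints_in_lane_area([(1, 1), (3, 0), (2, 2)], lane_mask)
```
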
- In this way, the lane line area and endpoint pixels can be detected from the road image, and the segments of the dashed lane line can be determined based on them, realizing segment-by-segment detection of the dashed lane line.
- FIG. 2 is a flowchart of another method for detecting dashed lane lines according to at least one embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps.
- In step 200, feature extraction is performed on the road image to be detected to obtain a feature map of the road image.
- The road image may be, for example, a road image collected by a vehicle-mounted camera, a road reflectance image collected by a lidar, or a high-definition road image captured by satellite that can be used for high-precision map production.
- The road image may be an image collected by a smart driving device on the road on which it is traveling, and may include various types of lane lines, such as solid lane lines and dashed lane lines.
- In step 202, the lane line area in the road image is determined according to the feature map.
- The confidence that each pixel in the road image belongs to the lane line area can be determined according to the feature map, and the area containing the pixels whose confidence is not lower than an area threshold is determined as the lane line area.
- The area threshold can be set in advance. If the confidence that a pixel belongs to the lane line area is not lower than the area threshold, the pixel is considered to belong to the lane line area; if the confidence is lower than the area threshold, the pixel is considered not to belong to the lane line area.
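The thresholding in step 202 can be sketched as follows; the function name and the example threshold of 0.5 are illustrative assumptions, since the patent leaves the area threshold unspecified.

```python
import numpy as np

def lane_line_area(region_conf, area_threshold=0.5):
    """Binarize a per-pixel region-confidence map into a lane-line mask.

    region_conf: HxW array of confidences that each pixel belongs to the
    lane line area. Pixels whose confidence is not lower than the
    threshold are kept as lane-line-area pixels.
    """
    return region_conf >= area_threshold

conf = np.array([[0.9, 0.2],
                 [0.6, 0.4]])
mask = lane_line_area(conf, area_threshold=0.5)
```
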
- In step 204, the endpoint confidence that each pixel in the road image belongs to an endpoint of the dashed lane line is determined according to the feature map.
- In step 206, the pixels whose endpoint confidence is not lower than an endpoint threshold are selected.
- The endpoint threshold can be set in advance. If the endpoint confidence of a pixel is lower than the endpoint threshold, the pixel is considered not to belong to an endpoint of the dashed lane line and can be deleted from the endpoint prediction result. If the endpoint confidence of a pixel is not lower than the endpoint threshold, the pixel may belong to an endpoint of the dashed lane line.
- In step 208, for each pixel whose endpoint confidence is not lower than the endpoint threshold, it is determined whether at least one of its neighboring pixels also has an endpoint confidence not lower than the endpoint threshold.
- The selected pixels can thus be further screened. If at least one neighboring pixel of a selected pixel has an endpoint confidence not lower than the endpoint threshold, the pixel is retained. If all neighboring pixels of a selected pixel have endpoint confidences lower than the endpoint threshold, the pixel is an isolated point. An actual endpoint of a dashed lane line should have multiple adjacent candidate pixels, so such isolated points are unlikely to be endpoints of the dashed lane line and can be eliminated.
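The screening in steps 206-208 can be sketched as follows. The 8-connected neighborhood is an assumption (the patent only speaks of "adjacent pixels"), and the function name is illustrative.

```python
import numpy as np

def filter_isolated_endpoints(endpoint_conf, endpoint_threshold=0.5):
    """Select endpoint pixels: a pixel is kept only if its endpoint
    confidence is not lower than the threshold AND at least one of its
    8 neighbors is also a candidate. Lone candidates are treated as
    noise, since a real endpoint yields a cluster of candidate pixels.
    """
    cand = endpoint_conf >= endpoint_threshold
    keep = np.zeros_like(cand)
    h, w = cand.shape
    for y in range(h):
        for x in range(w):
            if not cand[y, x]:
                continue
            window = cand[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            # the window contains the pixel itself, so require > 1
            if window.sum() > 1:
                keep[y, x] = True
    return keep

conf = np.array([[0.9, 0.8, 0.0],
                 [0.7, 0.0, 0.0],
                 [0.0, 0.0, 0.9]])  # bottom-right is an isolated point
keep = filter_isolated_endpoints(conf)
```
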
- If the judgment result of step 208 is yes, step 210 is executed; if the judgment result is no, that is, the pixel is an isolated point, step 212 is executed.
- In step 210, the pixel is determined to be an endpoint pixel, and the process proceeds to step 214.
- In step 212, the pixel is determined not to be an endpoint pixel.
- In step 214, the endpoint coordinates in the road image are determined according to the endpoint pixels in each endpoint pixel set that are located in the lane line area.
- An endpoint of a dashed lane line may cover multiple pixels, and these pixels may be the predicted endpoint pixels described above.
- The coordinates of the endpoint pixels in an endpoint pixel set that are located in the lane line area may be weighted and averaged to obtain the coordinates of one endpoint in the road image.
- An endpoint pixel set is a set of at least one endpoint pixel within a preset area range. For example, the multiple endpoint pixels at the end of one segment of a dashed lane line, together with the endpoint pixels in its neighborhood, can form an endpoint pixel set. An endpoint pixel set can therefore include the pixels corresponding to the end of one dashed lane line segment and the pixels in its neighborhood.
- As shown in FIG. 3, at least one endpoint pixel, for example endpoint pixel 31, is included in the preset area range L, and these endpoint pixels constitute an endpoint pixel set. According to these endpoint pixels, the coordinates of a corresponding endpoint 32 can be determined.
- The endpoint 32 may be an endpoint of one dashed lane line segment of the dashed lane line in the road image.
- The coordinates of all these endpoint pixels can be weighted and averaged. With the coordinates of each endpoint pixel expressed as (x, y), the x0 coordinate of endpoint 32 can be obtained as the weighted average of the x coordinates of all the endpoint pixels, and the y0 coordinate as the weighted average of their y coordinates. In this way, the coordinates (x0, y0) of the endpoint 32 are obtained.
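The weighted averaging can be sketched as follows. Using the pixels' endpoint confidences as the weights is one natural choice, but the patent does not fix the weighting, so treat it as an assumption; the function name is also illustrative.

```python
import numpy as np

def endpoint_from_cluster(coords, weights):
    """Fuse one endpoint pixel set (the candidate pixels located in the
    lane line area for one endpoint) into a single endpoint coordinate
    (x0, y0) by weighted averaging.

    coords: sequence of (x, y) pixel coordinates, shape (N, 2).
    weights: per-pixel weights, e.g. endpoint confidences.
    """
    coords = np.asarray(coords, dtype=float)
    w = np.asarray(weights, dtype=float)
    x0, y0 = (coords * w[:, None]).sum(axis=0) / w.sum()
    return x0, y0

# Three candidate pixels; the last one counts double.
x0, y0 = endpoint_from_cluster([(10, 20), (12, 20), (11, 22)], [1.0, 1.0, 2.0])
```
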
- The confidence of each endpoint in the road image may also be determined from the endpoint confidences of the endpoint pixels in the corresponding set that are located in the lane line area, for example by taking a weighted average of these endpoint confidences. Any determined endpoint whose confidence is lower than a preset threshold is then removed. In this way, some distant, blurry endpoints in the road image can be eliminated.
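The confidence fusion and removal of faint endpoints can be sketched as follows; the plain average, the threshold value of 0.6, and the function name are all illustrative assumptions.

```python
import numpy as np

def endpoint_confidence(pixel_confidences, weights=None):
    """Confidence of a fused endpoint, obtained as a weighted average of
    the endpoint confidences of the pixels in its endpoint pixel set.
    With weights=None a plain average is used (one possible weighting).
    """
    c = np.asarray(pixel_confidences, dtype=float)
    w = np.ones_like(c) if weights is None else np.asarray(weights, dtype=float)
    return float((c * w).sum() / w.sum())

# Distant, blurry endpoints yield low fused confidence and are removed.
preset_threshold = 0.6
sharp = endpoint_confidence([0.9, 0.8, 0.85])   # kept
blurry = endpoint_confidence([0.3, 0.25, 0.2])  # removed
```
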
- In step 216, the dashed lane line in the road image is determined according to the endpoint coordinates in the road image.
- The near-end and far-end endpoints among the endpoints in the road image may be determined. For example, of the two endpoints of one dashed lane line segment, the one closer to the intelligent driving device carrying the image capture device can be called the near-end endpoint, and the one farther from the intelligent driving device can be called the far-end endpoint. The dashed lane line in the road image can then be determined according to the lane line area and the near-end and far-end endpoints: connecting a near-end endpoint with its corresponding far-end endpoint, combined with the lane line area, yields one segment of the dashed lane line.
- multiple endpoints located in a lane line area can be sorted by coordinates, and the starting point and end point of each dashed lane line segment can be determined.
- For example, the image height direction of the road image can be taken as the y direction. The endpoints in a lane line area can then be sorted by their y coordinates: the endpoint with the smaller y coordinate is determined to be the near-end endpoint, and the endpoint with the larger y coordinate is determined to be the far-end endpoint.
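The sorting and pairing steps above can be sketched as follows, assuming the endpoints of one lane line area are already known and that consecutive endpoints along y belong to the same dashed segment; the pairing rule and the function name are illustrative assumptions, and the near/far convention (smaller y = near end) follows the text above.

```python
def pair_segment_endpoints(endpoints):
    """Pair the endpoints of one lane line area into dashed segments.

    endpoints: list of (x, y) endpoint coordinates in one lane line area.
    Returns a list of (near_endpoint, far_endpoint) pairs, one per
    dashed lane line segment.
    """
    # Sort by y coordinate; consecutive endpoints are assumed to belong
    # to the same dashed segment.
    ordered = sorted(endpoints, key=lambda p: p[1])
    pairs = []
    for i in range(0, len(ordered) - 1, 2):
        # Per the description, the smaller-y endpoint is the near end
        # and the larger-y endpoint is the far end.
        near, far = ordered[i], ordered[i + 1]
        pairs.append((near, far))
    return pairs
```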
- In the above manner, the endpoint pixels can be filtered, and only the endpoint pixels located in the lane line area are retained.
- The endpoints of the dashed lane line can be determined based on the filtered endpoint pixels, and then, based on the confidence of each endpoint, some distant, blurry endpoints in the road image can be eliminated. In this way, the accuracy of detecting the endpoints of the dashed lane line can be improved, and with it the detection accuracy of the dashed lane line.
- the above-mentioned detection method of dashed lane lines can be implemented by a pre-trained detection network of dashed lane lines.
- FIG. 4 illustrates a block diagram of a detection network for dashed lane lines.
- the detection network 40 may include: a feature extraction network 41, an area prediction network 42 and an endpoint prediction network 43.
- the feature extraction network 41 can extract image features from the input road image to obtain a feature map of the road image.
- the area prediction network 42 can predict the lane line area based on the feature map of the road image, that is, predict the probability that each pixel in the road image belongs to the pixel in the lane line area.
- When the detection network 40 has not been fully trained, there may be a certain prediction error; for example, pixels that are not located in the lane line area may also be predicted as pixels in the lane line area.
- the endpoint prediction network 43 can predict and output endpoint pixels according to the feature map of the road image, that is, predict the probability that each pixel in the road image is an endpoint pixel.
- the prediction output of the detection network 40 may be the confidence that the pixel belongs to a certain category.
- the regional prediction network 42 can predict the confidence that each pixel in the output road image belongs to the lane line area
- the endpoint prediction network 43 can predict the confidence that each pixel in the output road image belongs to the endpoint pixel.
- Each road sample image can contain dashed lane lines, and also carry lane line area label information and endpoint pixel label information.
- the lane line area label information marks the lane line area in the road sample image, that is, marks those pixels in the road sample image that belong to the lane line area.
- the endpoint pixel label information marks the endpoint pixels of the dashed lane line in the road sample image, that is, the pixels at the two endpoints of each segment of the dashed lane line are marked as the endpoint pixels.
- A segment of the dashed lane line has two endpoints; a preset area range can be marked at each of the two endpoints, and all the pixels within that range are marked as endpoint pixels.
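The labeling scheme can be sketched as follows. This is an illustrative NumPy sketch; the square preset area range (Chebyshev radius `radius`) and the function name are assumptions, since the disclosure does not fix the shape of the preset area range.

```python
import numpy as np

def make_endpoint_label(h, w, endpoints, radius=2):
    """Build an endpoint-pixel label map for one road sample image.

    Every pixel within `radius` (Chebyshev distance, i.e. a square
    preset area range) of an actual endpoint is marked as a positive
    endpoint pixel, which raises the proportion of positive samples.
    """
    label = np.zeros((h, w), dtype=np.uint8)
    for x, y in endpoints:
        # Clip the square region to the image bounds before marking.
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        label[y0:y1, x0:x1] = 1
    return label
```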
- FIG. 5 shows a flowchart of a method for training the detection network of the dashed lane line shown in FIG. 4 provided by at least one embodiment of the present disclosure. As shown in Figure 5, the method may include the following steps.
- In step 500, the obtained multiple road sample images are input to the feature extraction network 41. Each road sample image includes a dashed lane line to be detected, and also carries lane line area label information and endpoint pixel label information.
- In step 502, the image features of each input road sample image are extracted through the feature extraction network 41 to obtain a corresponding feature map.
- FIG. 6 takes the feature extraction network 41 as an FCN (Fully Convolutional Network) as an example, and shows how the detection network 40 processes an input road sample image.
- The input road sample image can be convolved (down-sampled) multiple times to obtain the high-dimensional feature conv1 of the road sample image.
- The high-dimensional feature conv1 can then be deconvolved (up-sampled) to obtain the feature map us_conv1.
- the feature map us_conv1 can be input to the regional prediction network 42 and the endpoint prediction network 43, respectively.
- In step 504, the feature map (such as the feature map us_conv1) is input into the area prediction network 42 and the endpoint prediction network 43, respectively; the area prediction network 42 predicts and outputs the lane line region in the road sample image, and the endpoint prediction network 43 predicts and outputs the endpoint pixels in the road sample image.
- the regional prediction network 42 can be used to predict the confidence that each pixel in the road sample image belongs to the lane line area
- the endpoint prediction network 43 can predict the confidence that each pixel in the road sample image belongs to the endpoint pixel.
- In step 506, based on the prediction results, the network parameters of the feature extraction network 41, the area prediction network 42, and the endpoint prediction network 43 are adjusted.
- Specifically, the first network loss can be determined based on the difference between the predicted lane line area in the road sample image and the lane line area marked by the lane line area label information, and the network parameters of the feature extraction network 41 and of the area prediction network 42 are adjusted according to the first network loss. Similarly, the second network loss can be determined according to the difference between the predicted endpoint pixels in the road sample image and the endpoint pixels marked by the endpoint pixel label information, and the network parameters of the endpoint prediction network 43 and of the feature extraction network 41 are adjusted according to the second network loss.
- the network parameters in the detection network 40 can be adjusted through back propagation.
- When the end condition of the network iteration is met, the network training ends.
- the end condition may be that the iteration reaches a certain number of times, or the loss value is less than a certain threshold.
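As a sketch of how the two losses might be computed, assuming a per-pixel binary cross-entropy between each predicted confidence map and its label map; the disclosure does not name a specific loss function, so both the loss choice and the function name are assumptions.

```python
import numpy as np

def bce_loss(pred, label, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a predicted
    confidence map and a binary label map."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-(label * np.log(pred) + (1 - label) * np.log(1 - pred)).mean())

# Hypothetical usage with the two heads of the detection network:
#   first_loss  = bce_loss(area_pred, area_label)  # adjusts networks 41 and 42
#   second_loss = bce_loss(end_pred, end_label)    # adjusts networks 41 and 43
```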
- The proportion of positive samples can be increased to improve the detection accuracy of the trained detection network. Specifically, the range of the endpoint pixels marked by the endpoint pixel label information in the road sample image can be expanded, so that the marked endpoint pixels include not only the pixels of the actual endpoints of the dashed lane line in the road sample image but also the neighboring pixels of those actual endpoint pixels. In this way, more pixels are marked as endpoint pixels and the proportion of positive samples is increased.
- Fig. 7 is a flow chart of a method for detecting a dashed lane line using a trained detection network according to an embodiment of the disclosure.
- the trained detection network may include a feature extraction network, a regional prediction network, and an endpoint prediction network.
- the method may include the following steps.
- a road image to be detected is received.
- the road image may be an image of the road on which the smart driving device is traveling.
- In step 702, the image features of the road image are extracted through the feature extraction network to obtain a feature map of the road image.
- the feature extraction network can obtain the feature map of the road image through multiple convolution, deconvolution and other operations.
- In step 704, the feature map is input into the area prediction network and the endpoint prediction network respectively; the area prediction network predicts and outputs the lane line area in the road image, and the endpoint prediction network predicts and outputs the endpoint pixels in the road image.
- the feature map obtained in step 702 can be input to two parallel branch networks, namely, the regional prediction network and the endpoint prediction network.
- Through the area prediction network, a first prediction result of the lane line area in the road image can be predicted and output, including a first confidence that each pixel in the road image belongs to the lane line area. Likewise, a second prediction result of the endpoint pixels in the road image can be predicted and output through the endpoint prediction network, including a second confidence that each pixel in the road image is an endpoint pixel.
- the lane line area may be determined according to the pixels with the first confidence level not lower than the area threshold.
- An area threshold can be set: if the first confidence of a pixel is not lower than the area threshold, the pixel is considered to belong to the lane line area; if the first confidence of a pixel is lower than the area threshold, the pixel is considered not to belong to the lane line area.
- an endpoint threshold may also be set. If the second confidence of a pixel is lower than the endpoint threshold, it can be considered that the pixel does not belong to the endpoint pixel, that is, the pixel with the second confidence lower than the endpoint threshold is deleted from the prediction result of the endpoint pixel. If the second confidence of a pixel is not lower than the endpoint threshold, it can be considered that the pixel belongs to the endpoint pixel.
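The two thresholding rules can be sketched together as follows (a NumPy sketch; the default thresholds of 0.5 and the restriction of endpoint pixels to the lane line area, which step 706 below performs, are illustrative assumptions):

```python
import numpy as np

def threshold_predictions(area_conf, end_conf, area_thr=0.5, end_thr=0.5):
    """Binarize the two confidence maps output by the prediction heads.

    area_conf: (H, W) first confidences (pixel belongs to lane line area).
    end_conf:  (H, W) second confidences (pixel is an endpoint pixel).
    Returns boolean masks of lane-line-area pixels and endpoint pixels.
    """
    lane_mask = area_conf >= area_thr  # first confidence vs. area threshold
    end_mask = end_conf >= end_thr     # second confidence vs. endpoint threshold
    # Keep only endpoint pixels that fall inside the lane line area.
    return lane_mask, end_mask & lane_mask
```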
- In step 706, at least one endpoint pixel located in the lane line area among the predicted endpoint pixels is obtained.
- That is, in step 706 the two prediction results obtained in step 704 can be integrated, and only the endpoint pixels located in the lane line area are retained.
- As described above, the predicted endpoint pixels can be further screened. If at least one adjacent pixel of an endpoint pixel has a second confidence not lower than the endpoint threshold, the endpoint pixel is retained. If the second confidences of all adjacent pixels of an endpoint pixel are lower than the endpoint threshold, the endpoint pixel is an isolated point. An actual endpoint of a dashed lane line should cover multiple adjacent pixels, so such isolated points are unlikely to be endpoints of the dashed lane line and can be excluded.
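The isolated-point screening can be sketched as follows, assuming an 8-connected neighbourhood on an already-binarized endpoint mask; the disclosure does not fix the neighbourhood definition, so this choice and the function name are assumptions.

```python
import numpy as np

def drop_isolated_endpoints(end_mask):
    """Remove endpoint pixels none of whose 8 neighbours is an endpoint pixel.

    A real dashed-lane-line endpoint covers several adjacent pixels, so
    an endpoint pixel with no endpoint neighbour is treated as an
    isolated false positive and excluded.
    """
    m = end_mask.astype(np.uint8)
    padded = np.pad(m, 1)
    # Count endpoint pixels in each 3x3 neighbourhood, excluding the centre.
    neigh = sum(
        padded[1 + dy : 1 + dy + m.shape[0], 1 + dx : 1 + dx + m.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    return end_mask & (neigh > 0)
```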
- In step 708, the endpoint coordinates of the dashed lane line are determined according to the obtained at least one endpoint pixel.
- In step 710, a dashed lane line is determined according to the coordinates of the endpoints located in the same lane line area.
- steps 702 to 710 can be implemented in a related manner in the embodiment described above with reference to FIG. 1 or FIG. 2, and will not be repeated here.
- the end point can be used to assist the positioning of the intelligent driving device.
- Intelligent driving equipment includes various intelligent vehicles such as self-driving vehicles or vehicles with assisted driving systems.
- the detected dashed lane lines and endpoint coordinates can also be used in the production of high-precision maps.
- the positioning information of the smart vehicle on the road corresponding to the road image can be corrected according to the endpoint of the detected dashed lane line.
- For example, a first distance is determined by an image ranging method, where the first distance represents the distance between a detected target endpoint of the dashed lane line and the smart vehicle.
- the target end point may be the end point of the nearest segment of the dashed lane line in front of the smart vehicle. For example, if the smart vehicle travels another 10 meters to reach the target endpoint, the first distance is 10 meters.
- A second distance is determined according to the latitude and longitude of the smart vehicle's own positioning and the latitude and longitude of the target endpoint in the driving assistance map used by the smart vehicle; the second distance represents the distance between the target endpoint and the smart vehicle as determined from the driving assistance map.
- According to the error between the first distance and the second distance, the positioning latitude and longitude of the smart vehicle itself is corrected. For example, if the first distance is 10 meters and the second distance is 8 meters, the error between them is 2 meters, and the positioning latitude and longitude of the smart vehicle can be corrected accordingly.
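The correction in this example reduces to comparing the two distances (a trivial sketch under the assumptions above; how the resulting signed error is then applied to the latitude and longitude, e.g. along the heading direction, is not specified by the disclosure):

```python
def positioning_error_along_lane(first_distance_m, second_distance_m):
    """Longitudinal positioning error derived from a dashed-lane-line endpoint.

    first_distance_m: image-ranging distance from the vehicle to the
        target endpoint (the endpoint of the nearest dashed segment ahead).
    second_distance_m: distance to the same endpoint computed from the
        vehicle's own positioning and the endpoint's latitude/longitude
        in the driving assistance map.
    Returns the signed error (metres) by which the map-based position
    should be corrected along the lane direction.
    """
    return first_distance_m - second_distance_m
```

With the 10 m / 8 m example above, the function returns an error of 2 m.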
- FIG. 8 provides a detection device for dashed lane lines.
- the device may include: a feature extraction module 81, a feature processing module 82 and a lane line determination module 83.
- the feature extraction module 81 is configured to perform feature extraction on a road image to be detected to obtain a feature map of the road image;
- the feature processing module 82 is configured to determine the lane line area in the road image and the endpoint pixels in the road image according to the feature map; the endpoint pixels are the dashed lane lines in the road image Pixels of the endpoints;
- the lane line determination module 83 is configured to determine the dashed lane line in the road image based on the lane line area and the endpoint pixel points.
- the feature processing module 82 includes:
- The area determination sub-module 821 is configured to: determine the area confidence of each pixel in the road image according to the feature map, where the area confidence indicates the confidence that the pixel belongs to the lane line area; and determine the area comprising the pixels whose area confidence is not lower than an area threshold as the lane line area.
- The endpoint pixel sub-module 822 is configured to: determine the endpoint confidence of each pixel in the road image according to the feature map, where the endpoint confidence indicates the confidence that the pixel belongs to an endpoint of a dashed lane line; determine whether the endpoint confidence of each pixel is not lower than an endpoint threshold; and determine at least one pixel whose endpoint confidence is not lower than the endpoint threshold as an endpoint pixel.
- The endpoint pixel sub-module 822 is further configured to: for each pixel whose endpoint confidence is not lower than the endpoint threshold, if at least one adjacent pixel of the pixel has an endpoint confidence not lower than the endpoint threshold, determine the pixel as an endpoint pixel.
- The endpoint pixel sub-module 822 is further configured to: for each pixel whose endpoint confidence is not lower than the endpoint threshold, if no adjacent pixel of the pixel has an endpoint confidence not lower than the endpoint threshold, determine that the pixel is not an endpoint pixel.
- the lane line determination module 83 is configured to: determine the endpoint coordinates in the road image according to the endpoint pixel points in each of the endpoint pixel points set and located in the lane line area; The endpoint coordinates in the road image determine the dashed lane line in the road image.
- the lane line determination module 83 performs a weighted average on the coordinates of the endpoint pixel points located in the lane line area in the set of endpoint pixels to obtain the coordinates of an endpoint in the road image.
- the lane line determination module 83 is further configured to: determine the confidence of an endpoint in the road image according to the endpoint confidence of the endpoint pixel in the set of endpoint pixels and located in the lane line area Degree; if the confidence of the determined endpoint is lower than the preset threshold, the determined endpoint is removed.
- the lane line determination module 83 is further configured to: determine the near end point and the far end point in the end point in the road image according to the end point coordinates in the road image; according to the lane line area and The near-end end point and the far-end end point among the end points in the road image determine the dashed lane line in the road image.
- the feature extraction module 81 is configured to perform feature extraction on a road image to be detected through a feature extraction network to obtain a feature map of the road image;
- the feature processing module 82 is configured to determine the lane line area in the road image according to the feature map through an area prediction network, and determine the endpoint pixel points in the road image according to the feature map through an endpoint prediction network.
- The device may further include a network training module for training the feature extraction network, the area prediction network, and the endpoint prediction network through the following steps: performing feature extraction on a road sample image using the feature extraction network to obtain a feature map of the road sample image, where the road sample image includes a sample dashed lane line and also carries first label information marking the lane line area in the road sample image and second label information marking the endpoint pixels of the sample dashed lane line; predicting the lane line area in the road sample image according to the feature map using the area prediction network to obtain lane line area prediction information; predicting the endpoint pixels in the road sample image according to the feature map using the endpoint prediction network to obtain endpoint pixel prediction information; determining a first network loss according to the difference between the lane line area prediction information and the first label information, and adjusting the network parameters of the feature extraction network and of the area prediction network according to the first network loss; and determining a second network loss according to the difference between the endpoint pixel prediction information and the second label information, and adjusting the network parameters of the endpoint prediction network and of the feature extraction network according to the second network loss.
- The endpoint pixels marked by the second label information in the road sample image include: the pixels of the actual endpoints of the sample dashed lane line and the neighboring pixels of those actual endpoint pixels.
- The device may further include: a positioning correction module 84, configured to correct the positioning information of the intelligent vehicle on the road corresponding to the road image according to the determined endpoint coordinates of the dashed lane line.
- The positioning correction module 84 is specifically configured to: determine a first distance through an image ranging method according to the determined endpoint coordinates of the dashed lane line, the first distance representing the distance between a target endpoint of the determined dashed lane line and the smart vehicle; determine a second distance according to the positioning information of the smart vehicle and the latitude and longitude of the target endpoint in the driving assistance map used by the smart vehicle, the second distance representing the distance between the target endpoint and the smart vehicle as determined from the driving assistance map; and correct the positioning information of the smart vehicle according to the error between the first distance and the second distance.
- The present disclosure also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the processor is caused to implement the method for detecting a dashed lane line according to any embodiment of the present disclosure.
- the present disclosure also provides an electronic device.
- the electronic device includes a processor and a memory for storing instructions executable by the processor.
- The instructions, when executed, cause the processor to implement the method of any of the embodiments of the present disclosure.
- One or more embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, one or more embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
- the embodiments of the present disclosure also provide a computer-readable storage medium, and the storage medium may store a computer program.
- When the program is executed by a processor, the neural network for detecting dashed lane lines described in any of the embodiments of the present disclosure is implemented.
- the "and/or" means having at least one of the two, for example, "A and/or B" includes three schemes: A, B, and "A and B".
- Embodiments of the subject matter described in the present disclosure can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier to be executed by a data processing device or to control the operation of the data processing device. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information for transmission to a suitable receiver device for execution by a data processing device.
- the computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
- the processing and logic flow described in the present disclosure can be executed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating according to input data and generating output.
- the processing and logic flow can also be executed by a dedicated logic circuit, such as FPGA (Field Programmable Gate Array) or ASIC (Application Specific Integrated Circuit), and the device can also be implemented as a dedicated logic circuit.
- Computers suitable for executing computer programs include, for example, general-purpose and/or special-purpose microprocessors, or any other type of central processing unit.
- the central processing unit will receive instructions and data from a read-only memory and/or a random access memory.
- the basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data.
- Generally, the computer will also include one or more mass storage devices for storing data, such as magnetic disks, magneto-optical disks, or optical disks, or the computer will be operatively coupled to such mass storage devices to receive data from them, transmit data to them, or both.
- However, a computer need not have such devices.
- Moreover, the computer can be embedded in another device, such as a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive, to name just a few.
- Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (such as EPROM, EEPROM, and flash memory devices), magnetic disks (such as internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.
- the processor and the memory can be supplemented by or incorporated into a dedicated logic circuit.
Abstract
Description
Claims (30)
- 1. A method for detecting a dashed lane line, comprising: performing feature extraction on a road image to be detected to obtain a feature map of the road image; determining, according to the feature map, a lane line area in the road image and endpoint pixels in the road image, the endpoint pixels being pixels in the road image that possibly belong to endpoints of a dashed lane line; and determining the dashed lane line in the road image based on the lane line area and the endpoint pixels.
- 2. The method according to claim 1, wherein determining the lane line area in the road image according to the feature map comprises: determining an area confidence of each pixel in the road image according to the feature map, the area confidence indicating a confidence that the pixel belongs to the lane line area; and determining an area comprising pixels whose area confidence is not lower than an area threshold as the lane line area.
- 3. The method according to claim 1 or 2, wherein determining the endpoint pixels in the road image according to the feature map comprises: determining an endpoint confidence of each pixel in the road image according to the feature map, the endpoint confidence indicating a confidence that the pixel belongs to an endpoint of a dashed lane line; determining whether the endpoint confidence of each pixel is not lower than an endpoint threshold; and determining at least one pixel whose endpoint confidence is not lower than the endpoint threshold as an endpoint pixel.
- 4. The method according to claim 3, wherein determining the at least one pixel whose endpoint confidence is not lower than the endpoint threshold as the endpoint pixel comprises: for each pixel whose endpoint confidence is not lower than the endpoint threshold, if it is determined that at least one adjacent pixel of the pixel has an endpoint confidence not lower than the endpoint threshold, determining the pixel as an endpoint pixel.
- 5. The method according to claim 3, wherein determining the endpoint pixels in the road image according to the feature map further comprises: for each pixel whose endpoint confidence is not lower than the endpoint threshold, if it is determined that no adjacent pixel of the pixel has an endpoint confidence not lower than the endpoint threshold, determining that the pixel is not an endpoint pixel.
- 6. The method according to any one of claims 1 to 5, wherein the endpoint pixels located within each preset area range constitute a corresponding endpoint pixel set; and determining the dashed lane line in the road image based on the lane line area and the endpoint pixels comprises: determining endpoint coordinates in the road image according to the endpoint pixels in each endpoint pixel set that are located in the lane line area; and determining the dashed lane line in the road image according to the endpoint coordinates in the road image.
- 7. The method according to claim 6, wherein determining the endpoint coordinates in the road image according to the endpoint pixels in each endpoint pixel set that are located in the lane line area comprises: performing a weighted average on the coordinates of the endpoint pixels in the endpoint pixel set that are located in the lane line area to obtain the coordinates of one endpoint in the road image.
- 8. The method according to claim 7, wherein determining the dashed lane line in the road image based on the lane line area and the endpoint pixels further comprises: determining a confidence of one endpoint in the road image according to the endpoint confidences of the endpoint pixels in the endpoint pixel set that are located in the lane line area; and removing the determined endpoint if its confidence is lower than a preset threshold.
- 9. The method according to any one of claims 6 to 8, wherein determining the dashed lane line in the road image according to the endpoint coordinates in the road image comprises: determining near-end endpoints and far-end endpoints among the endpoints in the road image according to the endpoint coordinates in the road image; and determining the dashed lane line in the road image according to the lane line area and the near-end endpoints and far-end endpoints among the endpoints in the road image.
- 10. The method according to any one of claims 1 to 9, wherein performing feature extraction on the road image to be detected to obtain the feature map of the road image is performed by a feature extraction network; determining the lane line area in the road image according to the feature map is performed by an area prediction network; and determining the endpoint pixels in the road image according to the feature map is performed by an endpoint prediction network.
- 11. The method according to claim 10, wherein the feature extraction network, the area prediction network, and the endpoint prediction network are trained through the following operations: performing feature extraction on a road sample image using the feature extraction network to obtain a feature map of the road sample image, the road sample image including a sample dashed lane line and also carrying first label information marking a lane line area in the road sample image and second label information marking endpoint pixels of the sample dashed lane line; predicting the lane line area in the road sample image according to the feature map of the road sample image using the area prediction network to obtain lane line area prediction information; predicting the endpoint pixels in the road sample image according to the feature map of the road sample image using the endpoint prediction network to obtain endpoint pixel prediction information; determining a first network loss according to the difference between the lane line area prediction information and the first label information, and adjusting network parameters of the feature extraction network and network parameters of the area prediction network according to the first network loss; and determining a second network loss according to the difference between the endpoint pixel prediction information and the second label information, and adjusting network parameters of the endpoint prediction network and the network parameters of the feature extraction network according to the second network loss.
- 12. The method according to claim 11, wherein the endpoint pixels marked by the second label information in the road sample image include: pixels of actual endpoints of the sample dashed lane line and pixels adjacent to the pixels of the actual endpoints.
- 13. The method according to any one of claims 6 to 9, further comprising: correcting positioning information of an intelligent vehicle on the road corresponding to the road image according to the determined endpoint coordinates of the dashed lane line.
- 14. The method according to claim 13, wherein correcting the positioning information of the intelligent vehicle on the road corresponding to the road image according to the determined endpoint coordinates of the dashed lane line comprises: determining a first distance through an image ranging method according to the determined endpoint coordinates of the dashed lane line, the first distance representing a distance between a target endpoint of the determined dashed lane line and the intelligent vehicle; determining a second distance according to the positioning information of the intelligent vehicle and the longitude and latitude of the target endpoint in a driving assistance map used by the intelligent vehicle, the second distance representing a distance between the target endpoint and the intelligent vehicle determined according to the driving assistance map; and correcting the positioning information of the intelligent vehicle according to an error between the first distance and the second distance.
- 15. A device for detecting a dashed lane line, comprising: a feature extraction module configured to perform feature extraction on a road image to be detected to obtain a feature map of the road image; a feature processing module configured to determine, according to the feature map, a lane line area in the road image and endpoint pixels in the road image, the endpoint pixels being pixels in the road image that possibly belong to endpoints of a dashed lane line; and a lane line determination module configured to determine the dashed lane line in the road image based on the lane line area and the endpoint pixels.
- 16. The device according to claim 15, wherein the feature processing module comprises: an area determination sub-module configured to determine an area confidence of each pixel in the road image according to the feature map, the area confidence indicating a confidence that the pixel belongs to the lane line area, and determine an area comprising pixels whose area confidence is not lower than an area threshold as the lane line area.
- 17. The device according to claim 15 or 16, wherein the feature processing module comprises: an endpoint pixel sub-module configured to determine an endpoint confidence of each pixel in the road image according to the feature map, the endpoint confidence indicating a confidence that the pixel belongs to an endpoint of a dashed lane line, determine whether the endpoint confidence of each pixel is not lower than an endpoint threshold, and determine at least one pixel whose endpoint confidence is not lower than the endpoint threshold as an endpoint pixel.
- 18. The device according to claim 17, wherein the endpoint pixel sub-module is further configured to: for each pixel whose endpoint confidence is not lower than the endpoint threshold, if it is determined that at least one adjacent pixel of the pixel has an endpoint confidence not lower than the endpoint threshold, determine the pixel as an endpoint pixel.
- 19. The device according to claim 17, wherein the endpoint pixel sub-module is further configured to: for each pixel whose endpoint confidence is not lower than the endpoint threshold, if it is determined that no adjacent pixel of the pixel has an endpoint confidence not lower than the endpoint threshold, determine that the pixel is not an endpoint pixel.
- 20. The device according to any one of claims 15 to 19, wherein the lane line determination module is configured to: determine endpoint coordinates in the road image according to the endpoint pixels in each endpoint pixel set that are located in the lane line area; and determine the dashed lane line in the road image according to the endpoint coordinates in the road image.
- 21. The device according to claim 20, wherein the lane line determination module is configured to perform a weighted average on the coordinates of the endpoint pixels in the endpoint pixel set that are located in the lane line area to obtain the coordinates of one endpoint in the road image.
- 22. The device according to claim 21, wherein the lane line determination module is further configured to: determine a confidence of one endpoint in the road image according to the endpoint confidences of the endpoint pixels in the endpoint pixel set that are located in the lane line area; and remove the determined endpoint if its confidence is lower than a preset threshold.
- 23. The device according to any one of claims 20 to 22, wherein the lane line determination module is further configured to: determine near-end endpoints and far-end endpoints among the endpoints in the road image according to the endpoint coordinates in the road image; and determine the dashed lane line in the road image according to the lane line area and the near-end endpoints and far-end endpoints among the endpoints in the road image.
- 24. The device according to any one of claims 15 to 23, wherein the feature extraction module is configured to perform feature extraction on the road image to be detected through a feature extraction network to obtain the feature map of the road image; and the feature processing module is configured to determine the lane line area in the road image according to the feature map through an area prediction network, and determine the endpoint pixels in the road image according to the feature map through an endpoint prediction network.
- 25. The device according to claim 24, further comprising a network training module configured to train the feature extraction network, the area prediction network, and the endpoint prediction network through the following operations: performing feature extraction on a road sample image using the feature extraction network to obtain a feature map of the road sample image, the road sample image including a sample dashed lane line and also carrying first label information marking a lane line area in the road sample image and second label information marking endpoint pixels of the sample dashed lane line; predicting the lane line area in the road sample image according to the feature map of the road sample image using the area prediction network to obtain lane line area prediction information; predicting the endpoint pixels in the road sample image according to the feature map of the road sample image using the endpoint prediction network to obtain endpoint pixel prediction information; determining a first network loss according to the difference between the lane line area prediction information and the first label information, and adjusting network parameters of the feature extraction network and network parameters of the area prediction network according to the first network loss; and determining a second network loss according to the difference between the endpoint pixel prediction information and the second label information, and adjusting network parameters of the endpoint prediction network and the network parameters of the feature extraction network according to the second network loss.
- 26. The device according to claim 25, wherein the endpoint pixels marked by the second label information in the road sample image include: pixels of actual endpoints of the sample dashed lane line and pixels adjacent to the pixels of the actual endpoints.
- 27. The device according to any one of claims 20 to 23, further comprising: a positioning correction module configured to correct positioning information of an intelligent vehicle on the road corresponding to the road image according to the determined endpoints of the dashed lane line.
- 28. The device according to claim 27, wherein the positioning correction module is configured to: determine a first distance through an image ranging method according to the determined endpoint coordinates of the dashed lane line, the first distance representing a distance between a target endpoint of the determined dashed lane line and the intelligent vehicle; determine a second distance according to the intelligent vehicle's own positioning information and the longitude and latitude of the target endpoint in a driving assistance map used by the intelligent vehicle, the second distance representing a distance between the target endpoint and the intelligent vehicle determined according to the driving assistance map; and correct the positioning information of the intelligent vehicle according to an error between the first distance and the second distance.
- 29. An electronic device, comprising: a processor; and a memory configured to store instructions executable by the processor to implement the method according to any one of claims 1 to 14.
- 30. A computer-readable storage medium on which a computer program is stored, wherein the computer program is executable by a processor to implement the method according to any one of claims 1 to 14.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021571821A JP2022535839A (ja) | 2019-09-30 | 2020-09-23 | Dashed lane line detection method, device, and electronic apparatus |
KR1020217031171A KR20210130222A (ko) | 2019-09-30 | 2020-09-23 | Dashed lane line detection method, device, and electronic apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910944245.2A CN110688971B (zh) | 2019-09-30 | 2019-09-30 | Detection method, apparatus, and device for dashed lane lines |
CN201910944245.2 | 2019-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021063228A1 true WO2021063228A1 (zh) | 2021-04-08 |
Family
ID=69111427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/117188 WO2021063228A1 (zh) | 2019-09-30 | 2020-09-23 | Detection method, apparatus, and electronic device for dashed lane lines |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2022535839A (zh) |
KR (1) | KR20210130222A (zh) |
CN (1) | CN110688971B (zh) |
WO (1) | WO2021063228A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113656529A (zh) * | 2021-09-16 | 2021-11-16 | 北京百度网讯科技有限公司 | Method, apparatus, and electronic device for determining road accuracy |
CN114136327A (zh) * | 2021-11-22 | 2022-03-04 | 武汉中海庭数据技术有限公司 | Automated checking method and system for the recall rate of dashed line segments |
CN114782549A (zh) * | 2022-04-22 | 2022-07-22 | 南京新远见智能科技有限公司 | Camera calibration method and system based on fixed-point markers |
CN115082888A (zh) * | 2022-08-18 | 2022-09-20 | 北京轻舟智航智能技术有限公司 | Lane line detection method and device |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110688971B (zh) * | 2019-09-30 | 2022-06-24 | 上海商汤临港智能科技有限公司 | Detection method, apparatus, and device for dashed lane lines |
CN111291681B (zh) * | 2020-02-07 | 2023-10-20 | 北京百度网讯科技有限公司 | Method, apparatus, and device for detecting lane line change information |
CN111460073B (zh) * | 2020-04-01 | 2023-10-20 | 北京百度网讯科技有限公司 | Lane line detection method, apparatus, device, and storage medium |
CN111707277B (zh) * | 2020-05-22 | 2022-01-04 | 上海商汤临港智能科技有限公司 | Method, apparatus, and medium for acquiring road semantic information |
CN112434591B (zh) * | 2020-11-19 | 2022-06-17 | 腾讯科技(深圳)有限公司 | Lane line determination method and apparatus |
CN112528864A (zh) * | 2020-12-14 | 2021-03-19 | 北京百度网讯科技有限公司 | Model generation method, apparatus, electronic device, and storage medium |
CN113739811A (zh) * | 2021-09-03 | 2021-12-03 | 阿波罗智能技术(北京)有限公司 | Method and device for training a keypoint detection model and generating high-precision map lane lines |
CN116994145A (zh) * | 2023-09-05 | 2023-11-03 | 腾讯科技(深圳)有限公司 | Method, apparatus, storage medium, and computer device for identifying lane line change points |
Worldwide Applications
- 2019-09-30: CN application CN201910944245.2A, granted as CN110688971B (Active)
- 2020-09-23: JP application JP2021571821A, published as JP2022535839A (Withdrawn)
- 2020-09-23: KR application KR1020217031171A, published as KR20210130222A (Application Discontinuation)
- 2020-09-23: WO application PCT/CN2020/117188, published as WO2021063228A1 (Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160012300A1 (en) * | 2014-07-11 | 2016-01-14 | Denso Corporation | Lane boundary line recognition device |
CN108090401A (zh) * | 2016-11-23 | 2018-05-29 | 株式会社理光 | Line detection method and line detection device |
CN109960959A (zh) * | 2017-12-14 | 2019-07-02 | 百度在线网络技术(北京)有限公司 | Method and apparatus for processing images |
CN109583393A (zh) * | 2018-12-05 | 2019-04-05 | 宽凳(北京)科技有限公司 | Lane line endpoint recognition method and apparatus, device, and medium |
CN110688971A (zh) * | 2019-09-30 | 2020-01-14 | 上海商汤临港智能科技有限公司 | Dashed lane line detection method, apparatus and device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113656529A (zh) * | 2021-09-16 | 2021-11-16 | 北京百度网讯科技有限公司 | Road accuracy determination method and apparatus, and electronic device |
CN114136327A (zh) * | 2021-11-22 | 2022-03-04 | 武汉中海庭数据技术有限公司 | Automated method and system for checking the recall rate of dashed lane segments |
CN114782549A (zh) * | 2022-04-22 | 2022-07-22 | 南京新远见智能科技有限公司 | Camera calibration method and system based on fixed-point markers |
CN114782549B (zh) * | 2022-04-22 | 2023-11-24 | 南京新远见智能科技有限公司 | Camera calibration method and system based on fixed-point markers |
CN115082888A (zh) * | 2022-08-18 | 2022-09-20 | 北京轻舟智航智能技术有限公司 | Lane line detection method and apparatus |
CN115082888B (zh) * | 2022-08-18 | 2022-10-25 | 北京轻舟智航智能技术有限公司 | Lane line detection method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN110688971A (zh) | 2020-01-14 |
KR20210130222A (ko) | 2021-10-29 |
JP2022535839A (ja) | 2022-08-10 |
CN110688971B (zh) | 2022-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021063228A1 (zh) | Dashed lane line detection method, apparatus and electronic device | |
US10605606B2 (en) | Vision-aided aerial navigation | |
WO2022083402A1 (zh) | Obstacle detection method, apparatus, computer device and storage medium | |
CN107703528B (zh) | Visual positioning method and system combining low-precision GPS in autonomous driving | |
US10030969B2 (en) | Road curvature detection device | |
KR20190090393A (ko) | Lane determination method, device and storage medium | |
EP4152204A1 (en) | Lane line detection method, and related apparatus | |
US10679077B2 (en) | Road marking recognition device | |
KR102157810B1 (ko) | Map matching apparatus and method for a navigation system | |
CN112699708A (zh) | Method and apparatus for generating a lane-level topology network | |
EP3690728A1 (en) | Method and device for detecting parking area using semantic segmentation in automatic parking system | |
CN107977654B (zh) | Road area detection method, apparatus and terminal | |
JP5742558B2 (ja) | Position determination device, navigation device, position determination method, and program | |
CN111539907A (zh) | Image processing method and apparatus for object detection | |
CN111062971B (zh) | Multimodal deep-learning-based cross-camera muck truck tracking method | |
KR20200095888A (ko) | Situation awareness method and apparatus for an unmanned ship system | |
US20200340816A1 (en) | Hybrid positioning system with scene detection | |
CN113112524B (zh) | Trajectory prediction method, apparatus and computing device for moving objects in autonomous driving | |
CN114419165B (zh) | Camera extrinsic parameter correction method and apparatus, electronic device, and storage medium | |
JP2020026985A (ja) | Vehicle position estimation device and program | |
CN107844749B (zh) | Road surface detection method and apparatus, electronic device, and storage medium | |
CN115393655A (zh) | Industrial carrier vehicle detection method based on the YOLOv5s network model | |
TW202340752A (zh) | Boundary estimation | |
CN115249407B (zh) | Indicator light state recognition method and apparatus, electronic device, storage medium, and product |
US11669998B2 (en) | Method and system for learning a neural network to determine a pose of a vehicle in an environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20872929 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217031171 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021571821 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20872929 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.10.2022) |