CN115049995B - Lane line detection method and device, electronic equipment and storage medium


Info

Publication number
CN115049995B
Authority
CN
China
Prior art keywords
point
lane line
current
points
adjacent
Prior art date
Legal status
Active
Application number
CN202210161798.2A
Other languages
Chinese (zh)
Other versions
CN115049995A (en)
Inventor
谢术富
李浩
王军
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202210161798.2A
Publication of CN115049995A
Application granted
Publication of CN115049995B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/766 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a lane line detection method and device, an electronic device, and a storage medium, and relates to the technical field of artificial intelligence, in particular to the fields of neural networks and automatic driving. The specific implementation scheme is as follows: an original image is input into a trained network model, and the lane line features corresponding to the original image are output through the network model, where the lane line features include a classification feature map, a current line offset regression feature map, and an adjacent line offset regression feature map; the current lane line point and the adjacent lane line point corresponding to each pixel position are calculated based on the lane line features; and the lane lines in the original image are detected based on the current lane line point and the adjacent lane line point corresponding to each pixel position. The method and device can greatly improve the detection accuracy of lane lines, simplify the grouping strategy of lane lines, accurately detect lane line information, and provide a guarantee for the normal advancement of highway pilot-assisted driving projects.

Description

Lane line detection method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, further to neural network and automatic driving technologies, and in particular to a lane line detection method and device, an electronic device, and a storage medium.
Background
Lane lines are important constituent elements of roads and play a very important role in automatic driving. In a highway pilot-assisted driving (Apollo Navigator Pilot, ANP) project, lane keeping and lane changing of the ego vehicle are controlled according to the visual lane line results; in an urban ANP project, matching the visual lane lines against a high-precision map can provide accurate positioning information and ensure the normal driving of the vehicle.
Existing lane line detection schemes generally fall into two categories. The first category detects lane line points on an image with a deep learning model and then groups and outputs the lane line points based on a post-processing strategy; this scheme makes no strong assumptions about lane lines and generalizes well, but its post-processing strategy is tedious to tune and generalizes poorly when handling complex scenes (such as bifurcations and intersections). The second category encodes lane line points according to their positions, and the model directly outputs the grouping information of the lane line points; this scheme makes strong assumptions about the lane structure, and the model must give reasonable category information for lane line detection under bifurcations, intersections, complex lanes and the like, so its generality is poor.
Disclosure of Invention
The disclosure provides a lane line detection method, a lane line detection device, electronic equipment and a storage medium.
In a first aspect, the present application provides a lane line detection method, the method including:
inputting an original image into a trained network model, and outputting lane line characteristics corresponding to the original image through the network model; wherein the lane line feature comprises: classifying a feature map, a current line offset regression feature map and an adjacent line offset regression feature map;
calculating a current lane line point and an adjacent lane line point corresponding to each pixel position based on the lane line characteristics;
and detecting the lane lines in the original image based on the current lane line point and the adjacent lane line points corresponding to the pixel positions.
In a second aspect, the present application provides a lane line detection apparatus, the apparatus comprising: a feature extraction module, a calculation module, and a detection module; wherein:
the feature extraction module is used for inputting an original image into a trained network model, and outputting lane line features corresponding to the original image through the network model; wherein the lane line feature comprises: classifying a feature map, a current line offset regression feature map and an adjacent line offset regression feature map;
the calculation module is used for calculating a current lane line point and an adjacent lane line point corresponding to each pixel position based on the lane line features;
the detection module is used for detecting the lane lines in the original image based on the current lane line point and the adjacent lane line points corresponding to the pixel positions.
In a third aspect, an embodiment of the present application provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the lane line detection method described in any embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a storage medium having stored thereon a computer program that, when executed by a processor, implements the lane line detection method described in any embodiment of the present application.
In a fifth aspect, a computer program product is provided, which when executed by a computer device implements the lane line detection method according to any embodiment of the present application.
The present application solves the technical problems in the prior art that the post-processing strategy of the first category of schemes is tedious to tune and generalizes poorly when handling complex scenes, and that the second category of schemes makes strong assumptions about the lane structure and requires the model to give reasonable category information for lane line detection under bifurcations, intersections, complex lanes and the like, resulting in poor generality. The technical scheme of the present application can greatly improve the detection accuracy of lane lines, simplify the grouping strategy of lane lines, accurately detect lane line information, and provide a guarantee for the normal advancement of highway pilot-assisted driving projects.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a lane line training sample according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a current row offset and an adjacent row offset provided by an embodiment of the present application;
fig. 4 is a second flow chart of the lane line detection method according to the embodiment of the present application;
fig. 5 is a third flow chart of the lane line detection method according to the embodiment of the present application;
fig. 6 is a schematic diagram of a lane line extraction method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device for implementing the lane line detection method of the embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example 1
Fig. 1 is a schematic flow chart of a lane line detection method provided in an embodiment of the present application, where the method may be performed by a lane line detection apparatus or an electronic device, and the apparatus or the electronic device may be implemented by software and/or hardware, and the apparatus or the electronic device may be integrated into any intelligent device having a network communication function. As shown in fig. 1, the lane line detection method may include the steps of:
s101, inputting an original image into a trained network model, and outputting lane line characteristics corresponding to the original image through the network model; wherein, lane line characteristics include: classification feature map, current line offset regression feature map and adjacent line offset regression feature map.
In this step, the electronic device may input the original image into a trained network model, through which the lane line features corresponding to the original image are obtained; the lane line features include: a classification feature map, a current line offset regression feature map, and an adjacent line offset regression feature map. Specifically, given an input image I, the image is cropped according to a set region of interest (ROI) and then resized to the network input size; the lane line outputs corresponding to the input image are then obtained through network model inference: the classification feature map Cls(I), the current line offset regression feature map Offset_cur(I), and the adjacent line offset regression feature map Offset_nb(I).
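To make this step concrete, the following Python sketch shows one plausible way to crop the ROI, resize, and obtain the three output maps. The model contract (three output heads), the ROI coordinates, and the input size are hypothetical placeholders, not values specified by this application.

```python
import cv2
import torch

# Hypothetical placeholders: the actual ROI and network input size are
# design choices, not values fixed by this application.
NET_INPUT_HW = (288, 800)    # (height, width) the network expects
ROI = (0, 400, 1920, 1080)   # (x0, y0, x1, y1) region of interest

def infer_lane_features(model: torch.nn.Module, image_bgr):
    """Crop the set ROI, resize to the network input size, and run one
    forward pass, returning the three output maps described above:
    Cls(I), Offset_cur(I), and Offset_nb(I)."""
    x0, y0, x1, y1 = ROI
    roi = image_bgr[y0:y1, x0:x1]
    roi = cv2.resize(roi, (NET_INPUT_HW[1], NET_INPUT_HW[0]))
    inp = torch.from_numpy(roi).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        # Assumed model contract: three heads at 1/8 or 1/4 of the
        # input resolution (see the training description below).
        cls_map, offset_cur, offset_nb = model(inp)
    return cls_map, offset_cur, offset_nb
```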
S102, calculating a current lane line point and an adjacent lane line point corresponding to each pixel position based on the lane line characteristics.
In this step, the electronic device may calculate the current lane line point and the adjacent lane line point corresponding to each pixel position based on the lane line features. Specifically, the electronic device may first extract a feature point from all feature points with confidence degrees greater than a first threshold in the classification feature map as a current feature point; then, calculating a current lane line point projected by the current feature point according to the pixel position of the current feature point and the current line offset regression feature map; calculating adjacent lane line points projected by the current feature points according to the pixel positions of the current feature points and the adjacent line offset regression feature map; and repeatedly executing the operation until the current lane line point and the adjacent lane line point corresponding to each pixel position are calculated. Furthermore, when the electronic device calculates the current lane line point projected by the current feature point, the electronic device may extract the current line offset regression feature point corresponding to the current feature point from the current line offset regression feature map; and then calculating the current lane line point projected by the current feature point based on the pixel position of the current feature point and the pixel position of the current line offset regression feature point. Similarly, when the electronic device calculates the adjacent lane line point projected by the current feature point, the adjacent line offset regression feature point corresponding to the current feature point can be extracted from the adjacent line offset regression feature map; and then, calculating the adjacent lane line points projected by the current feature points based on the pixel positions of the current feature points and the pixel positions of the adjacent line offset regression feature points.
And S103, detecting the lane lines in the original image based on the current lane line point and the adjacent lane line points corresponding to the pixel positions.
In this step, the electronic device may detect the lane lines in the original image based on the current lane line point and the adjacent lane line point corresponding to each pixel position. Specifically, the electronic device may first select one of the point pairs corresponding to the pixel positions as a seed point pair, where a point pair includes a current lane line point and an adjacent lane line point; then, on the image row where the adjacent lane line point of the seed point pair is located, calculate the distances between the starting points of all point pairs and the adjacent lane line point of the seed point pair; determine, among all adjacent lane line points, the target point pair corresponding to the seed point pair based on these distances; and merge the seed point pair with the target point pair, take the target point pair as the new seed point pair, and repeat the above operations until all point pairs are processed.
Preferably, before detecting the lane lines in the original image based on the current lane line point and the adjacent lane line point corresponding to each pixel position, the electronic device may further filter out, from the point pairs formed by the current lane line points and the adjacent lane line points, the point pairs whose vote values are lower than a second threshold, and then perform the operation of detecting the lane lines in the original image on the remaining point pairs.
In a specific embodiment of the present application, the network model may be trained before the original image is input into the trained network model. That is, the technical scheme of the present application may include two parts: model training and lane line detection using the model. In the model training process, if the network model does not meet a preset convergence condition, the electronic device may first extract a sampling point from pre-constructed sampling points as the current sample, where the sampling points include positive example sampling points and negative example sampling points; the network model is then trained based on the current sample and a pre-constructed loss function until the network model meets the convergence condition, where the loss function includes a classification loss function, a current line offset loss function, and an adjacent line offset loss function. Fig. 2 is a schematic structural diagram of a lane line training sample according to an embodiment of the present application. As shown in fig. 2, the open circles represent lane line marking points, the black circles represent positive example sampling points, and the shaded circles represent negative example sampling points. The lane line marking points are points sampled on the lane line; the positive example sampling points are points sampled on the left and right sides (in the same row) of a lane line point; the negative example sampling points are points randomly sampled from the remaining positions after all positive example sampling points have been sampled. Specifically, the model training process provided in the embodiment of the present application is as follows: 1) Prepare training data: sample positive and negative examples on the image according to the annotated lane line information and a set window size; positive example sample points are sampled on the left and right sides (in the same row) of each lane line point, and after all positive example sample points are sampled, random samples from the remaining positions are taken as negative examples. 2) Design the network: the network is a one-stage network similar to those used for image segmentation (e.g., U-Net, ResNet-34), but to reduce network computation, the present application may set the size of the network output feature map to 1/8 or 1/4 of the size of the network input. The loss function of the network model may include two parts: a classification loss and a regression loss. The classification loss adopts cross-entropy loss over two classes (0: negative sampling point; 1: lane line marking point); the regression loss adopts the common Smooth-L1 loss and includes two parts: a current row offset (dx_cur) loss and an adjacent row offset (dx_nb) loss. 3) Start model training with the prepared training data and training framework until the number of model iterations reaches a preset threshold or the training loss falls below a threshold.
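As an illustration of the loss design described above, a minimal PyTorch-style sketch follows; the tensor shapes, loss weights, and the positive-only regression masking are assumptions for illustration, not details fixed by this application.

```python
import torch
import torch.nn.functional as F

def lane_loss(cls_logits, offset_cur_pred, offset_nb_pred,
              cls_target, offset_cur_target, offset_nb_target,
              w_cls=1.0, w_reg=1.0):
    """Combined loss as described above: cross entropy over two classes
    (0: negative sampling point, 1: lane line marking point) plus
    Smooth-L1 losses on the dx_cur and dx_nb offsets.

    Assumed shapes: cls_logits (N, 2, H, W); cls_target (N, H, W), long;
    offset maps (N, 1, H, W). Supervising the regression only at
    positive positions is one common masking convention, assumed here.
    """
    loss_cls = F.cross_entropy(cls_logits, cls_target)

    pos = (cls_target == 1).unsqueeze(1).float()   # regress positives only
    n_pos = pos.sum().clamp(min=1.0)
    loss_cur = (F.smooth_l1_loss(offset_cur_pred, offset_cur_target,
                                 reduction="none") * pos).sum() / n_pos
    loss_nb = (F.smooth_l1_loss(offset_nb_pred, offset_nb_target,
                                reduction="none") * pos).sum() / n_pos
    return w_cls * loss_cls + w_reg * (loss_cur + loss_nb)
```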
Fig. 3 is a schematic diagram of the current row offset and the adjacent row offset provided in an embodiment of the present application. As shown in fig. 3, the open circles represent lane line marking points and the black circles represent positive example sampling points. The current row offset (dx_cur) represents the distance between a positive example sampling point and the lane line in the current row; the adjacent row offset (dx_nb) represents the distance between a positive example sampling point and the lane line in the adjacent row.
According to the lane line detection method provided by this embodiment, an original image is input into a trained network model, and the lane line features corresponding to the original image are output through the network model, the lane line features including a classification feature map, a current line offset regression feature map, and an adjacent line offset regression feature map; the current lane line point and the adjacent lane line point corresponding to each pixel position are then calculated based on the lane line features; and the lane lines in the original image are detected based on the current lane line point and the adjacent lane line point corresponding to each pixel position. That is, on the basis of the network model, the present application can output the information of two adjacent points on the same lane line; by using the information of the two adjacent points, the grouping strategy of the lane lines is greatly simplified, and the detection accuracy of the lane lines is greatly improved through data-driven model iteration. In existing lane line detection methods, a deep learning model is used to detect lane line points on an image, and the lane line points are then grouped and output based on a post-processing strategy; or the lane line points are encoded according to their positions, and the model directly outputs the grouping information of the lane line points. Because the present application calculates the current lane line point and the adjacent lane line point corresponding to each pixel position based on the classification feature map, the current line offset regression feature map, and the adjacent line offset regression feature map, and detects the lane lines in the original image based on these point pairs, it overcomes the technical problems that the post-processing strategy of the first category of schemes in the prior art is tedious to tune and generalizes poorly in complex scenes, and that the second category of schemes makes strong assumptions about the lane structure and requires the model to give reasonable category information for lane line detection under bifurcations, intersections, complex lanes and the like. Therefore, the technical scheme provided by the present application can greatly improve the detection accuracy of lane lines, simplify the grouping strategy of lane lines, accurately detect lane line information, and provide a guarantee for the normal advancement of highway pilot-assisted driving projects; in addition, the technical scheme of the embodiment of the present application is simple to implement, easy to popularize, and applicable to a wide range of scenarios.
Example 2
Fig. 4 is a second schematic flow chart of the lane line detection method according to an embodiment of the present application. This embodiment further optimizes and expands the above technical scheme and may be combined with the various alternative embodiments above. As shown in fig. 4, the lane line detection method may include the following steps:
s401, inputting an original image into a trained network model, and outputting lane line characteristics corresponding to the original image through the network model; wherein, lane line characteristics include: classification feature map, current line offset regression feature map and adjacent line offset regression feature map.
S402, extracting a feature point from all feature points with confidence coefficient larger than a first threshold value in the classification feature map as a current feature point.
In this step, the electronic device may extract, as the current feature point, one feature point from all feature points in the classification feature map whose confidence degrees are greater than the first threshold. Specifically, the classification feature map, the current line offset regression feature map and the adjacent line offset regression feature map have the same size; each point in the above feature map may represent a possible feature point.
S403, calculating a current lane line point projected by the current feature point according to the pixel position of the current feature point and the current line offset regression feature map; calculating adjacent lane line points projected by the current feature points according to the pixel positions of the current feature points and the adjacent line offset regression feature map; and repeatedly executing the operation until the current lane line point and the adjacent lane line point corresponding to each pixel position are calculated.
In the step, the electronic device can calculate the current lane line point projected by the current feature point according to the pixel position of the current feature point and the current line offset regression feature map; calculating adjacent lane line points projected by the current feature points according to the pixel positions of the current feature points and the adjacent line offset regression feature map; and repeatedly executing the operation until the current lane line point and the adjacent lane line point corresponding to each pixel position are calculated. Specifically, when calculating the current lane line point projected by the current feature point, the electronic device may extract the current line offset regression feature point corresponding to the current feature point from the current line offset regression feature map; and then calculating the current lane line point projected by the current feature point based on the pixel position of the current feature point and the pixel position of the current line offset regression feature point. Similarly, when the electronic device calculates the adjacent lane line point projected by the current feature point, the adjacent line offset regression feature point corresponding to the current feature point can be extracted from the adjacent line offset regression feature map; and then, calculating the adjacent lane line points projected by the current feature points based on the pixel positions of the current feature points and the pixel positions of the adjacent line offset regression feature points.
Specifically, the electronic device may calculate the current lane line point projected by the current feature point according to the following formulas: x_cur = x - offset_cur(x, y), y_cur = y, where (x, y) is the pixel position of the current feature point and offset_cur(x, y) is the offset of the current feature point from the current lane line in the x-axis direction. In addition, the electronic device may calculate the adjacent lane line point projected by the current feature point according to the following formulas: x_nb = x - offset_nb(x, y), y_nb = y - 1, where offset_nb(x, y) is the offset of the current feature point from the adjacent lane line in the x-axis direction.
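Applied at every confident position, the formulas above decode two projected points per pixel. The following NumPy sketch, which also accumulates the vote count used later for filtering, is an illustrative reading of this step rather than the application's reference implementation.

```python
import numpy as np
from collections import defaultdict

def decode_point_pairs(cls_map, offset_cur, offset_nb, t1=0.5):
    """At every position whose classification confidence exceeds t1,
    apply x_cur = x - offset_cur(x, y), y_cur = y and
    x_nb = x - offset_nb(x, y), y_nb = y - 1, and accumulate a vote
    count for each projected current lane line point."""
    pairs = []                       # ((x_cur, y_cur), (x_nb, y_nb))
    votes = defaultdict(int)         # vote(x_cur, y_cur)
    ys, xs = np.where(cls_map > t1)  # confident positions
    for x, y in zip(xs, ys):
        x_cur = x - offset_cur[y, x]
        y_cur = int(y)
        x_nb = x - offset_nb[y, x]
        y_nb = int(y) - 1
        votes[(int(round(x_cur)), y_cur)] += 1
        pairs.append(((x_cur, y_cur), (x_nb, y_nb)))
    return pairs, votes
```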
S404, detecting the lane lines in the original image based on the current lane line point and the adjacent lane line points corresponding to the pixel positions.
In this step, the electronic device may detect the lane lines in the original image based on the current lane line point and the adjacent lane line point corresponding to each pixel position. Specifically, the electronic device may first select the lowest point pair as the seed point pair, where a point pair includes a current lane line point and an adjacent lane line point; then, on the image row where the adjacent lane line point of the seed point pair is located, calculate the distances between the starting points of all point pairs and the adjacent lane line point of the seed point pair; determine, among all adjacent lane line points, the target point pair corresponding to the seed point pair based on these distances; and merge the seed point pair with the target point pair, take the target point pair as the new seed point pair, and repeat the above operations until all point pairs are processed. Further, when determining the target point pair corresponding to the seed point pair, the electronic device may first select the minimum of the distances between the starting points of all point pairs and the adjacent lane line point of the seed point pair; if this minimum distance is smaller than a preset threshold, the electronic device may determine the point pair formed by the lane line point at the minimum distance and its own adjacent lane line point as the target point pair corresponding to the seed point pair.
According to the lane line detection method provided by this embodiment, an original image is input into a trained network model, and the lane line features corresponding to the original image are output through the network model; the current lane line point and the adjacent lane line point corresponding to each pixel position are then calculated based on the lane line features, and the lane lines in the original image are detected based on these point pairs. That is, on the basis of the network model, the present application can output the information of two adjacent points on the same lane line; by using the information of the two adjacent points, the grouping strategy of the lane lines is greatly simplified, and the detection accuracy of the lane lines is greatly improved through data-driven model iteration. This overcomes the technical problems that the post-processing strategy of the first category of schemes in the prior art is tedious to tune and generalizes poorly in complex scenes, and that the second category of schemes makes strong assumptions about the lane structure and requires the model to give reasonable category information for lane line detection under bifurcations, intersections, complex lanes and the like. Therefore, the technical scheme provided by the present application can greatly improve the detection accuracy of lane lines, simplify the grouping strategy of lane lines, accurately detect lane line information, and provide a guarantee for the normal advancement of highway pilot-assisted driving projects; in addition, the technical scheme of the embodiment of the present application is simple to implement, easy to popularize, and applicable to a wide range of scenarios.
Example 3
Fig. 5 is a third schematic flow chart of the lane line detection method according to an embodiment of the present application. This embodiment further optimizes and expands the above technical scheme and may be combined with the various alternative embodiments above. As shown in fig. 5, the lane line detection method may include the following steps:
s501, inputting an original image into a trained network model, and outputting lane line characteristics corresponding to the original image through the network model; wherein, lane line characteristics include: classification feature map, current line offset regression feature map and adjacent line offset regression feature map.
S502, extracting a feature point from all feature points with confidence coefficient larger than a first threshold value in the classification feature map as a current feature point.
S503, calculating a current lane line point projected by the current feature point according to the pixel position of the current feature point and the current line offset regression feature map; calculating adjacent lane line points projected by the current feature points according to the pixel positions of the current feature points and the adjacent line offset regression feature map; and repeatedly executing the operation until the current lane line point and the adjacent lane line point corresponding to each pixel position are calculated.
S504, filtering out the point pairs with the vote value lower than the second threshold value from the point pairs consisting of the current lane line point and the adjacent lane line points.
S505, selecting the lowest point pair from the rest point pairs as a seed point pair; wherein, the point pair includes: a current lane line point and an adjacent lane line point.
In this step, the electronic device may select the lowest point pair among the remaining point pairs as the seed point pair, where a point pair includes a current lane line point and an adjacent lane line point. Specifically, the electronic device can select the point pair with the largest pixel y coordinate value as the seed point pair (p_cur, p_nb).
S506, calculating the distances between the starting points of all the point pairs and the adjacent lane line points in the seed point pair on the image line where the adjacent lane line points in the seed point pair are located.
In this step, the electronic device may calculate, on the image row where the adjacent lane line point of the seed point pair is located, the distances between the starting points of all point pairs and the adjacent lane line point of the seed point pair. Specifically, on the image row where point p_nb is located, the distances between the starting points of all point pairs on that row and point p_nb are calculated, and the minimum distance (dist_min) and the corresponding point pair (p_nb-1, p_nb-2) are selected; if there is no lane line point pair on that row, the search moves to the next row and the same operation is repeated.
S507, determining a target point pair corresponding to the seed point pair in all adjacent lane line points based on the distances between the starting points of all the point pairs and the adjacent lane line points of the seed point pair; combining the seed point pairs with the target point pairs, and taking the target point pairs as new seed point pairs, and repeatedly executing the operation until all the point pairs are processed.
In this step, the electronic device may determine, among all adjacent lane line points, the target point pair corresponding to the seed point pair based on the distances between the starting points of all point pairs and the adjacent lane line point of the seed point pair; merge the seed point pair with the target point pair, take the target point pair as the new seed point pair, and repeat the above operations until all point pairs are processed. Specifically, the electronic device may select the minimum of the distances between the starting points of all point pairs and the adjacent lane line point of the seed point pair; if this minimum distance is smaller than a preset threshold, the point pair formed by the lane line point at the minimum distance and its own adjacent lane line point is determined as the target point pair corresponding to the seed point pair.
In a specific embodiment of the present application, the model-based lane line detection process is as follows: 1) Given an input image I, the image is cropped according to the set ROI region and then resized to the network input size. 2) The lane line outputs corresponding to the input image are obtained through model inference: the classification feature map Cls(I), the current line offset regression feature map Offset_cur(I), and the adjacent line offset regression feature map Offset_nb(I). 3) For the feature points on the classification feature map Cls(I) whose confidence exceeds a set threshold t1, the projected current lane line point (x_cur, y_cur) and adjacent lane line point (x_nb, y_nb) are calculated from the pixel position (x, y) of each feature point and the offset regression feature maps, and the vote value vote(x_cur, y_cur) of the projections onto each current lane line point is recorded: x_cur = x - offset_cur(x, y), y_cur = y, where (x, y) is the pixel position of the current feature point and offset_cur(x, y) is the offset of the current feature point from the current lane line in the x-axis direction; x_nb = x - offset_nb(x, y), y_nb = y - 1, where offset_nb(x, y) is the offset of the current feature point from the adjacent lane line in the x-axis direction. 4) For the point pairs (x_cur, y_cur), (x_nb, y_nb) resolved by the model, point pairs whose vote values are lower than the set voting threshold t2 are filtered out. 5) Lane lines are extracted from the retained point pair set using a region-growing-like scheme, as sketched below.
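Putting steps 1) to 5) together, a hedged end-to-end sketch might look as follows. It reuses the hypothetical helpers sketched in the earlier embodiments; `extract_lane_lines` is sketched after the next paragraph, and all threshold values are illustrative assumptions.

```python
import torch

def detect_lane_lines(model, image_bgr, t1=0.5, t2=3, t3=4.0, t4=20):
    """End-to-end sketch of steps 1) to 5) above, under the assumed
    head shapes: cls (1, 2, H, W) logits; offsets (1, 1, H, W)."""
    # 1)-2) ROI crop, resize, and inference
    cls_map, offset_cur, offset_nb = infer_lane_features(model, image_bgr)
    conf = torch.softmax(cls_map, dim=1)[0, 1].numpy()   # P(lane point)
    # 3) decode point pairs and accumulate votes
    pairs, votes = decode_point_pairs(conf,
                                      offset_cur[0, 0].numpy(),
                                      offset_nb[0, 0].numpy(), t1=t1)
    # 4) drop point pairs whose vote value is below the voting threshold t2
    kept = [p for p in pairs
            if votes[(int(round(p[0][0])), int(p[0][1]))] >= t2]
    # 5) region-growing-like grouping, then length filtering with t4
    lanes = extract_lane_lines(kept, t3=t3)
    return [lane for lane in lanes if len(lane) >= t4]
```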
Fig. 6 is a schematic diagram of a lane line extraction method according to an embodiment of the present application. As shown in fig. 6, for the retained point pair set, the method for extracting lane lines using the region-growing-like scheme may include the following steps: a) Select the point pair with the largest pixel y coordinate value as the seed point pair (P_cur, P_nb). b) On the image row where point P_nb is located, calculate the distances between the starting points of all point pairs on that row and P_nb, and select the minimum distance dist_min and the corresponding point pair (P_nb-1, P_nb-2); if there is no lane line point pair on that row, move to the next row and repeat the same operation. c) If dist_min is less than the threshold t3, merge (P_cur, P_nb) with (P_nb-1, P_nb-2) and take (P_nb-1, P_nb-2) as the new seed point pair, then jump to b) and continue until the top of the image is reached, completing the extraction of one lane line. d) Repeat a) to c) for the remaining lane line point pairs until all point pairs are processed. e) Set a length threshold t4 for the extracted lane lines and filter out lane lines that are too short.
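A minimal Python sketch of steps a) to e) follows, assuming the point-pair representation produced by the decode sketch above; it illustrates the region-growing-like grouping and is not a reference implementation.

```python
def extract_lane_lines(pairs, t3=4.0):
    """Region-growing-like grouping of retained point pairs, following
    steps a) to e) above. Each pair is ((x_cur, y_cur), (x_nb, y_nb)),
    with y_nb one row above y_cur, as produced by the decode sketch."""
    remaining = list(pairs)
    lanes = []
    while remaining:                                 # d) until all pairs used
        # a) seed: the pair with the largest y coordinate (lowest in image)
        seed = max(remaining, key=lambda p: p[0][1])
        remaining.remove(seed)
        lane = [seed[0]]
        cur = seed
        while True:
            nb_x, nb_y = cur[1]
            # b) look for pairs starting on the row of the adjacent point;
            #    if that row holds none, move up one row and keep searching
            row = []
            while nb_y >= 0 and not row:
                row = [p for p in remaining if p[0][1] == nb_y]
                if not row:
                    nb_y -= 1
            if not row:
                break                                # searched past image top
            best = min(row, key=lambda p: abs(p[0][0] - nb_x))
            if abs(best[0][0] - nb_x) >= t3:
                break                                # c) dist_min not under t3
            remaining.remove(best)                   # c) merge into the lane
            lane.append(best[0])
            cur = best                               # matched pair is new seed
        lanes.append(lane)
    # e) length filtering (threshold t4) is left to the caller
    return lanes
```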
According to the lane line detection method provided by this embodiment, an original image is input into a trained network model, and the lane line features corresponding to the original image are output through the network model; the current lane line point and the adjacent lane line point corresponding to each pixel position are then calculated based on the lane line features, and the lane lines in the original image are detected based on these point pairs. That is, on the basis of the network model, the present application can output the information of two adjacent points on the same lane line; by using the information of the two adjacent points, the grouping strategy of the lane lines is greatly simplified, and the detection accuracy of the lane lines is greatly improved through data-driven model iteration. This overcomes the technical problems that the post-processing strategy of the first category of schemes in the prior art is tedious to tune and generalizes poorly in complex scenes, and that the second category of schemes makes strong assumptions about the lane structure and requires the model to give reasonable category information for lane line detection under bifurcations, intersections, complex lanes and the like. Therefore, the technical scheme provided by the present application can greatly improve the detection accuracy of lane lines, simplify the grouping strategy of lane lines, accurately detect lane line information, and provide a guarantee for the normal advancement of highway pilot-assisted driving projects; in addition, the technical scheme of the embodiment of the present application is simple to implement, easy to popularize, and applicable to a wide range of scenarios.
Example 4
Fig. 7 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus 700 includes: a feature extraction module 701, a calculation module 702, and a detection module 703; wherein:
the feature extraction module 701 is configured to input an original image into a trained network model, and output lane line features corresponding to the original image through the network model; wherein the lane line feature comprises: classifying a feature map, a current line offset regression feature map and an adjacent line offset regression feature map;
the calculating module 702 is configured to calculate a current lane line point and an adjacent lane line point corresponding to each pixel location based on the lane line feature;
the detection module 703 is configured to detect a lane line in the original image based on a current lane line point and an adjacent lane line point corresponding to each pixel position.
Further, the calculating module 702 is specifically configured to extract a feature point from all feature points with confidence degrees greater than a first threshold in the classification feature map as a current feature point; calculating a current lane line point projected by the current feature point according to the pixel position of the current feature point and the current line offset regression feature map; calculating adjacent lane line points projected by the current feature points according to the pixel positions of the current feature points and the adjacent line offset regression feature map; and repeatedly executing the operation until the current lane line point and the adjacent lane line point corresponding to each pixel position are calculated.
Further, the calculating module 702 is specifically configured to extract a current line offset regression feature point corresponding to the current feature point from the current line offset regression feature map; and calculating a current lane line point projected by the current feature point based on the pixel position of the current feature point and the pixel position of the current line offset regression feature point.
Further, the calculating module 702 is specifically configured to extract, in the adjacent line offset regression feature map, an adjacent line offset regression feature point corresponding to the current feature point; and calculating adjacent lane line points projected by the current feature points based on the pixel positions of the current feature points and the pixel positions of the adjacent line offset regression feature points.
Further, the detection module 703 is further configured to filter out, from the point pairs formed by the current lane line points and the adjacent lane line points, the point pairs whose vote values are lower than the second threshold, and to perform the operation of detecting the lane lines in the original image based on the current lane line point and the adjacent lane line point corresponding to each pixel position on the remaining point pairs.
Further, the detection module 703 is specifically configured to select a lowest point pair as a seed point pair; wherein the point pairs include: a current lane line point and an adjacent lane line point; on the image line where the adjacent lane line points in the seed point pair are located, calculating the distances between the starting points of all the point pairs and the adjacent lane line points in the seed point pair; determining a target point pair corresponding to the seed point pair in all adjacent lane line points based on the distance between the starting points of all the point pairs and the adjacent lane line points of the seed point pair; combining the seed point pair with the target point pair, and taking the target point pair as a new seed point pair, and repeatedly executing the operation until all the point pairs are processed.
Further, the detection module 703 is specifically configured to select the minimum of the distances between the starting points of all point pairs and the adjacent lane line point of the seed point pair; and, if the minimum distance is smaller than a preset threshold, determine the point pair formed by the lane line point at the minimum distance and its own adjacent lane line point as the target point pair corresponding to the seed point pair.
Further, the device further comprises: the training module is used for extracting a sampling point from the pre-constructed sampling points to serve as a current sample if the network model does not meet a preset convergence condition; wherein, the sampling point includes: positive example sampling points and negative example sampling points; training the network model based on the current sample and a pre-constructed loss function until the network model meets the convergence condition; wherein the loss function comprises: the classification loss function, the current line offset loss function, and the adjacent line offset loss function.
The lane line detection apparatus can execute the method provided by any embodiment of the present application and has the functional modules and beneficial effects corresponding to the executed method. Technical details not described in detail in this embodiment may be found in the lane line detection method provided in any embodiment of the present application.
Example 5
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 801 performs the respective methods and processes described above, such as a lane line detection method. For example, in some embodiments, the lane line detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the lane line detection method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the lane line detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above can be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computer system may include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system that remedies the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein. In the technical solutions of the present disclosure, the collection, storage, and use of any user personal information involved comply with the relevant laws and regulations and do not violate public order and good customs.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (16)

1. A lane line detection method, the method comprising:
inputting an original image into a trained network model, and outputting lane line features corresponding to the original image through the network model; wherein the lane line features comprise: a classification feature map, a current line offset regression feature map, and an adjacent line offset regression feature map;
extracting, from all feature points in the classification feature map whose confidence is greater than a first threshold, one feature point as a current feature point;
calculating a current lane line point projected by the current feature point according to the pixel position of the current feature point and the current line offset regression feature map; calculating an adjacent lane line point projected by the current feature point according to the pixel position of the current feature point and the adjacent line offset regression feature map; and repeatedly executing these operations until the current lane line point and the adjacent lane line point corresponding to each pixel position have been calculated;
and detecting the lane lines in the original image based on the current lane line points and the adjacent lane line points corresponding to the respective pixel positions.
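By way of illustration only, and not as part of the claims: the decoding step of claim 1 can be read as projecting every sufficiently confident pixel onto its own lane line and onto the adjacent lane line. The Python sketch below follows that reading; the map shapes, the treatment of the regressed offsets as horizontal pixel shifts, and all names (decode_point_pairs, conf_thresh) are assumptions of this sketch, not definitions taken from the patent.

    import numpy as np

    def decode_point_pairs(cls_map, cur_offset_map, adj_offset_map, conf_thresh=0.5):
        """Decode (current, adjacent) lane line point pairs from the three maps.

        cls_map        -- (H, W) per-pixel lane line confidence
        cur_offset_map -- (H, W) regressed offset to the pixel's own lane line
        adj_offset_map -- (H, W) regressed offset to the adjacent lane line
        """
        pairs = []
        rows, cols = np.where(cls_map > conf_thresh)  # first-threshold filtering
        for r, c in zip(rows, cols):
            cur_pt = (r, c + cur_offset_map[r, c])    # projected current lane line point
            adj_pt = (r, c + adj_offset_map[r, c])    # projected adjacent lane line point
            pairs.append((cur_pt, adj_pt))
        return pairs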
2. The method of claim 1, wherein calculating the current lane line point projected by the current feature point according to the pixel position where the current feature point is located and the current line offset regression feature map comprises:
extracting a current line offset regression feature point corresponding to the current feature point from the current line offset regression feature map;
and calculating a current lane line point projected by the current feature point based on the pixel position of the current feature point and the pixel position of the current line offset regression feature point.
3. The method of claim 1, wherein calculating the adjacent lane line point projected by the current feature point according to the pixel position where the current feature point is located and the adjacent line offset regression feature map comprises:
extracting adjacent line offset regression feature points corresponding to the current feature points from the adjacent line offset regression feature map;
and calculating adjacent lane line points projected by the current feature points based on the pixel positions of the current feature points and the pixel positions of the adjacent line offset regression feature points.
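Illustratively, claims 2 and 3 read as a per-point lookup-and-shift that works identically for either offset map. The helper below, with invented numbers and names, is a hypothetical sketch of that reading.

    def project_point(feature_pt, offset_map):
        # The regression feature point "corresponding to" the current feature
        # point is assumed to be the value stored at the same (row, col) cell,
        # interpreted as a horizontal offset in pixels.
        r, c = feature_pt
        return (r, c + offset_map[r, c])

    # Worked example with invented numbers: a feature point at (120, 64) whose
    # current line offset map stores -3.5 at that cell projects to the current
    # lane line point (120, 60.5); the adjacent map is read the same way.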
4. The method of claim 1, further comprising, prior to detecting a lane line in the original image based on the current lane line point and the adjacent lane line point corresponding to each pixel position:
filtering out the point pairs with a vote value lower than a second threshold from the point pairs formed by the current lane line points and the adjacent lane line points, and performing the operation of detecting the lane lines in the original image based on the current lane line points and the adjacent lane line points corresponding to the respective pixel positions in the remaining point pairs.
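As a sketch of the filtering in claim 4: the claim fixes only that point pairs whose vote value falls below a second threshold are discarded, not how votes are accumulated. One plausible scheme, assumed here, is that projections rounding to the same pixel pair vote for one another.

    from collections import Counter

    def filter_by_votes(pairs, vote_thresh=2):
        """Keep only point pairs whose vote value reaches the second threshold."""
        # Quantize each (current, adjacent) pair to integer pixels so that
        # near-identical projections count as votes for the same pair.
        keys = [(round(cur[0]), round(cur[1]), round(adj[0]), round(adj[1]))
                for cur, adj in pairs]
        votes = Counter(keys)
        return [pair for pair, key in zip(pairs, keys) if votes[key] >= vote_thresh]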
5. The method of claim 4, wherein detecting the lane lines in the original image based on the current lane line points and the adjacent lane line points corresponding to the respective pixel positions comprises:
selecting the lowest point pair as a seed point pair; wherein the point pairs include: a current lane line point and an adjacent lane line point;
on the image line where the adjacent lane line point in the seed point pair is located, calculating the distances between the starting points of all the point pairs and the adjacent lane line point in the seed point pair;
determining a target point pair corresponding to the seed point pair among all adjacent lane line points based on the distances between the starting points of all the point pairs and the adjacent lane line point of the seed point pair; and combining the seed point pair with the target point pair, taking the target point pair as a new seed point pair, and repeatedly executing these operations until all the point pairs are processed.
6. The method of claim 5, wherein determining the target point pair corresponding to the seed point pair among all adjacent lane line points based on the distances between the starting points of all the point pairs and the adjacent lane line point of the seed point pair comprises:
selecting the minimum of the distances between the starting points of all the point pairs and the adjacent lane line point of the seed point pair;
and if the minimum distance is smaller than a preset threshold, determining the adjacent lane line point at that minimum distance, together with the next adjacent lane line point of that point, as the target point pair corresponding to the seed point pair.
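For illustration, claims 5 and 6 describe greedily chaining point pairs into lane lines, seeded from the lowest pair. The sketch below is one simplified reading: "lowest" is taken to mean the largest image row (closest to the camera), the chain advances through the matched pair rather than tracking next adjacent lane line points separately, and all names and thresholds are invented.

    def group_point_pairs(pairs, dist_thresh=5.0):
        """Greedily chain point pairs into lane line groups (simplified)."""
        remaining = sorted(pairs, key=lambda p: p[0][0])  # ascending image row
        groups = []
        while remaining:
            seed = remaining.pop()           # lowest remaining pair seeds a group
            group = [seed]
            while True:
                adj_row = round(seed[1][0])  # image line of the seed's adjacent point
                # candidate pairs whose starting (current) point lies on that line
                cands = [p for p in remaining if round(p[0][0]) == adj_row]
                if not cands:
                    break
                target = min(cands, key=lambda p: abs(p[0][1] - seed[1][1]))
                if abs(target[0][1] - seed[1][1]) >= dist_thresh:
                    break                    # no match below the preset threshold
                remaining.remove(target)     # merge the target into the group
                group.append(target)
                seed = target                # the target becomes the new seed
            groups.append(group)
        return groups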
7. The method of claim 1, wherein, prior to inputting the original image into the trained network model, the method further comprises:
if the network model does not meet a preset convergence condition, extracting a sampling point from preset sampling points as a current sample; wherein the sampling points include: positive example sampling points and negative example sampling points;
training the network model based on the current sample and a pre-constructed loss function until the network model meets the convergence condition; wherein the loss function comprises: the classification loss function, the current line offset loss function, and the adjacent line offset loss function.
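By way of example, claim 7 names three loss terms without fixing their forms. The PyTorch sketch below assumes binary cross-entropy for the classification map and L1 regression on positive sampling points for the two offset maps; the weights and every name are hypothetical.

    import torch
    import torch.nn.functional as F

    def lane_detection_loss(pred_cls, pred_cur, pred_adj,
                            gt_cls, gt_cur, gt_adj, weights=(1.0, 1.0, 1.0)):
        """Classification loss + current line offset loss + adjacent line offset loss."""
        pos = gt_cls > 0.5                   # positive example sampling points
        cls_loss = F.binary_cross_entropy_with_logits(pred_cls, gt_cls)
        if pos.any():
            cur_loss = F.l1_loss(pred_cur[pos], gt_cur[pos])
            adj_loss = F.l1_loss(pred_adj[pos], gt_adj[pos])
        else:                                # batch contains no positives
            cur_loss = pred_cur.sum() * 0.0
            adj_loss = pred_adj.sum() * 0.0
        return weights[0] * cls_loss + weights[1] * cur_loss + weights[2] * adj_loss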
8. A lane line detection apparatus, the apparatus comprising: a feature extraction module, a computing module, and a detection module; wherein
the feature extraction module is used for inputting an original image into a trained network model, and outputting lane line features corresponding to the original image through the network model; wherein the lane line features comprise: a classification feature map, a current line offset regression feature map, and an adjacent line offset regression feature map;
the computing module is used for extracting, from all feature points in the classification feature map whose confidence is greater than the first threshold, one feature point as a current feature point; calculating a current lane line point projected by the current feature point according to the pixel position of the current feature point and the current line offset regression feature map; calculating an adjacent lane line point projected by the current feature point according to the pixel position of the current feature point and the adjacent line offset regression feature map; and repeatedly executing these operations until the current lane line point and the adjacent lane line point corresponding to each pixel position have been calculated;
the detection module is used for detecting the lane lines in the original image based on the current lane line point and the adjacent lane line points corresponding to the pixel positions.
9. The apparatus of claim 8, wherein the computing module is specifically configured to extract, in the current line offset regression feature map, a current line offset regression feature point corresponding to the current feature point; and calculating a current lane line point projected by the current feature point based on the pixel position of the current feature point and the pixel position of the current line offset regression feature point.
10. The apparatus of claim 8, wherein the computing module is specifically configured to extract, in the adjacent line offset regression feature map, an adjacent line offset regression feature point corresponding to the current feature point; and calculating adjacent lane line points projected by the current feature points based on the pixel positions of the current feature points and the pixel positions of the adjacent line offset regression feature points.
11. The apparatus of claim 8, wherein the detection module is further configured to filter out the point pairs with a vote value lower than a second threshold from the point pairs formed by the current lane line points and the adjacent lane line points, and to perform the operation of detecting the lane lines in the original image based on the current lane line points and the adjacent lane line points corresponding to the respective pixel positions in the remaining point pairs.
12. The apparatus of claim 11, wherein the detection module is specifically configured to select the lowest point pair as a seed point pair, wherein the point pairs include: a current lane line point and an adjacent lane line point; calculate, on the image line where the adjacent lane line point in the seed point pair is located, the distances between the starting points of all the point pairs and the adjacent lane line point in the seed point pair; determine a target point pair corresponding to the seed point pair among all adjacent lane line points based on those distances; and combine the seed point pair with the target point pair, take the target point pair as a new seed point pair, and repeat these operations until all the point pairs are processed.
13. The apparatus of claim 12, wherein the detection module is specifically configured to select the minimum of the distances between the starting points of all the point pairs and the adjacent lane line point in the seed point pair; and, if the minimum distance is smaller than a preset threshold, determine the adjacent lane line point at that minimum distance, together with the next adjacent lane line point of that point, as the target point pair corresponding to the seed point pair.
14. The apparatus of claim 8, further comprising: a training module configured to, if the network model does not meet a preset convergence condition, extract a sampling point from pre-constructed sampling points as a current sample, wherein the sampling points include: positive example sampling points and negative example sampling points; and to train the network model based on the current sample and a pre-constructed loss function until the network model meets the convergence condition, wherein the loss function comprises: the classification loss function, the current line offset loss function, and the adjacent line offset loss function.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210161798.2A CN115049995B (en) 2022-02-22 2022-02-22 Lane line detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115049995A CN115049995A (en) 2022-09-13
CN115049995B true CN115049995B (en) 2023-07-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant