CN111310593B - Ultra-fast lane line detection method based on structure perception - Google Patents
Ultra-fast lane line detection method based on structure perception
- Publication number
- CN111310593B (application CN202010065160.XA)
- Authority
- CN
- China
- Prior art keywords
- lane line
- lane
- image
- model
- cls
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an ultra-fast lane line detection method based on structure perception, used for ultra-fast detection of lane lines in lane images. The method comprises the following steps: acquiring a lane image data set for training the lane line detection method and defining the algorithm target; establishing a classification-based lane line prediction model; establishing a lane line structure model; establishing a context global segmentation feature model; training the prediction model based on the above modeling results; and predicting lane lines with the trained learning framework. The method is suitable for lane line detection in complex scenes (complex illumination and occlusion), shows good accuracy and robustness under a variety of difficult conditions, and achieves an extremely high detection speed (more than 300 frames per second).
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to an ultra-fast lane line detection method based on structure perception.
Background
Lane line detection is generally defined as the following problem: detecting the positions of lane lines in a driving video or image. In recent years, autonomous driving technology has gradually matured and been deployed, and lane line detection is regarded as a key problem in the field. The task has three key points. First, when a lane line is largely or completely occluded, its precise position must still be detected at a higher semantic level. Second, the course of the lane line must be inferred from the current road information; for example, at traffic lights, pedestrian crossings and similar road conditions, it is necessary to identify and judge whether a lane line exists ahead. Third, speed: a lane line detection algorithm for autonomous driving must guarantee detection accuracy while meeting real-time requirements.
In view of these points, the invention reformulates the problem and proposes a completely new method. For the first point, the invention holds that lane lines should be detected at a higher semantic level during detection, taking into account the distribution among the lines and the continuous structural characteristics of the lane lines. For the second point, the invention holds that detection must be combined with global information, so that holistic, continuous and global features can be extracted. Conventional methods generally define this task as a segmentation task, i.e., pixel-level classification of the image. Such methods focus on local appearance information and features of the object, but usually ignore global information as well as the structure and distribution of the detected object itself. Finally, in terms of execution speed, conventional methods cannot run in real time, or only at a low frame rate, which does not satisfy the basic efficiency requirement of lane line detection in autonomous driving. The invention meets the speed requirement through a completely new detection scheme with almost no loss of accuracy.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide an ultra-fast lane line detection method based on structure perception. The method is based on a deep neural network and realizes lane line detection with a classification-based detection mechanism. The invention takes the structural information of the lane line into account; by explicitly modeling this structural information and with the assistance of global segmentation features, it achieves stable and robust lane line detection and can cope with various complex scenes, such as complex illumination and severe occlusion. In addition, the designed classification mechanism effectively reduces the computational complexity, so the method runs efficiently.
In order to achieve the purpose, the technical scheme of the invention is as follows:
an ultra-fast lane line detection method based on structure perception comprises the following steps:
S1, acquiring a lane image data set for training a lane line detection method, and defining an algorithm target;
S2, establishing a classification-based lane line prediction model, wherein the establishing process is as follows, S21-S22:
S21, dividing the image to be detected into H × W grids, wherein H is the number of rows and W is the number of columns;
S22, recording the position of the lane line contained in each row of horizontal grids of the image to be detected as its grid abscissa Loc; taking the lane image I_i as the input of the classification-based lane line prediction model f_cls, whose output is f_cls(I_i; θ_cls), the loss of the classification-based lane line prediction model is:
L_cls = CrossEntropy(f_cls(I_i; θ_cls), Loc);
where θ_cls denotes the model parameters of the classification-based lane line prediction model f_cls, and CrossEntropy denotes the cross-entropy loss;
S3, establishing a lane line structure model, wherein the establishing process is as follows, S31-S32:
S31, in the lane line structure model, the lane line positions contained in adjacent rows of horizontal grids of the image to be detected satisfy an approximate straight-line constraint, i.e., the second-order difference of the lane line positions tends to zero:
(Loc_j − Loc_{j+1}) − (Loc_{j+1} − Loc_{j+2}) → 0
where Loc_j denotes the lane line position contained in the j-th row of horizontal grids of the image to be detected;
S32, according to the above second-order difference relation, the loss of the lane line structure model is:
L_str = Σ_j ‖(Loc_j − Loc_{j+1}) − (Loc_{j+1} − Loc_{j+2})‖_1
where ‖·‖_1 denotes the L1 norm;
S4, establishing a context global segmentation feature model, wherein the establishing process is as follows, S41-S42:
S41, generating a lane line segmentation image that is centered on the lane line position and is wider than the lane line;
S42, extracting the high-level feature s_i of the lane image I_i from the lane line prediction model f_cls; establishing a segmentation model f_fcn by a fully convolutional network as the context global segmentation feature model; taking the high-level feature s_i as the input of the segmentation model f_fcn, whose output is f_fcn(s_i; θ_fcn), the loss of the context global segmentation feature model is:
L_seg = CrossEntropy(f_fcn(s_i; θ_fcn), Seg_i)
where Seg_i denotes the lane line segmentation image generated in S41, and θ_fcn denotes the model parameters of the context global segmentation feature model f_fcn;
S5, obtaining a comprehensive loss function by combining the losses of the three models in S2-S4, and training the lane line prediction model by using the lane image data set and the comprehensive loss function;
S6, predicting the lane lines in the image by using the trained lane line prediction model.
Based on the scheme, the steps can be realized in the following modes:
Preferably, in step S1, the lane image data set for lane line detection includes a lane image group {I_i}, i = 1, ..., K, wherein I_i is the i-th image and K is the number of images in the image group;
the algorithm target is defined as: detecting the lane line detection result R of the image to be detected.
Preferably, in step S5, the form of the comprehensive loss function L of the training model is:
L=Lcls+αLstr+βLseg
wherein alpha and beta are balance factors.
Preferably, in step S6, for the trained lane line prediction model, the image I to be detected is input, and the network prediction result f is obtainedcls(I;θcls) Namely, the lane line detection result R is obtained.
Compared with the existing lane line detection method, the ultra-fast lane line detection method based on structure perception has the following beneficial effects:
First, the ultra-fast lane line detection method of the present invention defines a classification-based lane line detection scheme. By converting the detection problem into a classification problem, the computational complexity is effectively reduced and the running speed of the algorithm is greatly increased.
Second, the invention provides a lane line structure model and a context global segmentation feature model, which realize modeling at the lane line structure level and at the global feature level, help the prediction model better learn lane line features at different levels, move beyond the constraint of modeling only a local lane line model, and effectively improve the accuracy of lane line detection.
Finally, the invention obtains a more robust lane detection result through modeling of lane characteristics at different levels (classification level, structure level and global segmentation feature level) and joint optimization and collaborative learning of different tasks (classification task and segmentation task).
The ultra-fast lane line detection method based on structure perception can effectively detect lane line positions of various forms in driving video under complex illumination and occlusion, and has an extremely high running speed (more than 300 frames per second) and good application value.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of the overall structure of the model learning framework of the present invention;
FIG. 3 is an original image in an embodiment;
FIG. 4 is a graph comparing the effects of the present invention with other methods;
fig. 5 is a comparison graph of the significance detection effect of the joint learning framework in the embodiment relative to a single image.
Detailed Description
In order to make the research objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications and equivalents which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Referring to FIG. 1, in a preferred embodiment of the present invention, an ultra-fast lane line detection method based on structure perception comprises the following steps:
S1, acquiring a lane image data set for training a lane line detection method, and defining an algorithm target.
In the present embodiment, the lane image data set includes a lane image group {I_i}, i = 1, ..., K, wherein I_i is the i-th image and K is the number of images in the image group.
The algorithm target is defined as: detecting the lane line detection result R of the image to be detected. If a lane image contains n lane lines, the positions of all lane lines L_1 to L_n in the image need to be detected.
S2, establishing a classification-based lane line prediction model, wherein the establishing process is as follows, S21-S22:
S21, dividing the image to be detected into H × W grids, wherein H is the number of rows and W is the number of columns; each grid column consists of H grids, and each grid row consists of W grids.
S22, for each row of horizontal grids of the image to be detected, the lane line contained in that grid row is the intersection area between the grid row and the lane line, and the lane line position contained in the row is the abscissa Loc of the grid in that intersection area. In this embodiment, the classification-based lane line prediction model is a convolutional neural network, denoted f_cls. When the lane line prediction model extracts features from the image, the visual features of the target object are extracted by multi-layer convolution operations (each convolution layer followed by a ReLU activation function and BatchNorm regularization), in the form
S_k = f(X; θ_k),
where θ_k is the convolution kernel parameter of the k-th layer, X is the original input image, and S_k is the result of the k-th layer convolution operation.
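As a concrete illustration of this multi-layer feature extraction, a minimal PyTorch sketch is given below. It is not part of the patented method as claimed; the choice of a ResNet-18 backbone, the layer grouping and all names are assumptions made only for this example.

```python
import torch.nn as nn
import torchvision

class Backbone(nn.Module):
    """Extracts per-layer convolutional features S_k from an input image X.

    A minimal sketch: a torchvision ResNet-18 is assumed as the convolutional
    network; the description above only requires stacked convolution layers
    with ReLU activation and BatchNorm regularization.
    """
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.stem = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool)
        self.layer1, self.layer2 = resnet.layer1, resnet.layer2
        self.layer3, self.layer4 = resnet.layer3, resnet.layer4

    def forward(self, x):
        x = self.stem(x)
        s1 = self.layer1(x)   # S_1: low-level features
        s2 = self.layer2(s1)  # S_2
        s3 = self.layer3(s2)  # S_3
        s4 = self.layer4(s3)  # S_4: highest-level features
        return [s1, s2, s3, s4]
```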
The lane image I_i is taken as the input of the classification-based lane line prediction model f_cls, and the model output is f_cls(I_i; θ_cls); the loss of the classification-based lane line prediction model is then:
L_cls = CrossEntropy(f_cls(I_i; θ_cls), Loc);
where θ_cls denotes the model parameters of the classification-based lane line prediction model f_cls, and CrossEntropy denotes the cross-entropy loss.
Loc denotes the ground-truth lane line position label, so the cross entropy between the predicted value f_cls(I_i; θ_cls) and the ground truth Loc can be used as the loss constraint of the lane line prediction model.
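For illustration only, this classification loss can be sketched as follows; the tensor layout (batch, W grid columns, H grid rows, n lanes) and the function names are assumptions of the sketch, not notation fixed by the patent.

```python
import torch.nn.functional as F

def classification_loss(logits, loc_labels):
    """L_cls = CrossEntropy(f_cls(I_i; theta_cls), Loc).

    logits:     (B, W, H, n) tensor -- for every grid row and lane, a score over
                the W grid columns (layout assumed for this sketch).
    loc_labels: (B, H, n) tensor of ground-truth grid abscissas Loc.
    """
    B, W, H, n = logits.shape
    logits = logits.reshape(B, W, H * n)   # class dimension = the W grid columns
    labels = loc_labels.reshape(B, H * n)
    return F.cross_entropy(logits, labels)
```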
While the lane line prediction model learns global information, the shape and structure characteristics of the lane lines are further constrained, and a lane line structure model is built to learn the shape and structure of the lane lines. Lane lines in an image generally appear as inclined, nearly straight segments, so for one lane line the difference between the positions Loc in two vertically adjacent grid rows can be computed; in theory this difference is the same for any two adjacent grid rows, and the lane line structure model is established according to this principle, as described below.
S3, establishing a lane line structure model, wherein the establishing process is as follows, S31-S32:
S31, in the lane line structure model, the lane line positions contained in adjacent rows of horizontal grids of the image to be detected satisfy an approximate straight-line constraint, i.e., the second-order difference of the lane line positions tends to zero:
(Loc_j − Loc_{j+1}) − (Loc_{j+1} − Loc_{j+2}) → 0
where Loc_j denotes the lane line position contained in the j-th row of horizontal grids, j = 1, 2, ...;
S32, according to the above second-order difference relation, the loss of the lane line structure model is:
L_str = Σ_j ‖(Loc_j − Loc_{j+1}) − (Loc_{j+1} − Loc_{j+2})‖_1
where ‖·‖_1 denotes the L1 norm;
S4, establishing a context global segmentation feature model, wherein the establishing process is as follows, S41-S42:
S41, for each lane image I_i, generating a lane line segmentation image Seg_i, centered on the lane line position and wider than the lane line itself; the drawn width may generally be set to 16. The segmentation image Seg_i can be annotated in advance on the lane image data set to serve as the ground-truth segmentation label.
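One possible way to generate such a widened segmentation label from annotated lane points is sketched below; the use of OpenCV, the interpretation of 16 as a pixel width, and the per-lane class indices are assumptions of this sketch.

```python
import numpy as np
import cv2

def make_seg_label(image_shape, lanes, width=16):
    """Draws each annotated lane as a thick polyline to form the label Seg_i.

    image_shape: (height, width) of the lane image I_i.
    lanes:       list of lanes, each a list of (x, y) points along the line.
    width:       drawn line width, chosen wider than the real lane marking.
    """
    seg = np.zeros(image_shape, dtype=np.uint8)
    for lane_idx, pts in enumerate(lanes, start=1):   # class 0 is background
        pts = np.asarray(pts, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(seg, [pts], isClosed=False, color=lane_idx, thickness=width)
    return seg
```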
S42, while the convolutional neural network of the lane line prediction model f_cls extracts features, the feature S_k of each layer is output. A segmentation model f_fcn is established by a fully convolutional network as the context global segmentation feature model. In the segmentation model f_fcn, the features S_k of all layers are fused using a fusion strategy to obtain the high-level feature s_i of the lane image I_i. The high-level feature s_i is taken as the input of the segmentation model f_fcn, whose output is f_fcn(s_i; θ_fcn); the loss of the context global segmentation feature model is then:
L_seg = CrossEntropy(f_fcn(s_i; θ_fcn), Seg_i)
where θ_fcn denotes the model parameters of the context global segmentation feature model f_fcn.
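An illustrative sketch of this auxiliary branch is given below: the per-layer features are reduced, upsampled to a common resolution, concatenated and convolved to produce segmentation logits, which are compared with the label Seg_i. The channel counts, the bilinear fusion strategy and the number of classes are assumptions of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegHead(nn.Module):
    """Auxiliary context global segmentation branch f_fcn (used during training)."""

    def __init__(self, in_channels=(64, 128, 256, 512), num_classes=5):
        super().__init__()
        self.reduce = nn.ModuleList(nn.Conv2d(c, 128, 1) for c in in_channels)
        self.fuse = nn.Sequential(
            nn.Conv2d(128 * len(in_channels), 128, 3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 1),
        )

    def forward(self, feats):
        size = feats[0].shape[-2:]
        up = [F.interpolate(r(f), size=size, mode='bilinear', align_corners=False)
              for r, f in zip(self.reduce, feats)]
        return self.fuse(torch.cat(up, dim=1))            # segmentation logits

def segmentation_loss(seg_logits, seg_label):
    """L_seg: cross entropy against the label Seg_i, resized to the logit size."""
    seg_label = F.interpolate(seg_label.unsqueeze(1).float(),
                              size=seg_logits.shape[-2:], mode='nearest')
    return F.cross_entropy(seg_logits, seg_label.squeeze(1).long())
```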
S5, obtaining a comprehensive loss function by combining losses of the three models from S2 to S4, wherein the form of the comprehensive loss function L is as follows:
L = L_cls + α·L_str + β·L_seg
where α and β are balance factors that can be adjusted as required.
The lane line prediction model f_cls is trained using the lane image data set from S1 and the comprehensive loss function defined in this step.
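Putting the pieces together, a single training step with this comprehensive loss might look as follows, re-using the helper functions from the sketches above. The soft-expectation decoding, the classification head cls_head and the values of α and β are all assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def soft_locations(logits):
    """Differentiable expected grid abscissa per row/lane (softmax expectation).

    logits: (B, W, H, n); returns (B, H, n). Used here so that the structure
    loss can back-propagate through the classification head.
    """
    B, W, H, n = logits.shape
    prob = F.softmax(logits, dim=1)
    cols = torch.arange(W, dtype=prob.dtype, device=prob.device).view(1, W, 1, 1)
    return (prob * cols).sum(dim=1)

def training_step(backbone, cls_head, seg_head, optimizer,
                  image, loc_label, seg_label, alpha=0.01, beta=1.0):
    """One optimization step with the comprehensive loss L = L_cls + a*L_str + b*L_seg."""
    feats = backbone(image)                          # per-layer features S_k
    logits = cls_head(feats[-1])                     # (B, W, H, n) classification logits
    l_cls = classification_loss(logits, loc_label)
    l_str = structure_loss(soft_locations(logits))
    l_seg = segmentation_loss(seg_head(feats), seg_label)
    loss = l_cls + alpha * l_str + beta * l_seg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```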
And S6, the lane line in the image can be predicted by using the trained lane line prediction model.
During prediction, only the image I to be detected needs to be input into the trained lane line prediction model; the network prediction result f_cls(I; θ_cls) is the lane line detection result R.
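For illustration, decoding the network output into lane positions could be done as sketched below; taking the per-row argmax over the W grid columns and the confidence threshold for rows without a visible lane are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_lanes(backbone, cls_head, image, img_width, conf_thresh=0.5):
    """Decodes the network prediction f_cls(I; theta_cls) into lane positions R.

    For each grid row and lane, the most probable grid column is taken as the
    lane position; rows whose maximum probability is low are treated as having
    no visible lane.
    """
    logits = cls_head(backbone(image)[-1])       # (B, W, H, n)
    B, W, H, n = logits.shape
    prob = F.softmax(logits, dim=1)
    conf, col = prob.max(dim=1)                  # both (B, H, n)
    x = (col.float() + 0.5) * (img_width / W)    # grid column -> image x-coordinate
    x[conf < conf_thresh] = float('nan')         # mark rows with no confident lane
    return x                                     # (B, H, n) lane abscissas R
```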
As shown in FIG. 2, the overall model learning framework on which the ultra-fast lane line detection method of the present invention relies actually contains two branches: the main branch is the lane line prediction model f_cls, and the auxiliary branch is the segmentation model f_fcn. After the lane image I_i is input into the overall model, multi-layer convolution operations are first performed in the lane line prediction model f_cls to extract the visual features of the target object. These features are used to predict the lane lines and are also fed into the segmentation model f_fcn, where they are upsampled, convolved and fused to obtain a fused feature; comparing the fused feature with the overall segmentation label yields the constraint L_seg. Since the role of the segmentation model f_fcn is to help the prediction model f_cls learn better during training, it does not appear in the actual prediction process. After training is finished, lane prediction is performed through the main branch only.
The above-described method is applied to specific examples so that those skilled in the art can better understand the effects of the present invention.
Examples
The implementation of this example follows the method described above; the specific steps are not repeated, and only the results on the example data are shown. The example is carried out on two data sets with ground-truth labels:
TuSimple dataset: the data set contains 6408 images, including 1 scene, with 5 lane lines or less per image.
CULane dataset: the data set contains 133235 images including 9 scenes, with 4 lane lines or less per image.
FIG. 3 shows some of the detailed image samples, in which the lane lines have been pre-labeled.
Data enhancement is performed on each data set to obtain an enhanced and expanded data set. The images in the data set are passed through the lane line prediction model f_cls, built from a generic multi-layer convolutional network, to obtain the initial features of each image; global pooling is applied to the output of the last network layer to obtain a feature that fuses global information. According to the method described above, the resulting features are converted into a result matrix M of size W × H × n (n being the number of lane lines, H the number of selected rows, and W the number of grids per row). As shown in FIG. 4, the left diagram shows the gridded region of one lane line extending from the upper right to the lower left, and the right diagram shows a lane line extending from the upper left to the lower right. For each lane line, since the region containing the whole lane line is divided by the H grid rows in the horizontal direction, the grid abscissa of the intersection area between a grid row and the lane line is the lane line position Loc contained in that row. Since the lane lines are straight, the change of the lane line position between adjacent grid rows is the same.
The globally fused feature is used for classification: for each picture, H × n classification problems are formed (n being the number of lane lines), and each classification problem has W categories. After converting the detection problem into a classification problem in this way, the constraint L_cls is obtained.
In addition, in the segmentation model f_fcn, a feature extractor collects the output of each convolution layer of the lane line prediction model f_cls and upsamples it to the original size; the resized features are then fused by a convolution operation. Comparing the fused features with the overall segmentation label gives the constraint L_seg.
Similarly, for each lane line in the result matrix M, the lane line is constrained to be approximately straight by driving its second-order difference towards 0. From the foregoing description of the lane line shape, the constraint L_str is obtained.
Therefore, at inference time the model of the invention only needs to solve H × n classification problems; the per-layer convolution fusion and the lane line shape output do not need to be computed, so the execution speed of the model can be greatly improved.
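To make this complexity argument concrete, the following back-of-the-envelope comparison uses assumed example values (H = 18 grid rows, W = 200 grid columns, n = 4 lane lines, and a 288 × 800 segmentation map with n + 1 classes); none of these numbers are specified by the patent.

```latex
\underbrace{H \times n \times W}_{\text{row-wise classification}}
  = 18 \times 4 \times 200 = 1.44 \times 10^{4}
\quad \ll \quad
\underbrace{H_{\mathrm{img}} \times W_{\mathrm{img}} \times (n+1)}_{\text{pixel-wise segmentation}}
  = 288 \times 800 \times 5 = 1.152 \times 10^{6}
```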
Through the above technical scheme, an ultra-fast lane line detection method based on structure perception is provided on the basis of deep learning. Compared with other existing methods, the new detection pipeline (denoted Res34-Ours / Res18-Ours, based on 34-layer and 18-layer networks respectively) increases the speed to more than 43 times that of the original model with almost no loss of accuracy. Better results were also obtained across the different test categories (Normal, Crowded, Night, No-line, Shadow, Arrow, Dazzle-light, Curve and Crossroad), as shown in Table 1 below:
Table 1. Comparison of the results of the example with other methods
Meanwhile, a comparison graph of the significance detection effect of the joint learning framework in the embodiment with respect to a single image is shown in fig. 5.
Therefore, the method can be used for lane line detection in complex scenes (complex illumination and occlusion), shows good accuracy and robustness under a variety of difficult conditions, and achieves an extremely high detection speed (more than 300 frames per second).
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (5)
1. An ultra-fast lane line detection method based on structure perception is characterized by comprising the following steps:
S1, acquiring a lane image data set for training a lane line detection method, and defining an algorithm target;
S2, establishing a classification-based lane line prediction model, wherein the establishing process is as follows, S21-S22:
S21, dividing the image to be detected into H × W grids, wherein H is the number of rows and W is the number of columns;
S22, recording the position of the lane line contained in each row of horizontal grids of the image to be detected as its grid abscissa Loc; taking the lane image I_i as the input of the classification-based lane line prediction model f_cls, whose output is f_cls(I_i; θ_cls), the loss of the classification-based lane line prediction model is:
L_cls = CrossEntropy(f_cls(I_i; θ_cls), Loc);
where θ_cls denotes the model parameters of the classification-based lane line prediction model f_cls, and CrossEntropy denotes the cross-entropy loss;
S3, establishing a lane line structure model, wherein the establishing process is as follows, S31-S32:
S31, in the lane line structure model, the lane line positions contained in adjacent rows of horizontal grids of the image to be detected satisfy an approximate straight-line constraint, i.e., the second-order difference of the lane line positions tends to zero:
(Loc_j − Loc_{j+1}) − (Loc_{j+1} − Loc_{j+2}) → 0
where Loc_j denotes the lane line position contained in the j-th row of horizontal grids of the image to be detected;
S32, according to the above second-order difference relation, the loss of the lane line structure model is:
L_str = Σ_j ‖(Loc_j − Loc_{j+1}) − (Loc_{j+1} − Loc_{j+2})‖_1
where ‖·‖_1 denotes the L1 norm;
S4, establishing a context global segmentation feature model, wherein the establishing process is as follows, S41-S42:
S41, generating a lane line segmentation image that is centered on the lane line position and is wider than the lane line;
S42, extracting the high-level feature s_i of the lane image I_i from the lane line prediction model f_cls; establishing a segmentation model f_fcn by a fully convolutional network as the context global segmentation feature model; taking the high-level feature s_i as the input of the segmentation model f_fcn, whose output is f_fcn(s_i; θ_fcn), the loss of the context global segmentation feature model is:
L_seg = CrossEntropy(f_fcn(s_i; θ_fcn), Seg_i)
where Seg_i denotes the lane line segmentation image generated in S41, and θ_fcn denotes the model parameters of the context global segmentation feature model f_fcn;
S5, obtaining a comprehensive loss function by combining the losses of the three models in S2-S4, and training the lane line prediction model by using the lane image data set and the comprehensive loss function;
S6, predicting the lane lines in the image by using the trained lane line prediction model.
2. The structure-perception-based ultra-fast lane line detection method according to claim 1, wherein the lane image data set for lane line detection in step S1 includes a lane image group {I_i}, i = 1, ..., K, wherein I_i is the i-th image and K is the number of images in the image group;
the algorithm target is defined as: detecting the lane line detection result R of the image to be detected.
4. The structure-perception-based ultra-fast lane line detection method according to claim 1, wherein in step S5 the comprehensive loss function L used to train the model has the form:
L = L_cls + α·L_str + β·L_seg
wherein α and β are balance factors.
5. The method according to claim 1, wherein in step S6 the image I to be detected is input to the trained lane line prediction model, and the network prediction result f_cls(I; θ_cls) is the lane line detection result R.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010065160.XA CN111310593B (en) | 2020-01-20 | 2020-01-20 | Ultra-fast lane line detection method based on structure perception |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010065160.XA CN111310593B (en) | 2020-01-20 | 2020-01-20 | Ultra-fast lane line detection method based on structure perception |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111310593A CN111310593A (en) | 2020-06-19 |
CN111310593B (en) | 2022-04-19
Family
ID=71148900
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010065160.XA Active CN111310593B (en) | 2020-01-20 | 2020-01-20 | Ultra-fast lane line detection method based on structure perception |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111310593B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112016463A (en) * | 2020-08-28 | 2020-12-01 | 佛山市南海区广工大数控装备协同创新研究院 | Deep learning-based lane line detection method |
CN112562330A (en) * | 2020-11-27 | 2021-03-26 | 深圳市综合交通运行指挥中心 | Method and device for evaluating road operation index, electronic equipment and storage medium |
CN112927310B (en) * | 2021-01-29 | 2022-11-18 | 上海工程技术大学 | Lane image segmentation method based on lightweight neural network |
CN115049994B (en) * | 2021-02-25 | 2024-06-11 | 广州汽车集团股份有限公司 | Lane line detection method and system and computer readable storage medium |
CN113191256B (en) * | 2021-04-28 | 2024-06-11 | 北京百度网讯科技有限公司 | Training method and device of lane line detection model, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318258A (en) * | 2014-09-29 | 2015-01-28 | 南京邮电大学 | Time domain fuzzy and kalman filter-based lane detection method |
CN106446927A (en) * | 2016-07-07 | 2017-02-22 | 浙江大学 | Self-paced reinforcement image classification method and system |
CN108090456A (en) * | 2017-12-27 | 2018-05-29 | 北京初速度科技有限公司 | A kind of Lane detection method and device |
CN109389102A (en) * | 2018-11-23 | 2019-02-26 | 合肥工业大学 | The system of method for detecting lane lines and its application based on deep learning |
CN110298216A (en) * | 2018-03-23 | 2019-10-01 | 中国科学院沈阳自动化研究所 | Vehicle deviation warning method based on lane line gradient image adaptive threshold fuzziness |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10970564B2 (en) * | 2017-09-30 | 2021-04-06 | Tusimple, Inc. | System and method for instance-level lane detection for autonomous vehicle control |
- 2020-01-20 CN CN202010065160.XA patent/CN111310593B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318258A (en) * | 2014-09-29 | 2015-01-28 | 南京邮电大学 | Time domain fuzzy and kalman filter-based lane detection method |
CN106446927A (en) * | 2016-07-07 | 2017-02-22 | 浙江大学 | Self-paced reinforcement image classification method and system |
CN108090456A (en) * | 2017-12-27 | 2018-05-29 | 北京初速度科技有限公司 | A kind of Lane detection method and device |
CN110298216A (en) * | 2018-03-23 | 2019-10-01 | 中国科学院沈阳自动化研究所 | Vehicle deviation warning method based on lane line gradient image adaptive threshold fuzziness |
CN109389102A (en) * | 2018-11-23 | 2019-02-26 | 合肥工业大学 | The system of method for detecting lane lines and its application based on deep learning |
Non-Patent Citations (2)
Title |
---|
"Towards End-to-End Lane Detection: an Instance Segmentation Approach";Davy Neven.et al;《IEEE》;20180630;全文 * |
"基于深度图像增强的夜间车道线检测技术";宋扬等;《计算机应用》;20191230;全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN111310593A (en) | 2020-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111310593B (en) | Ultra-fast lane line detection method based on structure perception | |
Tan et al. | YOLOv4_Drone: UAV image target detection based on an improved YOLOv4 algorithm | |
CN110276765B (en) | Image panorama segmentation method based on multitask learning deep neural network | |
CN111598030B (en) | Method and system for detecting and segmenting vehicle in aerial image | |
CN111563909B (en) | Semantic segmentation method for complex street view image | |
CN113486726A (en) | Rail transit obstacle detection method based on improved convolutional neural network | |
CN111008633B (en) | License plate character segmentation method based on attention mechanism | |
CN111104903A (en) | Depth perception traffic scene multi-target detection method and system | |
CN112488025B (en) | Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion | |
CN110781850A (en) | Semantic segmentation system and method for road recognition, and computer storage medium | |
CN111027505B (en) | Hierarchical multi-target tracking method based on significance detection | |
CN111882620A (en) | Road drivable area segmentation method based on multi-scale information | |
Zhang et al. | A semi-supervised 3D object detection method for autonomous driving | |
CN115019039A (en) | Example segmentation method and system combining self-supervision and global information enhancement | |
CN113936034B (en) | Apparent motion combined weak and small moving object detection method combined with inter-frame optical flow | |
Liu et al. | Multi-lane detection by combining line anchor and feature shift for urban traffic management | |
CN106529391B (en) | A kind of speed limit road traffic sign detection of robust and recognition methods | |
CN114943888A (en) | Sea surface small target detection method based on multi-scale information fusion, electronic equipment and computer readable medium | |
Ren et al. | MPSA: A multi-level pixel spatial attention network for thermal image segmentation based on Deeplabv3+ architecture | |
CN117830986A (en) | Automatic driving vision joint perception method, device and medium | |
Xia et al. | Unsupervised optical flow estimation with dynamic timing representation for spike camera | |
Zheng et al. | A method of traffic police detection based on attention mechanism in natural scene | |
CN116935249A (en) | Small target detection method for three-dimensional feature enhancement under unmanned airport scene | |
CN116823775A (en) | Display screen defect detection method based on deep learning | |
CN116524397A (en) | Intelligent detection and analysis method for pipeline defects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||