CN111259796A - Lane line detection method based on image geometric features - Google Patents

Lane line detection method based on image geometric features

Info

Publication number
CN111259796A
Authority
CN
China
Prior art keywords
lane line
roi
rois
image
feature map
Prior art date
Legal status
Pending
Application number
CN202010045825.0A
Other languages
Chinese (zh)
Inventor
唐雪嵩
金立松
郝矿荣
赵宁宁
李锐
Current Assignee
Donghua University
Original Assignee
Donghua University
Priority date
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN202010045825.0A
Publication of CN111259796A

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a lane line detection method based on image geometric features. LSD is used to extract line segment information from the image, which is then optimized and fused; the sizes and aspect ratios of anchor boxes are set according to the segment (coordinate) information. A feature map of lower depth is obtained through a backbone network followed by a large separable convolution, and different ROIs are obtained through the mapping relation between the anchor boxes and the feature map. A novel feature aggregation method, position-sensitive candidate-region pooling, extracts all ROIs in a batch; the result is fed into a fully-connected neural network and trained with a multi-class cross-entropy loss and a regression loss. Unlike traditional algorithms and instance segmentation algorithms, the method can detect both structured and unstructured roads, effectively detects rare types of lane lines, and, through training on a large data set, reduces the false detection rate while improving accuracy.

Description

Lane line detection method based on image geometric features
Technical Field
The invention relates to a lane line detection method based on image geometric features, and belongs to the field of deep-learning object detection.
Background
With the rapid development of the economy, vehicle ownership in China has increased year by year, the frequency of traffic accidents has risen with it, and traffic safety has attracted great public attention. The intelligent vehicle, as an intelligent mobile robot, combines technologies such as computing, on-board sensing, automatic control and artificial intelligence to realize intelligent information exchange among people, vehicles and roads, giving the vehicle environment-sensing capability and enabling intelligent driving. Lane line detection is an important link in an intelligent vehicle's driver-assistance system. Fast and effective detection of lane lines in road images not only supports path planning, lane departure warning and traffic flow analysis, but can also provide a reference for accurate navigation.
Current lane marking detection solutions mainly fall into two types: traditional image processing and deep learning. Traditional lane detection methods rely on a combination of highly specialized hand-crafted features and post-processing heuristics; they are computationally expensive and difficult to scale because of road scene variations. Detection methods based on gray-level features detect road boundaries and lane marks using features extracted from a grayscale image, which may be captured directly by the system or converted from an original color image. Such methods are simple in structure and widely used, and are especially suitable for structured roads with clear lane lines and uniform road surfaces. However, when the road surface is occluded by shadows or foreign objects, when illumination changes strongly, or when the road is unstructured, these methods generally cannot achieve an ideal detection effect. Detection methods based on texture features statistically compute texture strength and direction over regions containing many pixels to obtain results that meet the lane detection requirement, and they have strong noise resistance. However, when the image resolution changes, the statistical results may deviate considerably. In addition, because of interference from factors such as illumination, the texture obtained from a two-dimensional image is not necessarily the real texture of the three-dimensional surface, which also affects the detection result. In practical scenarios the robustness of these traditional algorithms is far from ideal: besides the influence of illumination and neighboring vehicles, indication arrows in the middle of the lane and sidewalks are challenges such algorithms struggle to handle.
In recent years, deep learning has developed rapidly in the field of computer vision and achieved significant results in tasks such as object detection, object recognition and object segmentation. The Regions with CNN Features (R-CNN) family of algorithms solved many object detection problems, and the You Only Look Once (YOLO) family pushed real-time multi-object detection to another level. Lane line detection can be viewed as a combination of object detection and object segmentation, and has been implemented as an instance segmentation problem in methods such as SCNN, VPGNet and LaneNet. Although these solve defects of the traditional methods, such as occluded lane lines, the networks are strongly affected by noise, place high demands on environmental conditions (illumination, shadows, etc.), are mostly applied to structured roads, and give less than ideal results on unstructured roads without obvious lane markings. Moreover, they do not classify lane lines, so they cannot effectively model the influence of lane line types on vehicle trajectories in real scenes.
Disclosure of Invention
The purpose of the invention is to realize lane line detection on unstructured roads, reduce the false detection rate, and improve detection accuracy.
In order to achieve the above object, the technical solution of the present invention is to provide a lane line detection method based on image geometric features, characterized by comprising the following steps:
step one, acquiring lane line image segment information through the LSD (Line Segment Detector) algorithm, and generating anchor boxes according to the segment information;
step two, constructing a backbone network comprising convolution layers, pooling layers and fully-connected layers, used for extracting features from the lane line image to obtain a feature map;
step three, obtaining ROIs on the feature map through the mapping relation between the anchor boxes and the feature map;
step four, extracting all the ROIs through the pooling layer to obtain ROIs of uniform size;
step five, feeding the ROIs obtained in step four into fully-connected layers for classification and regression;
step six, training with a classification loss function and a regression loss function.
Preferably, in step two, the feature map is thinned by a large separable convolution and padded, obtaining a fused feature map of unchanged size and fewer channels.
Preferably, in step four, ROIs of uniform size are obtained by applying position-sensitive candidate-region pooling to all ROIs in a batch; specifically, the boundary coordinates of each ROI and the boundaries of all rectangular cells within each ROI are kept in floating-point form, and within each rectangular cell the pixel values of a fixed number of sampling points at fixed positions are computed and average-pooled.
Preferably, in step one, an anchor box is generated at the center point of each detected line segment of the lane line image.
Preferably, the classification loss is a multi-class cross-entropy loss, and the overall loss is

$$L(\{p_i\},\{t_i\})=\frac{1}{N_{cls}}\sum_i L_{cls}(p_i,p_i^*)+\lambda\frac{1}{N_{reg}}\sum_i p_i^*L_{reg}(t_i,t_i^*)$$

wherein $N_{cls}$ is the total number of ROIs chosen for training, $N_{reg}$ is the number of ROI locations, $\lambda$ is a value that ensures roughly equal weighting of the classification and regression losses, $p_i$ is the predicted probability of the ROI, $p_i^*$ is the label of the ROI, and the log loss of the true class $c_i^*$ is:

$$L_{cls}(p_i,p_i^*)=-\log p_{i,c_i^*}$$

The regression loss represents the loss of the ROI, where $t_i=\{t_x,t_y,t_w,t_h\}$ is the predicted offset of the ROI center coordinates, width and height, and $t_i^*$ is the actual offset from the label. The regression loss function is:

$$L_{reg}(t_i,t_i^*)=R(t_i-t_i^*)$$

wherein $R$ is the smooth $L_1$ function:

$$\mathrm{smooth}_{L_1}(x)=\begin{cases}0.5x^2, & |x|<1\\ |x|-0.5, & \text{otherwise}\end{cases}$$

with the offsets parameterized as

$$t_x=\frac{x-x_a}{w_a},\quad t_y=\frac{y-y_a}{h_a},\quad t_w=\log\frac{w}{w_a},\quad t_h=\log\frac{h}{h_a}$$

and analogously for $t_i^*$ computed from the ground-truth box, where $(x_a,y_a,w_a,h_a)$ are the anchor center coordinates, width and height.
Compared with the prior art, the invention has the following beneficial effects:
Against the background of deep learning, a convolutional neural network is used to learn and classify lane lines, which improves both the speed and the accuracy of lane line classification; lane lines can be detected effectively and their types distinguished. The invention can detect different types of lane lines, providing a basis for different driver reactions (for example, a yellow solid line must not be crossed), effectively detects rare types of lane lines, and, through training on a large data set, reduces the false detection rate and improves accuracy.
Drawings
FIG. 1 is a schematic diagram of the overall network framework of the present method;
FIG. 2 is a schematic diagram of the working principle of the large separable convolution;
FIG. 3 is a schematic diagram of the position-sensitive candidate-region pooling operation.
Detailed Description
In order to make the invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
The invention discloses a lane line detection method based on image geometric features, comprising the following steps:
Step one, lane line image segment information is acquired through the LSD (Line Segment Detector) algorithm; specifically, the coordinate information of all line segments in the image is obtained, and anchor boxes are generated according to this segment information. The segments obtained in this way are then fused: discrete points and very short segments are removed, neighboring segments are searched, parallel lines are merged, and so on. Extracting, optimizing and fusing the image line segment information through LSD reduces the number of segments and the noise, makes anchor generation more accurate, and means anchors need not be placed at every pixel of the image, which greatly reduces the number of anchors. In this embodiment the anchor sizes are set to 5 aspect ratios {1:3, 1:2, 1:1, 2:1, 3:1} and 5 scales {32^2, 64^2, 128^2, 256^2, 512^2}, and overlap is reduced by NMS. Since lane lines are long and narrow, these settings are adjusted accordingly to ensure the target is detected completely. A minimal sketch of this step is given below.
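As a concrete illustration of step one, the following is a minimal sketch (not the patented implementation itself): OpenCV's LSD detector supplies segments, short segments are filtered out, and anchors with the 5 ratios and 5 scales above are placed at each segment midpoint. The min_len threshold and the helper names are illustrative assumptions, and cv2.createLineSegmentDetector is absent from some OpenCV 4.x builds.

```python
import cv2
import numpy as np

RATIOS = [1/3, 1/2, 1, 2, 3]       # height:width ratios 1:3 ... 3:1
SCALES = [32, 64, 128, 256, 512]   # anchor areas are SCALES[i]**2

def lsd_segments(gray):
    """Return an (N, 4) array of segments (x1, y1, x2, y2) from an 8-bit image."""
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)
    return lines.reshape(-1, 4) if lines is not None else np.empty((0, 4))

def anchors_from_segments(segs, min_len=20.0):
    """Generate anchors (x1, y1, x2, y2) centred on segment midpoints,
    discarding short segments as in the patent's segment filtering."""
    boxes = []
    for x1, y1, x2, y2 in segs:
        if np.hypot(x2 - x1, y2 - y1) < min_len:   # drop short/noisy segments
            continue
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # anchor at the midpoint
        for s in SCALES:
            for r in RATIOS:
                w = s / np.sqrt(r)                 # area = s^2, h/w = r
                h = s * np.sqrt(r)
                boxes.append([cx - w/2, cy - h/2, cx + w/2, cy + h/2])
    return np.asarray(boxes, dtype=np.float32)
```

Once each anchor carries a score, overlapping anchors would then be thinned with NMS, for example via cv2.dnn.NMSBoxes.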
Step two, a backbone network is constructed to extract features from the lane line image and obtain a feature map. The backbone network comprises convolution layers, pooling layers and fully-connected layers; a large separable convolution produces, after padding, a fused feature map of unchanged size but fewer channels.
If the feature map were fed directly into the classification-regression network, the number of channels, and hence the number of parameters, would be too large. The large separable convolution thins the feature map, reducing the channel count and the parameter count and thereby speeding up the network, while the separable convolution operation also extracts more local feature information. If the lane lines are divided into 20 classes, this operation changes the number of channels from 21 × p^2 to 5 × p^2, where the per-position group count 5 replaces the class-related count 21 (20 classes plus background); other values smaller than 21 could also be adopted.
Step three, ROIs on the feature map are obtained through the mapping relation between the anchor boxes and the feature map;
Step four, all ROIs are extracted through the pooling layer to obtain ROIs of uniform size; specifically, all ROIs are extracted in a batch through position-sensitive candidate-region pooling.
Position-sensitive candidate-region pooling introduces position information manually during feature aggregation, effectively improving a deeper neural network's sensitivity to object position and allowing all ROIs to be extracted in a batch. A Position-Sensitive ROI Align algorithm can also be adopted to extract more edge information and improve detection of small targets: the boundary coordinates of each ROI and the boundaries of all rectangular cells inside it are kept as floating-point numbers, and within each cell the pixel values of a fixed number of sampling points at fixed positions are computed and average-pooled. A usage sketch follows.
Step five, because there are 20 classes but only 5 × p^2 channels, direct classification is not possible; therefore all ROIs are connected to fully-connected layers that convert the channel count, and the ROIs obtained in step four are fed into the fully-connected layers for classification and regression. AdaDelta or Adam is preferably used as the optimizer.
Step six, training is performed with a classification loss function and a regression loss function, where the classification loss is a multi-class cross-entropy loss.
The images input to the backbone network are preprocessed. The backbone network is VGG16, though a residual network or similar could also be used. Because short lane lines deform as they pass from far to near, which would disturb training, an affine (projective) transformation is applied to ensure the target does not deform; the preprocessed image data is then used as input to the backbone network, which learns the image features. Because the semantic information of lane lines is simple, the number of convolution layers can be reduced to cut parameters and increase speed; for example, the last group of three convolution kernels and three activation functions of VGG16 can be removed, as sketched below.
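A minimal sketch of such a truncated backbone, assuming torchvision's VGG16 layout and that the final block of three convolutions (indices 24-29 of vgg.features) is what gets removed:

```python
import torch
import torchvision

vgg = torchvision.models.vgg16(weights=None)  # or pretrained weights
# Keep everything up to and including the fourth pooling layer (index 23),
# dropping the last group of three conv + ReLU pairs as the text suggests.
backbone = torch.nn.Sequential(*list(vgg.features.children())[:24])

x = torch.randn(1, 3, 224, 224)
print(backbone(x).shape)  # torch.Size([1, 512, 14, 14]), i.e. 512 channels at stride 16
```

The resulting map has 512 channels at stride 16; the 7 × 7 × 512 map mentioned below would correspond to one further pooling stage.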
A feature map of size 7 × 7 × 512 is obtained through the convolution operations. To further reduce computation, a separable convolution (the large separable convolution) yields a feature map of smaller depth with 5 × 7^2 channels, i.e. the channel count drops from 512 to 245. Combining this feature map with the anchor boxes generated from LSD, each anchor box extracts an ROI of size 5 × 7 × 7. In the classification network, this 5 × 7 × 7 ROI feature is first passed through a fully-connected layer with 512 channels, then through a fully-connected layer with 21 channels for classification and one with 21 × 4 channels for regression, as in the sketch below.
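A sketch of this head under those sizes; the class name, ReLU placement and absence of dropout are assumptions:

```python
import torch
import torch.nn as nn

class LaneHead(nn.Module):
    """Per-ROI head: flatten the 5 x 7 x 7 pooled feature, FC-512,
    then a 21-way classifier (20 lane classes + background) and a
    21 x 4 = 84-unit box regressor."""
    def __init__(self, num_classes=21, pooled=5 * 7 * 7):
        super().__init__()
        self.fc = nn.Linear(pooled, 512)
        self.cls = nn.Linear(512, num_classes)      # classification scores
        self.reg = nn.Linear(512, num_classes * 4)  # per-class box offsets

    def forward(self, rois):                        # rois: (K, 5, 7, 7)
        h = torch.relu(self.fc(rois.flatten(1)))
        return self.cls(h), self.reg(h)

head = LaneHead()
scores, deltas = head(torch.randn(8, 5, 7, 7))      # (8, 21), (8, 84)
```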
As shown in FIG. 2, the large separable convolution works as follows: the k × k convolution is factored into 1 × k and k × 1 kernels. One branch applies a 1 × k convolution with C_mid channels followed by a k × 1 convolution with C_out channels; the other applies a k × 1 convolution with C_mid channels followed by a 1 × k convolution with C_out channels. The two branch outputs are fused, and padding keeps the feature map size unchanged. The overall computational complexity can be controlled through C_mid and C_out; here k is set to 15, C_mid to 64 and C_out to 245. A sketch follows.
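A sketch of this block (following the large separable convolution of Light-Head R-CNN, which matches the description of FIG. 2); the 512 input channels come from the VGG16 feature map, and the exact branch ordering and summation fusion are assumptions:

```python
import torch
import torch.nn as nn

class LargeSeparableConv(nn.Module):
    """k x k convolution factored into 1 x k / k x 1 pairs with a narrow
    middle channel count; branch outputs are fused by summation and
    padding keeps the spatial size unchanged."""
    def __init__(self, c_in=512, c_mid=64, c_out=245, k=15):
        super().__init__()
        p = k // 2
        self.branch_a = nn.Sequential(               # k x 1 then 1 x k
            nn.Conv2d(c_in, c_mid, (k, 1), padding=(p, 0)),
            nn.Conv2d(c_mid, c_out, (1, k), padding=(0, p)))
        self.branch_b = nn.Sequential(               # 1 x k then k x 1
            nn.Conv2d(c_in, c_mid, (1, k), padding=(0, p)),
            nn.Conv2d(c_mid, c_out, (k, 1), padding=(p, 0)))

    def forward(self, x):
        return self.branch_a(x) + self.branch_b(x)   # fuse; size unchanged

thin = LargeSeparableConv()(torch.randn(1, 512, 7, 7))  # -> (1, 245, 7, 7)
```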
The algorithm shown in FIG. 3 divides each candidate region (ROI) equally into k^2 rectangular cells. The preceding feature map first passes through a layer of 1 × 1 convolution kernels to generate a feature map with k^2 × (C + 1) channels, where k^2 is the number of rectangular cells in an ROI and C + 1 is the number of classes plus background. These k^2 × (C + 1) feature maps are divided into k^2 groups of C + 1 channels each, and each group is responsible for responding to its corresponding rectangular cell. When pooling an ROI, each point (k^2 in total) is obtained by average-pooling the corresponding location area of the corresponding group in the previous layer, producing a set of C + 1 feature maps. Finally, global average pooling over these feature maps yields a (C + 1)-dimensional vector from which the classification loss is computed. Performing the convolutions over the whole image reduces the computation of the second stage: proposals can be classified with just a global average, and because different convolution kernels extract features at different positions (possibly different parts of a lane line), position information is reflected.
The lane lines are divided into 20 categories, including: road-edge parallel solid line, white single parallel solid line, yellow single parallel solid line, white single parallel broken line, yellow single parallel broken line, white double parallel solid line, yellow double parallel broken line, white single vertical solid line, white single vertical broken line, yellow single vertical broken line, white double vertical solid line, yellow double vertical solid line, white double vertical broken line, yellow double vertical broken line, road-edge parallel broken line, road-edge vertical solid line, road-edge vertical broken line, etc. Parallel lines are defined relative to the vehicle's direction of travel and vertical lines perpendicular to it; under general traffic rules, vertical lines indicate that the vehicle must not cross or must slow down. The common lane line categories are the first 9; the remaining 11 are rarer.
The classification and regression loss function is detailed as follows:

$$L(\{p_i\},\{t_i\})=\frac{1}{N_{cls}}\sum_i L_{cls}(p_i,p_i^*)+\lambda\frac{1}{N_{reg}}\sum_i p_i^*L_{reg}(t_i,t_i^*)$$

wherein the classification loss is a multi-class cross-entropy loss, $N_{cls}$ is the total number of boxes (ROIs) chosen for training (typically the batch size), $p_i$ is the predicted probability of the box (ROI), $p_i^*$ is the label of the box (ROI), and the log loss of the true class $c_i^*$ is:

$$L_{cls}(p_i,p_i^*)=-\log p_{i,c_i^*}$$

The regression loss represents the loss of the box, where $t_i=\{t_x,t_y,t_w,t_h\}$ is the predicted offset of the box center coordinates, width and height, and $t_i^*$ is the actual offset from the label. The regression loss function is:

$$L_{reg}(t_i,t_i^*)=R(t_i-t_i^*)$$

wherein $R$ is the smooth $L_1$ function, which avoids the defects of the $L_1$ and $L_2$ losses: when $x$ is small, the gradient with respect to $x$ is also small; and when $x$ is large, the absolute value of the gradient reaches the upper bound 1, so a sufficiently large prediction error does not destabilize training:

$$\mathrm{smooth}_{L_1}(x)=\begin{cases}0.5x^2, & |x|<1\\ |x|-0.5, & \text{otherwise}\end{cases}$$

The offsets are parameterized as

$$t_x=\frac{x-x_a}{w_a},\quad t_y=\frac{y-y_a}{h_a},\quad t_w=\log\frac{w}{w_a},\quad t_h=\log\frac{h}{h_a}$$

and analogously for $t_i^*$ computed from the ground-truth box, where $(x_a,y_a,w_a,h_a)$ are the anchor center coordinates, width and height. After computing $L_{reg}(t_i,t_i^*)$ for each box, the term is multiplied by $p_i^*$, which is 1 when an object is present and 0 otherwise; that is, only the foreground loss is computed and no regression loss is computed for the background. $N_{reg}$ is the number of box positions (generally the number of image pixels), and $\lambda$ is generally set to a value that keeps the classification and regression losses almost equally weighted.
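Under the reconstruction above, the combined loss can be sketched as follows; the default normalizer for the regression term and the pre-selection of the true class's four offsets are simplifying assumptions:

```python
import torch
import torch.nn.functional as F

def lane_loss(scores, deltas, labels, target_deltas, lam=1.0, n_reg=None):
    """scores: (K, 21); deltas/target_deltas: (K, 4) offsets already selected
    for the true class; labels: (K,) with 0 = background."""
    l_cls = F.cross_entropy(scores, labels)  # multi-class log loss, mean over N_cls ROIs
    fg = labels > 0                          # p_i* = 1 for objects only
    # smooth L1 with beta = 1 matches the piecewise definition above
    l_reg = F.smooth_l1_loss(deltas[fg], target_deltas[fg], reduction='sum')
    n_reg = n_reg or max(int(fg.sum()), 1)   # assumed normalizer when none is given
    return l_cls + lam * l_reg / n_reg

scores = torch.randn(8, 21)
deltas, targets = torch.randn(8, 4), torch.randn(8, 4)
labels = torch.randint(0, 21, (8,))
print(lane_loss(scores, deltas, labels, targets))
```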
The invention is not limited to the lane line detection scenario; it can be applied to common targets, in particular targets with relatively regular and intuitive line-segment information, all of which fall within the protection scope of this patent.

Claims (5)

1. A lane line detection method based on image geometric features, characterized by comprising the following steps:
step one, acquiring lane line image segment information through the LSD (Line Segment Detector) algorithm, and generating anchor boxes according to the segment information;
step two, constructing a backbone network comprising convolution layers, pooling layers and fully-connected layers, used for extracting features from the lane line image to obtain a feature map;
step three, obtaining ROIs on the feature map through the mapping relation between the anchor boxes and the feature map;
step four, extracting all the ROIs through the pooling layer to obtain ROIs of uniform size;
step five, feeding the ROIs obtained in step four into fully-connected layers for classification and regression;
step six, training with a classification loss function and a regression loss function.
2. The lane line detection method based on image geometric features as claimed in claim 1, wherein: in step two, the fused feature map is padded after a large separable convolution, obtaining a feature map with fewer channels and unchanged size.
3. The lane line detection method based on image geometric features as claimed in claim 1, wherein: in step four, all ROIs are extracted in a batch through position-sensitive candidate-region pooling to obtain ROIs of uniform size; specifically, the boundary coordinates of each ROI and the boundaries of all rectangular cells within each ROI are kept in floating-point form, and within each rectangular cell the pixel values of a fixed number of sampling points at fixed positions are computed and average-pooled.
4. The lane line detection method based on image geometric features as claimed in claim 1, wherein: in step one, an anchor box is generated at the center point of each line segment of the lane line image.
5. The lane line detection method based on image geometric features as claimed in claim 1, wherein: the classification loss is a multi-class cross-entropy loss,

$$L(\{p_i\},\{t_i\})=\frac{1}{N_{cls}}\sum_i L_{cls}(p_i,p_i^*)+\lambda\frac{1}{N_{reg}}\sum_i p_i^*L_{reg}(t_i,t_i^*)$$

wherein $N_{cls}$ is the total number of ROIs chosen for training, $N_{reg}$ is the number of ROI locations, $\lambda$ is a value that ensures roughly equal weighting of the classification and regression losses, $p_i$ is the predicted probability of the ROI, $p_i^*$ is the label of the ROI, and the log loss of the true class $c_i^*$ is:

$$L_{cls}(p_i,p_i^*)=-\log p_{i,c_i^*}$$

the regression loss represents the loss of the ROI, where $t_i=\{t_x,t_y,t_w,t_h\}$ is the predicted offset of the ROI center coordinates, width and height, $t_i^*$ is the actual offset from the label, and the regression loss function is:

$$L_{reg}(t_i,t_i^*)=R(t_i-t_i^*)$$

wherein $R$ is the smooth $L_1$ function:

$$\mathrm{smooth}_{L_1}(x)=\begin{cases}0.5x^2, & |x|<1\\ |x|-0.5, & \text{otherwise}\end{cases}$$

with

$$t_x=\frac{x-x_a}{w_a},\quad t_y=\frac{y-y_a}{h_a},\quad t_w=\log\frac{w}{w_a},\quad t_h=\log\frac{h}{h_a}$$

and analogously for $t_i^*$ computed from the ground-truth box, where $(x_a,y_a,w_a,h_a)$ are the anchor center coordinates, width and height.
CN202010045825.0A 2020-01-16 2020-01-16 Lane line detection method based on image geometric features Pending CN111259796A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010045825.0A CN111259796A (en) 2020-01-16 2020-01-16 Lane line detection method based on image geometric features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010045825.0A CN111259796A (en) 2020-01-16 2020-01-16 Lane line detection method based on image geometric features

Publications (1)

Publication Number Publication Date
CN111259796A (en) 2020-06-09

Family

ID=70950578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010045825.0A Pending CN111259796A (en) 2020-01-16 2020-01-16 Lane line detection method based on image geometric features

Country Status (1)

Country Link
CN (1) CN111259796A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132037A (en) * 2020-09-23 2020-12-25 平安国际智慧城市科技股份有限公司 Sidewalk detection method, device, equipment and medium based on artificial intelligence
CN112396044A (en) * 2021-01-21 2021-02-23 国汽智控(北京)科技有限公司 Method for training lane line attribute information detection model and detecting lane line attribute information
CN112949493A (en) * 2021-03-03 2021-06-11 深圳瑞为智能科技有限公司 Lane line detection method and system combining semantic segmentation and attention mechanism
CN113822218A (en) * 2021-09-30 2021-12-21 厦门汇利伟业科技有限公司 Lane line detection method and computer-readable storage medium
CN114463720A (en) * 2022-01-25 2022-05-10 杭州飞步科技有限公司 Lane line detection method based on line segment intersection-to-parallel ratio loss function
CN115147811A (en) * 2022-07-01 2022-10-04 小米汽车科技有限公司 Lane line detection method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009524A (en) * 2017-12-25 2018-05-08 西北工业大学 A kind of method for detecting lane lines based on full convolutional network
CN109948459A (en) * 2019-02-25 2019-06-28 广东工业大学 A kind of football movement appraisal procedure and system based on deep learning
CN110046572A (en) * 2019-04-15 2019-07-23 重庆邮电大学 A kind of identification of landmark object and detection method based on deep learning
WO2019175686A1 (en) * 2018-03-12 2019-09-19 Ratti Jayant On-demand artificial intelligence and roadway stewardship system
CN110287779A (en) * 2019-05-17 2019-09-27 百度在线网络技术(北京)有限公司 Detection method, device and the equipment of lane line

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009524A (en) * 2017-12-25 2018-05-08 西北工业大学 A kind of method for detecting lane lines based on full convolutional network
WO2019175686A1 (en) * 2018-03-12 2019-09-19 Ratti Jayant On-demand artificial intelligence and roadway stewardship system
CN109948459A (en) * 2019-02-25 2019-06-28 广东工业大学 A kind of football movement appraisal procedure and system based on deep learning
CN110046572A (en) * 2019-04-15 2019-07-23 重庆邮电大学 A kind of identification of landmark object and detection method based on deep learning
CN110287779A (en) * 2019-05-17 2019-09-27 百度在线网络技术(北京)有限公司 Detection method, device and the equipment of lane line

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JING Hui, "Research on a Global Lane Line Detection Algorithm Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
WANG Jiawen, "Intelligent Vehicle Lane Line Recognition Based on the Hough Transform and Neural Networks", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132037A (en) * 2020-09-23 2020-12-25 平安国际智慧城市科技股份有限公司 Sidewalk detection method, device, equipment and medium based on artificial intelligence
CN112132037B (en) * 2020-09-23 2024-04-16 平安国际智慧城市科技股份有限公司 Pavement detection method, device, equipment and medium based on artificial intelligence
CN112396044A (en) * 2021-01-21 2021-02-23 国汽智控(北京)科技有限公司 Method for training lane line attribute information detection model and detecting lane line attribute information
CN112396044B (en) * 2021-01-21 2021-04-27 国汽智控(北京)科技有限公司 Method for training lane line attribute information detection model and detecting lane line attribute information
CN112949493A (en) * 2021-03-03 2021-06-11 深圳瑞为智能科技有限公司 Lane line detection method and system combining semantic segmentation and attention mechanism
CN112949493B (en) * 2021-03-03 2024-04-09 深圳瑞为智能科技有限公司 Lane line detection method and system combining semantic segmentation and attention mechanism
CN113822218A (en) * 2021-09-30 2021-12-21 厦门汇利伟业科技有限公司 Lane line detection method and computer-readable storage medium
CN114463720A (en) * 2022-01-25 2022-05-10 杭州飞步科技有限公司 Lane line detection method based on line segment intersection-to-parallel ratio loss function
CN115147811A (en) * 2022-07-01 2022-10-04 小米汽车科技有限公司 Lane line detection method and device and electronic equipment
CN115147811B (en) * 2022-07-01 2023-05-30 小米汽车科技有限公司 Lane line detection method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN110175576B (en) Driving vehicle visual detection method combining laser point cloud data
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
CN111259796A (en) Lane line detection method based on image geometric features
CN109147331B (en) Road congestion state detection method based on computer vision
US10037604B2 (en) Multi-cue object detection and analysis
Kühnl et al. Monocular road segmentation using slow feature analysis
CN104318258A (en) Time domain fuzzy and kalman filter-based lane detection method
CN111340855A (en) Road moving target detection method based on track prediction
CN111666805A (en) Category tagging system for autonomous driving
CN110532961A (en) A kind of semantic traffic lights detection method based on multiple dimensioned attention mechanism network model
CN113888754B (en) Vehicle multi-attribute identification method based on radar vision fusion
CN113837094A (en) Road condition rapid analysis method based on full-color high-resolution remote sensing image
Liu et al. Real-time on-road vehicle detection combining specific shadow segmentation and SVM classification
Joy et al. Real time road lane detection using computer vision techniques in python
Cheng et al. Semantic segmentation of road profiles for efficient sensing in autonomous driving
Ren et al. Automatic measurement of traffic state parameters based on computer vision for intelligent transportation surveillance
CN113724293A (en) Vision-based intelligent internet public transport scene target tracking method and system
CN113516853A (en) Multi-lane traffic flow detection method for complex monitoring scene
CN113129336A (en) End-to-end multi-vehicle tracking method, system and computer readable medium
CN110853000A (en) Detection method of track
CN113313008B (en) Target and identification tracking method based on YOLOv3 network and mean shift
CN114882205A (en) Target detection method based on attention mechanism
CN115294545A (en) Complex road surface lane identification method and chip based on deep learning
Chen et al. Vehicle detection based on yolov3 in adverse weather conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200609)