CN115376082B - Lane line detection method integrating traditional feature extraction and deep neural network - Google Patents

Lane line detection method integrating traditional feature extraction and deep neural network

Info

Publication number
CN115376082B
Authority
CN
China
Prior art keywords
lane line
road
feature map
lane
neural network
Prior art date
Legal status
Active
Application number
CN202210919555.0A
Other languages
Chinese (zh)
Other versions
CN115376082A (en)
Inventor
魏超
张美迪
李路兴
随淑鑫
钱歆昊
胡乐云
徐扬
Current Assignee
Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Beijing Institute of Technology BIT
Original Assignee
Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing and Beijing Institute of Technology BIT
Priority to CN202210919555.0A
Publication of CN115376082A
Application granted
Publication of CN115376082B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems


Abstract

The invention relates to a lane line detection method integrating traditional feature extraction and a deep neural network, comprising the following steps: extracting prior features of the lane lines from an input road picture to obtain a lane line prior feature map; concatenating the lane line prior feature map with the road picture to obtain a road feature map; and inputting the road feature map into a deep neural network model, which performs feature extraction and key point prediction on the road feature map to obtain the position coordinates of each key point on each lane line. By approaching the problem from the fusion of traditional features and a deep neural network, the method fully considers the prior traditional features of the lane lines: before the image is input into the deep neural network, a traditional feature extraction step obtains the prior features of the lane lines, so that the advantages of feature-based traditional lane line detection and of deep learning complement each other, improving the robustness and accuracy of the lane line detection algorithm while meeting real-time requirements.

Description

Lane line detection method integrating traditional feature extraction and deep neural network
Technical Field
The invention relates to the technical field of automatic driving, in particular to a lane line detection method integrating traditional feature extraction and a deep neural network.
Background
In today's society, the automobile has become one of the most convenient and important means of daily transportation. With its popularization, however, traffic safety problems pose a serious threat to people's lives and property. To reduce the occurrence of traffic accidents and give drivers a better driving experience, unmanned driving technology has been vigorously promoted and is becoming a research hotspot in the vehicle industry. As one of the important links of unmanned driving, lane line detection provides essential information for vehicle functions such as road environment perception, lane departure warning, collision warning, and path planning.
Lane line detection methods developed to date fall into two main categories: traditional detection methods based on lane line features or models, and deep-learning-based methods that have emerged at home and abroad with the development of deep learning.
Traditional lane line detection methods rely on manually extracted features combined with heuristics to identify lane segments, which typically differ from the rest of the road image in distinct attributes such as color, shape, edge gradient, and intensity. These methods extract lane line features to detect the points belonging to the lane lines, and then fit a well-defined model to those points to recover the complete lane lines. They require favorable illumination, little occlusion, and intact lane markings, and their accuracy in complex road scenes needs improvement; on the other hand, the algorithms are simple, offer good real-time performance, remain robust across a variety of scenes, and are highly interpretable.
With the continuous development of deep learning, more and more researchers apply neural networks to lane line detection. The unique hierarchical connection structure of convolutional neural networks gives them strong feature extraction capability: the lower convolutional layers mostly learn the edge information of the lane lines, while deeper layers progressively extract and learn higher-level information such as the color, texture, and contour of the lane lines. Deep models can therefore handle lane line detection tasks in more complex environments and achieve higher accuracy in complex road scenes, but the models are more complex, depend heavily on the data set, and have poor adaptability and robustness across different scenes.
In summary, when used alone, both the traditional methods and the deep-learning-based methods have corresponding defects in lane line detection, so a lane line detection method combining traditional feature extraction with a deep neural network is needed to exploit the advantages of both approaches and let them compensate for each other's weaknesses.
Disclosure of Invention
The invention provides a lane line detection method integrating traditional feature extraction and a deep neural network, aiming to reduce the computational complexity and the number of model parameters of the network, guarantee real-time performance, take the prior traditional features of the lane lines into account, and improve the accuracy and robustness of lane line detection.
In order to achieve the above object, the present invention provides the following solutions:
a lane line detection method integrating traditional feature extraction and a deep neural network comprises the following steps:
s1, extracting prior features of lane lines in an input road picture based on the road picture to obtain a lane line prior feature map;
s2, splicing the lane line priori feature map and the road picture to obtain a road feature map;
s3, inputting the road feature map into a deep neural network model, and carrying out feature extraction and key point prediction on the road feature map to obtain position coordinates of key points of each lane line in a preset grid unit, wherein the key points are points belonging to the lane lines in the grid unit.
Preferably, in S1, obtaining the input road picture comprises:
the vehicle-mounted camera is fixed at the center of the vehicle roof with a bracket, the installation angle of the camera is adjusted so that it is aimed at the area to be detected, and a road image of the area to be detected in front of the vehicle is collected with the camera.
Preferably, obtaining the lane line prior feature map comprises:
S1.1, graying the road picture by a weighted average method with configured proportions of the RGB components, preserving the brightness information of the lane lines in the road picture, to obtain a single-channel grayscale map of the road;
S1.2, sequentially applying median filtering, linear gray stretching, and OTSU automatic threshold segmentation to the single-channel grayscale map, and then selecting a region of interest according to the installation angle of the vehicle-mounted camera and the characteristics of the detected road environment;
S1.3, applying Canny edge detection to the region-of-interest image and taking a weighted average of the gray values of each pixel in the images after and before edge detection; the resulting single-channel image is the lane line prior feature map.
Preferably, in S2, obtaining the road feature map comprises:
concatenating the lane line prior feature map and the road picture along the channel dimension to obtain the channel-merged road feature map.
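As an illustration, this channel concatenation might look like the following sketch, assuming the road picture and the prior feature map have already been loaded as tensors of the same spatial size (the function name is hypothetical):

```python
import torch

def build_road_feature_map(road_picture: torch.Tensor,
                           prior_map: torch.Tensor) -> torch.Tensor:
    """Concatenate a 3 x H x W road picture with a 1 x H x W lane line
    prior feature map along the channel dimension (S2)."""
    return torch.cat([road_picture, prior_map], dim=0)  # 4-channel road feature map
```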
Preferably, the deep neural network model comprises a feature extraction network module and a key point prediction network module. The feature extraction network module learns lane line features at different scales from the road feature map; the key point prediction network module receives the lane line features from the feature extraction network module and outputs the position coordinates of each key point on each lane line.
Preferably, in S3, performing the feature extraction on the road feature map comprises: transforming the road feature map by resizing the image, converting it to a tensor, and normalizing it to obtain the transformed road feature map; and inputting the transformed road feature map into the feature extraction network module to obtain lane line features at different scales.
Preferably, the feature extraction network module consists of a ResNet50 network with the fully connected layer removed, in which a convolutional layer replaces the downsampling module of the ResNet50 network and the bottleneck architecture of the ResNet50 residual blocks is replaced by an inverted bottleneck architecture, reducing the information loss caused by dimension compression in the residual blocks.
Preferably, performing the key point prediction on the road feature map comprises:
inputting the lane line features learned by the feature extraction network module into the key point prediction network module, where two fully connected layers predict the coordinates of the key points of each lane line within the grid cells; the predicted coordinates obtained by the neural network are then multiplied by the preset grid cell width to compute the coordinates of the lane line points in the input road picture, and a prediction map of the lane line positions is output.
Preferably, the grid cells are grid cells preset in the key point prediction network module.
Preferably, predicting the position of each lane line in the grid cells comprises:
the index k of the grid cell at which the prediction probability P_{i,j} of each lane line over all grid cells is maximal gives the position of the lane line; the prediction probabilities Prob_{i,j,:} at the different grid cells are obtained with the softmax function, and the expectation over k is then computed as the predicted position of the lane line in the grid cells:

Prob_{i,j,:} = softmax(P_{i,j,1:w})

Loc_{i,j} = Σ_{k=1}^{w} k · Prob_{i,j,k}

wherein P_{i,j,1:w} is the prediction probability distribution of the i-th lane line in the j-th row, Prob_{i,j,k} is the predicted probability that the i-th lane line is present in the k-th column of the j-th row, Loc_{i,j} is the position of the i-th lane line in the j-th row, and w is the preset number of grid cell columns.
The beneficial effects of the invention are as follows:
the method fully considers the prior traditional characteristics of the lane lines from the angle of fusion of the traditional characteristics and the deep neural network, and obtains the prior characteristics of the lane lines by utilizing the traditional characteristic extraction method before the image is input into the deep neural network, so that the advantages of the traditional lane line detection method based on the characteristics are complementary with those of the method for deep learning, and the robustness and the accuracy of the lane line detection algorithm are improved on the premise of meeting the real-time requirement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a lane line detection method integrating traditional feature extraction and deep neural network in an embodiment of the invention;
FIG. 2 is a flow chart of a conventional feature extraction module according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a deep neural network model in an embodiment of the present invention;
fig. 4 is a schematic diagram of a residual block structure of a deep neural network according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of protection of the present invention.
In order that the above-recited objects, features, and advantages of the present invention become more readily apparent, the invention is described in more detail below with reference to the accompanying drawings and the following detailed description.
As shown in fig. 1, this embodiment provides a lane line detection method integrating traditional feature extraction and a deep neural network, which comprises the following steps:
step 1, fixing a vehicle-mounted camera at the central position of a vehicle roof through a bracket, adjusting the angle of the camera, aligning the camera to a region to be detected, ensuring the wide field of view of the camera, acquiring road images in front of the vehicle by using the vehicle-mounted camera, and obtaining an input road picture with the width of 1920 pixels, the height of 1080 pixels and the number of channels of 3 channels;
step 2, as shown in fig. 2, the input road picture is transmitted to a traditional feature extraction module, and the prior feature extraction of the lane lines is performed by referring to the preprocessing operation of the traditional lane line detection method, so as to obtain a single-channel lane line prior feature diagram:
step 2.1, carrying out weighted average gray scale treatment on the road picture, and uniformly configuring the proportion of RGB components in the result, and reserving the brightness information of lane lines in the road picture to obtain a single-channel gray scale picture of the road;
step 2.2, sequentially performing median filtering, linear gray stretching and OTSU automatic threshold segmentation on a single-channel gray map of a road to increase the brightness of a lane line part, taking a trapezoid area surrounded by (0, 1080), (700, 650), (1220, 650), (1920, 1080) as an interested area according to the installation angle of a camera and the characteristics of a detected road environment, setting the gray value outside the area as 0, and eliminating invalid parts in the image;
step 2.3, carrying out Canny edge detection operation on the image, carrying out weighted average on gray values of each pixel point of the image after edge detection and the image before edge detection, wherein an output single-channel image is the lane line priori feature map, in the lane line priori feature map, important features of lane lines are reserved, a background area with the lowest attention degree is black, a road area with a certain attention degree is gray, a lane line area with the highest attention degree is white, and the similarity is obviously improved under different road scenes, so that obvious accuracy reduction does not occur when the lane line detection is carried out by using a deep neural network trained by a public data set subsequently;
Step 3, fusing the single-channel lane line prior feature map, which carries the salient features of the lane lines, with the three-channel input road picture by concatenating the two along the channel dimension, obtaining a road feature map with 4 channels. The road feature map contains both the road image shot directly by the vehicle-mounted camera and the prior feature map holding the extracted lane line priors, so the feature information of the lane lines is fully exposed, which facilitates detection by the deep neural network;
Step 4, inputting the road feature map into a pre-trained deep neural network model, which comprises a feature extraction network module and a key point prediction network module; the specific structure is shown in fig. 3.
Step 4.1, before input to the deep neural network model, transforming the road feature map by resizing the image to 288 × 800, converting the feature map to a tensor, and normalizing it, which makes the road feature map convenient for network learning;
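A sketch of this transformation in PyTorch, assuming a 4 x H x W uint8 road feature map; the normalization statistics are an assumption (ImageNet values plus 0.5 for the prior channel), as the embodiment states only that normalization is applied:

```python
import torch
import torchvision.transforms.functional as TF

def transform_road_feature_map(rfm: torch.Tensor) -> torch.Tensor:
    """Step 4.1: resize to 288 x 800, convert to a float tensor, normalize."""
    x = rfm.float() / 255.0                    # tensor conversion and scaling
    x = TF.resize(x, [288, 800])               # 4 x 288 x 800
    x = TF.normalize(x, mean=[0.485, 0.456, 0.406, 0.5],
                     std=[0.229, 0.224, 0.225, 0.5])
    return x.unsqueeze(0)                      # add a batch dimension
```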
Step 4.2, inputting the road feature map after image transformation into the feature extraction network module to further learn features of the lane lines at different scales; the feature extraction network module is an optimized ResNet50 network with the fully connected layer removed. First, a large-kernel convolutional layer with kernel size 4, stride 4, and 64 filters replaces the downsampling module of the ResNet50 network, which consists of a convolutional layer of size 7, 64 filters, and stride 2 followed by a max-pooling layer with pooling kernel size 3; the resulting feature map has size 72 × 200 × 64, so the size and depth of the output processed by the large-kernel convolutional layer are exactly the same as those of the output of the ResNet50 downsampling module. Because the kernel size of the large-kernel convolution equals its stride, the receptive fields of successive convolution operations do not overlap, which reduces information redundancy and improves network efficiency while leaving the output size and depth unchanged. After downsampling, the bottleneck architecture of the ResNet50 residual blocks, with a small middle channel count and large channel counts at the two ends, is replaced by an inverted bottleneck architecture with a large middle channel count and small channel counts at the two ends, reducing the information loss caused by dimension compression inside the residual blocks. The network is deepened with 4 residual layers composed of 3, 4, 6, and 3 residual blocks respectively, similar to ResNet50; passing through these residual layers yields feature maps at 1/2, 1/4, and 1/8 scale containing higher-level lane line information, and the finally output feature map has size 9 × 25 × 1024. At the same time, the use of activation and normalization layers inside the residual blocks is reduced, avoiding the adverse effect of frequent nonlinear mappings on network learning; two inverted-bottleneck residual blocks of the improved network are shown in fig. 4;
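The following PyTorch sketch shows the large-kernel stem and one inverted-bottleneck residual block in the spirit of fig. 4. The 4x expansion ratio and the single-normalization, single-activation layout are assumptions; the patent specifies only a wide middle, narrow ends, and fewer activation and normalization layers:

```python
import torch
import torch.nn as nn

# Large-kernel stem: kernel size 4 equals stride 4, 64 filters, so the
# receptive fields of neighboring positions do not overlap.
# 4 x 288 x 800 input -> 64 x 72 x 200 output, as in Step 4.2.
stem = nn.Conv2d(4, 64, kernel_size=4, stride=4)

class InvertedBottleneck(nn.Module):
    """One inverted-bottleneck residual block: narrow ends, wide middle."""
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        mid = channels * expansion                       # wide middle channels
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),                         # single normalization layer
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.ReLU(inplace=True),                       # single activation layer
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)                         # residual connection
```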
Step 4.3, inputting the 9 × 25 × 1024 feature map learned by the feature extraction network module into the key point prediction network module. Two fully connected layers are combined into a classifier f, which uses the image feature X to predict the probability P of lane lines in each grid cell, a tensor of size (W + 1) × H × S formed by the S lane lines over an area of the original image with a preset number of rows H and columns W; in this embodiment the number of rows H is 18, the number of columns W is 200, and the number of predicted lane lines S is 4:

P_{i,j,:} = f^{ij}(X), s.t. i ∈ [1, S], j ∈ [1, H]

The index k of the grid cell at which the prediction probability P_{i,j} of a lane line over all grid cells in a row is maximal gives the position of that lane line. The softmax function is first used to obtain the prediction probabilities Prob_{i,j,:} at the different grid cells, and the expectation over k is then computed as the predicted position of the lane line in the grid cells:

Prob_{i,j,:} = softmax(P_{i,j,1:w})

Loc_{i,j} = Σ_{k=1}^{w} k · Prob_{i,j,k}
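As a concrete illustration, a PyTorch sketch of the row-wise classifier and of the two formulas above follows. The hidden width of 2048 and the handling of the extra (W + 1)-th "no lane" column are assumptions, since the patent fixes only the two fully connected layers and the grid size:

```python
import torch
import torch.nn as nn

S, H, W = 4, 18, 200        # lane lines, grid rows, grid columns

class KeyPointHead(nn.Module):
    """Key point prediction module: two fully connected layers (Step 4.3)."""
    def __init__(self, feat_dim: int = 9 * 25 * 1024):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 2048), nn.ReLU(inplace=True),
            nn.Linear(2048, (W + 1) * H * S),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.fc(x.flatten(1))          # classifier f applied to feature X
        return p.view(-1, W + 1, H, S)     # P: (W + 1) x H x S per image

def lane_locations(P: torch.Tensor) -> torch.Tensor:
    """Loc_{i,j} = sum_k k * Prob_{i,j,k} with Prob = softmax(P_{i,j,1:w})."""
    prob = torch.softmax(P[:, :W], dim=1)              # softmax over the W cells
    k = torch.arange(1, W + 1, dtype=prob.dtype, device=prob.device)
    return (prob * k.view(1, W, 1, 1)).sum(dim=1)      # grid positions, H x S
```

In Step 4.4 these grid-cell positions would then be multiplied by the preset grid cell width, e.g. x = Loc · (1920 / W) for the 1920-pixel-wide input picture, to recover pixel coordinates.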
Step 4.4, computing the predicted coordinates of the lane line points in the input road picture from the preset grid cell width and the predicted positions obtained by the deep neural network, and outputting a prediction map of the lane line positions.
The invention discloses a lane line detection method combining a traditional feature detection method with a deep neural network detection method. A road image acquired by a vision sensor is passed through a traditional feature extraction module to extract the prior features of the lane lines; the resulting lane line prior feature map is channel-concatenated with the original road image to obtain a road feature map; the road feature map is input into a pre-established deep neural network model, in which a feature extraction network module and a key point prediction network module yield the position coordinates of each key point of each lane line within the grid cells; finally, the coordinates of the lane line points in the road image are computed using the preset grid cell size, and a prediction map of the lane line positions is output.
The above embodiments merely describe preferred embodiments of the present invention, and the scope of the present invention is not limited thereto. Various modifications and improvements made by those skilled in the art without departing from the spirit of the present invention all fall within the scope of protection defined by the appended claims.

Claims (7)

1. A lane line detection method integrating traditional feature extraction and a deep neural network is characterized by comprising the following steps:
s1, extracting prior features of lane lines in an input road picture based on the road picture to obtain a lane line prior feature map;
s2, splicing the lane line priori feature map and the road picture to obtain a road feature map;
s3, inputting the road feature map into a deep neural network model, and carrying out feature extraction and key point prediction on the road feature map to obtain position coordinates of key points of each lane line in a preset grid unit, wherein the key points are points belonging to the lane lines in the grid unit;
wherein the deep neural network model comprises a feature extraction network module and a key point prediction network module; the feature extraction network module learns lane line features at different scales from the road feature map; the key point prediction network module receives the lane line features from the feature extraction network module and outputs the position coordinates of each key point on each lane line;
the method for predicting the key points of the road feature map comprises the following steps:
inputting the lane line features learned by the feature extraction network module into the key point prediction network module, and predicting coordinates of key points of each lane line in a grid unit by using two full-connection layers in the key point prediction network module; multiplying the predicted coordinates obtained by the grid cell width neural network and the grid cell depth neural network, calculating the coordinates of the lane line points in the input road picture, and outputting a predicted map of the lane line positions;
predicting the position of each lane line in the grid cells comprises:
the index k of the grid cell at which the prediction probability P_{i,j} of each lane line over all grid cells is maximal gives the position of the lane line; the prediction probabilities Prob_{i,j,:} at the different grid cells are obtained with the softmax function, and the expectation over k is then computed as the predicted position of the lane line in the grid cells:

Prob_{i,j,:} = softmax(P_{i,j,1:w})

Loc_{i,j} = Σ_{k=1}^{w} k · Prob_{i,j,k}

wherein P_{i,j,1:w} is the prediction probability distribution of the i-th lane line over the w cells of the j-th row; Prob_{i,j,k} is its k-th value, representing the predicted probability that the i-th lane line is present in the k-th column of the j-th row; Loc_{i,j} is the position of the i-th lane line in the j-th row; and w is the preset number of grid cell columns.
2. The lane line detection method according to claim 1, wherein in S1, obtaining the input road picture comprises:
the vehicle-mounted camera is fixed at the center of the vehicle roof with a bracket, the installation angle of the camera is adjusted so that it is aimed at the area to be detected, and a road image of the area to be detected in front of the vehicle is collected with the camera.
3. The lane line detection method integrating traditional feature extraction and deep neural network according to claim 2, wherein obtaining the lane line prior feature map comprises:
s1.1, carrying out gray scale treatment on the road picture based on a weighted average method, configuring the proportion of RGB components, and reserving the brightness information of lane lines in the road picture to obtain a single-channel gray scale picture of a road;
s1.2, sequentially performing median filtering, linear gray stretching and OTSU automatic threshold segmentation on the single-channel gray map, and then selecting an interested region according to the installation angle of the vehicle-mounted camera and the characteristics of the detected road environment;
s1.3, carrying out Canny edge detection operation on the region-of-interest image, carrying out weighted average on gray values of all pixel points in the image after edge detection and the image before edge detection, and obtaining an output single-channel image as the lane line prior feature map.
4. The lane line detection method according to claim 1, wherein in S2, obtaining the road feature map comprises:
concatenating the lane line prior feature map and the road picture along the channel dimension to obtain the channel-merged road feature map.
5. The lane line detection method according to claim 1, wherein in S3, performing the feature extraction on the road feature map comprises: transforming the road feature map by resizing the image, converting it to a tensor, and normalizing it to obtain the transformed road feature map; and inputting the transformed road feature map into the feature extraction network module to obtain lane line features at different scales.
6. The lane line detection method according to claim 5, wherein the feature extraction network module consists of a ResNet50 network with the fully connected layer removed, a convolutional layer is used to replace the downsampling module in the ResNet50 network, and the bottleneck architecture of the ResNet50 residual blocks is replaced by an inverted bottleneck architecture, so as to reduce the information loss caused by dimension compression in the residual blocks.
7. The lane line detection method according to claim 1, wherein the grid cells are grid cells preset in the key point prediction network module.
CN202210919555.0A 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network Active CN115376082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210919555.0A CN115376082B (en) 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210919555.0A CN115376082B (en) 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network

Publications (2)

Publication Number Publication Date
CN115376082A CN115376082A (en) 2022-11-22
CN115376082B (en) 2023-06-09

Family

ID=84063059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210919555.0A Active CN115376082B (en) 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network

Country Status (1)

Country Link
CN (1) CN115376082B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115775377B (en) * 2022-11-25 2023-10-20 北京化工大学 Automatic driving lane line segmentation method with fusion of image and steering angle of steering wheel
TWI832591B (en) * 2022-11-30 2024-02-11 鴻海精密工業股份有限公司 Method for detecting lane line, computer device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313047A (en) * 2021-06-11 2021-08-27 中国科学技术大学 Lane line detection method and system based on lane structure prior
CN114120272A (en) * 2021-11-11 2022-03-01 东南大学 Multi-supervision intelligent lane line semantic segmentation method fusing edge detection

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345547B (en) * 2018-10-19 2021-08-24 天津天地伟业投资管理有限公司 Traffic lane line detection method and device based on deep learning multitask network
CN109829403B (en) * 2019-01-22 2020-10-16 淮阴工学院 Vehicle anti-collision early warning method and system based on deep learning
CN112966624A (en) * 2021-03-16 2021-06-15 北京主线科技有限公司 Lane line detection method and device, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313047A (en) * 2021-06-11 2021-08-27 中国科学技术大学 Lane line detection method and system based on lane structure prior
CN114120272A (en) * 2021-11-11 2022-03-01 东南大学 Multi-supervision intelligent lane line semantic segmentation method fusing edge detection

Also Published As

Publication number Publication date
CN115376082A (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN115376082B (en) Lane line detection method integrating traditional feature extraction and deep neural network
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
CN111695448B (en) Roadside vehicle identification method based on visual sensor
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN110060508B (en) Automatic ship detection method for inland river bridge area
CN111310593B (en) Ultra-fast lane line detection method based on structure perception
CN108876805B (en) End-to-end unsupervised scene passable area cognition and understanding method
CN113095152B (en) Regression-based lane line detection method and system
CN111783671A (en) Intelligent city ground parking space image processing method based on artificial intelligence and CIM
CN110610153A (en) Lane recognition method and system for automatic driving
CN113205107A (en) Vehicle type recognition method based on improved high-efficiency network
CN117197763A (en) Road crack detection method and system based on cross attention guide feature alignment network
CN113326846B (en) Rapid bridge apparent disease detection method based on machine vision
CN112528994A (en) Free-angle license plate detection method, license plate identification method and identification system
CN117115690A (en) Unmanned aerial vehicle traffic target detection method and system based on deep learning and shallow feature enhancement
CN112785610A (en) Lane line semantic segmentation method fusing low-level features
CN116630702A (en) Pavement adhesion coefficient prediction method based on semantic segmentation network
CN115147450B (en) Moving target detection method and detection device based on motion frame difference image
CN116071374A (en) Lane line instance segmentation method and system
CN113192018B (en) Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network
CN115512302A (en) Vehicle detection method and system based on improved YOLOX-s model
CN114926456A (en) Rail foreign matter detection method based on semi-automatic labeling and improved deep learning
CN113011338A (en) Lane line detection method and system
CN111612013A (en) Parking system based on deep neural network and work flow thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant