CN112927310B - Lane image segmentation method based on lightweight neural network

Info

Publication number
CN112927310B
Authority
CN
China
Prior art keywords
lane
neural network
image
lightweight neural
loss function
Prior art date
2021-01-29
Legal status
Active
Application number
CN202110128855.2A
Other languages
Chinese (zh)
Other versions
CN112927310A (en
Inventor
黄孝慈
吕泽正
曹文冠
舒方林
梁耀中
种玉祥
邢梦阳
杜嘉豪
张涛
Current Assignee
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date
2021-01-29
Filing date
2021-01-29
Publication date
2022-11-18
Application filed by Shanghai University of Engineering Science
Priority to CN202110128855.2A
Publication of CN112927310A
Application granted
Publication of CN112927310B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Abstract

The invention relates to a lane image segmentation method based on a lightweight neural network. The method comprises the following steps: selecting the CULane data set as the data set for lane segmentation training; performing feature extraction on the images of the data set with a lightweight neural network to obtain a preprocessed feature map; constructing a pyramid analysis module from the feature map and roughly segmenting the lane line image; and, based on the rough segmentation result, fusing a lane structure loss function to subdivide the lane line image and determine the lane area. Without losing the accuracy of the segmentation network, the method increases network running speed, strengthens the model's ability to reason over visual cues, and meets the practicality and efficiency requirements of automatic driving.

Description

Lane image segmentation method based on lightweight neural network
Technical Field
The invention relates to the technical field of road traffic image segmentation, and in particular to a lane image segmentation method based on a lightweight neural network.
Background
In natural scenes, accurate and efficient lane segmentation is an important basis for realizing automatic driving.
Conventional lane detection methods typically rely on visual information. They extract lane-line appearance features through image filtering to detect lanes, but in complex scenes where appearance features are missing (such as severe weather, dim or extreme lighting, and heavy occlusion), the positioning error is large and the accuracy of the detection result is difficult to guarantee.
With the development of deep neural networks, convolutional neural networks stacked layer by layer have been applied to lane detection with good results. However, deep learning methods that locate targets with bounding boxes are ill-suited to the slender shape of lane lines and can cause serious visualization errors. Fitting lane lines with a semantic segmentation method based on convolutional neural networks (CNNs) has therefore been considered. However, the CNN backbone extracts redundant feature information, which results in a huge number of model parameters and excessive computation cost, making the efficiency of the algorithm difficult to guarantee.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a lane image segmentation method based on a lightweight neural network.
The purpose of the invention can be realized by the following technical scheme:
a lane image segmentation method based on a lightweight neural network comprises the following steps:
step 1: selecting a specific data set as the data set for lane segmentation training;
step 2: performing feature extraction on the images in the data set using a lightweight neural network to obtain a preprocessed feature map;
step 3: constructing a pyramid analysis module from the preprocessed feature map and roughly segmenting the lane line image;
step 4: based on the rough segmentation result, fusing a lane structure loss function to subdivide the lane line image and finally determine the lane area.
Further, the step 2 comprises the following sub-steps:
step 201: replacing the convolution layer in the VGG-16 network structure with a lightweight Shadow convolution layer to construct a lightweight neural network Shadow-VGG-16;
step 202: subtracting the corresponding RGB mean value of the image from each pixel of the image in the data set, and simultaneously performing Gaussian noise reduction and smoothing to obtain a preprocessed image;
step 203: performing feature extraction on the preprocessed image using the lightweight neural network Shadow-VGG-16 to obtain a feature map.
Further, the step 3 comprises the following sub-steps:
step 301: adding 4 pooling layers of different sizes after the obtained feature map to construct the pyramid analysis module and generate feature regions of different scales;
step 302: after the feature regions of different scales, adjusting the number of channels using a convolution layer;
step 303: aggregating the original feature map and the channel-adjusted output feature maps with an attention mechanism to obtain a rough segmentation result.
Further, the pooling layer in step 301 includes pooling layers having sizes of 1x1, 2x2, 4x4 and 6x6, respectively.
Further, the number of the output feature maps in step 303 is 4.
Further, the convolutional layer in step 302 is a convolutional layer with a size of 1 × 1.
Further, the step 4 comprises the following sub-steps:
step 401: introducing a lane structure loss function to enable the lightweight neural network to learn the structure information of the lane;
step 402: and constructing an overall loss function, adopting a pixel segmentation loss function and an adjacent pixel second-order difference equation to jointly constrain a lane rough segmentation result, and obtaining a final subdivision result through convolution.
Further, the lane structure loss function in step 401, that is, the adjacent pixel second order difference equation, is mathematically described as:
$$L_{smooth}=\sum_{i}\sum_{j=1}^{h-2}\left\|P_{i,j}-2P_{i,j+1}+P_{i,j+2}\right\|_{1}$$
where $L_{smooth}$ is the adjacent-pixel second-order difference equation, i.e. the lane structure loss function; $P_{i,j}$, $P_{i,j+1}$ and $P_{i,j+2}$ are the probability values of the j-th, (j+1)-th and (j+2)-th pixel points in the i-th lane; $h$ is the length of the lane line; and $\|\cdot\|_{1}$ is the $L_{1}$ norm.
Further, the overall loss function in step 402 is mathematically described by the formula:
$$L_{total}=L_{seg}+L_{smooth}$$
where $L_{total}$ is the overall loss function, $L_{seg}$ is the pixel segmentation loss function, and $L_{smooth}$ is the adjacent-pixel second-order difference equation, i.e. the lane structure loss function.
Further, the specific data set in step 1 is a CULane data set.
Compared with the prior art, the invention has the following advantages:
(1) Compared with the prior art, the lane segmentation method provided by the invention adopts the lightweight neural network Shadow-VGG-16, which reduces the computation of the network and improves computational efficiency;
(2) The pyramid analysis module collects region features at different scales, improving the model's ability to reason over visual cues in complex scenes;
(3) A pixel segmentation loss function and a lane structure loss function jointly constrain the pixels of the lane segmentation image to obtain the final segmentation result. Without losing the accuracy of the segmentation network, the method increases network running speed, strengthens the model's ability to reason over visual cues, and meets the practicality and efficiency requirements of automatic driving.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart of a lane segmentation method based on a lightweight neural network according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of a lightweight Shadow convolutional layer designed according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a lightweight neural network Shadow-VGG-16 constructed by an embodiment of the invention;
FIG. 4 is a schematic structural diagram of 4 different-sized pooling layers according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an attention mechanism in an embodiment of the present invention;
FIG. 6 is an overall framework diagram of the lane segmentation method based on a lightweight neural network according to an embodiment of the present invention;
fig. 7 is an exemplary diagram of a fitting result of the lane segmentation method based on the lightweight neural network according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings or the orientations or positional relationships that the products of the present invention are conventionally placed in use, and are only used for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another, and are not to be construed as indicating or implying relative importance.
Furthermore, the terms "horizontal", "vertical" and the like do not imply that the components are required to be absolutely horizontal or pendant, but rather may be slightly inclined. For example, "horizontal" merely means that the direction is more horizontal than "vertical" and does not mean that the structure must be perfectly horizontal, but may be slightly inclined.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The basic flow of the present invention is shown in fig. 1, and the lane segmentation method based on the lightweight neural network provided by the embodiment of the present invention includes the following steps:
step 1: selecting a CULane data set as a data set for lane segmentation training;
according to the invention, a CULane data set is used as a training data set for training a lane segmentation method of a lightweight neural network, and the CULane comprises 133235 road images extracted from videos shot by 6 different vehicles for 58 hours. The training set comprises 88880 images, the validation set comprises 9675 images, and the test set comprises 34680 images. The training set mainly comprises marked sheltered lane lines and fuzzy invisible lane lines, the verification set and the test set are not marked, and the test set comprises 9 types of road scenes including normal, night, congestion, lane line loss, shadow, arrows, dim light, curves and crossroads.
Step 2: and performing feature extraction on the image of the data set by using the lightweight neural network to obtain a preprocessed feature map.
A convolutional neural network (CNN) stacked layer by layer has strong feature extraction capability, but redundant convolution operations occupy a large amount of memory, making the efficiency of the algorithm difficult to guarantee. To reduce the computation cost and increase the network's running speed, the invention designs a new Shadow convolution layer, whose structure is shown in Fig. 2. The convolution layers in VGG-16 are then replaced with Shadow layers to construct the lightweight neural network Shadow-VGG-16. The specific steps are as follows, with a code sketch of the Shadow layer after the list:
step 2.1: replacing partial convolution operation with depth separable convolution operation to construct a Shadow convolution layer with characteristic parameters far lower than common convolution;
step 2.2: replacing the convolution layer in the VGG-16 network structure with a lightweight Shadow convolution layer to construct a lightweight neural network Shadow-VGG-16;
step 2.3: subtracting the RGB mean value of each pixel of the image in the CULane data set, and simultaneously carrying out Gaussian noise reduction and smoothing treatment;
step 2.4: performing feature extraction on the preprocessed image using the first 13 convolution layers of Shadow-VGG-16 to obtain a feature map.
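The patent publishes no source code, so the following is a minimal PyTorch sketch of what the Shadow convolution layer of steps 2.1 and 2.2 could look like. It assumes a design in which an ordinary convolution produces part of the output channels and a cheap depthwise (depth separable) convolution generates the remaining "shadow" feature maps; the class name ShadowConv and the ratio parameter are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ShadowConv(nn.Module):
    """Sketch of a lightweight Shadow convolution layer (assumed design).

    A primary convolution computes only 1/ratio of the output channels;
    the rest are generated from them by a cheap depthwise convolution,
    replacing part of the ordinary convolution as in step 2.1.
    Assumes out_ch is divisible by ratio.
    """

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, ratio: int = 2):
        super().__init__()
        primary_ch = out_ch // ratio       # channels from the ordinary convolution
        shadow_ch = out_ch - primary_ch    # channels from the cheap depthwise convolution
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.ReLU(inplace=True),
        )
        self.shadow = nn.Sequential(
            nn.Conv2d(primary_ch, shadow_ch, kernel_size,
                      padding=kernel_size // 2, groups=primary_ch, bias=False),
            nn.BatchNorm2d(shadow_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        p = self.primary(x)               # ordinary feature maps
        s = self.shadow(p)                # cheap "shadow" feature maps
        return torch.cat([p, s], dim=1)   # out_ch channels in total
```

Under this assumption, replacing each 3×3 convolution of VGG-16 with such a layer (step 2.2) roughly halves the multiply-accumulate count per layer for ratio=2, which is where the reduction in computation cost comes from.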
the context information in the scene is fully utilized, and the lane detection precision can be effectively improved. The size of the receptive field is the key to obtaining contextual information. But for lane detection it still cannot contain all the necessary information. In order to improve the reasoning capability of the model for lane identification in a complex scene, the invention introduces a multilevel global information aggregation module pyramid analysis module. The structure specifically comprises two parts: 1) Acquiring regional characteristics of different scales by using 4 pooling layers of different sizes, wherein the structure is shown in FIG. 4; 2) The original features were aggregated with the 4 region features by the attention mechanism, and the structure is shown in fig. 5. The specific improvement steps are as follows:
step 3.1: adding 4 pooling layers of different sizes (1 × 1, 2 × 2, 4 × 4 and 6 × 6) after the obtained feature map to generate feature regions of different scales;
step 3.2: after obtaining the feature regions of different scales, adjusting the number of channels with a 1 × 1 convolution layer;
step 3.3: compressing the channels with global average pooling, learning an attention mechanism through two fully connected layers, and using this attention to aggregate the original feature map with the 4 channel-adjusted output feature maps to obtain a rough segmentation result.
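As one concrete reading of steps 3.1 to 3.3, the sketch below implements the four pooling branches with adaptive average pooling and a squeeze-and-excitation-style attention (global average pooling followed by two fully connected layers), matching the verbal description above; the branch width of 64 channels and the reduction factor of 4 are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidAnalysis(nn.Module):
    """Sketch of the pyramid analysis module (implementation details assumed)."""

    def __init__(self, in_ch: int, branch_ch: int = 64, pool_sizes=(1, 2, 4, 6)):
        super().__init__()
        # Steps 3.1/3.2: one pooling branch per scale, then a 1x1 conv to adjust channels
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(s),
                nn.Conv2d(in_ch, branch_ch, kernel_size=1, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for s in pool_sizes
        ])
        fused_ch = in_ch + branch_ch * len(pool_sizes)
        # Step 3.3: attention learned from globally pooled channels via two FC layers
        self.fc = nn.Sequential(
            nn.Linear(fused_ch, fused_ch // 4),
            nn.ReLU(inplace=True),
            nn.Linear(fused_ch // 4, fused_ch),
            nn.Sigmoid(),
        )

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample each branch back to the input resolution before aggregation
        feats = [x] + [
            F.interpolate(b(x), size=(h, w), mode="bilinear", align_corners=False)
            for b in self.branches
        ]
        fused = torch.cat(feats, dim=1)                 # original map + 4 region maps
        att = self.fc(fused.mean(dim=(2, 3)))           # global average pooling -> weights
        return fused * att.unsqueeze(-1).unsqueeze(-1)  # channel-wise reweighting
```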
typically, lane lines present structural constraints due to the requirements of the roadway design. The invention aims to further utilize the position relation among lane pixels, learn the structure information of the lane, introduce a lane structure loss function to restrict the distribution of adjacent pixels and realize the continuous and smooth prediction of the lane. The specific improvement steps are as follows:
step 4.1: introducing a lane structure loss function to encourage the model to learn the structure information of the lane, wherein the specific form is a second-order difference constraint equation of adjacent pixels:
$$L_{smooth}=\sum_{i}\sum_{j=1}^{h-2}\left\|P_{i,j}-2P_{i,j+1}+P_{i,j+2}\right\|_{1}$$
where $L_{smooth}$ is the adjacent-pixel second-order difference equation, i.e. the lane structure loss function; $P_{i,j}$, $P_{i,j+1}$ and $P_{i,j+2}$ are the probability values of the j-th, (j+1)-th and (j+2)-th pixel points in the i-th lane; $h$ is the length of the lane line; and $\|\cdot\|_{1}$ is the $L_{1}$ norm.
step 4.2: constructing the overall loss function, jointly constraining the rough lane segmentation result with the pixel segmentation loss function and the adjacent-pixel second-order difference equation, and obtaining the final segmentation result through one convolution.
The overall loss function is:
$$L_{total}=L_{seg}+L_{smooth}$$
where $L_{total}$ is the overall loss function, $L_{seg}$ is the pixel segmentation loss function, and $L_{smooth}$ is the adjacent-pixel second-order difference equation, i.e. the lane structure loss function.
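In code, the two loss terms of steps 4.1 and 4.2 can be written compactly. The sketch below is a minimal version assuming a probability tensor of shape (num_lanes, h) for the structure term, and an ordinary cross-entropy as the pixel segmentation loss L_seg, whose exact form the patent does not specify.

```python
import torch
import torch.nn as nn

def lane_structure_loss(probs: torch.Tensor) -> torch.Tensor:
    """L_smooth: L1 norm of the second-order difference of adjacent pixels.

    probs[i, j] holds P_{i,j}, the probability of the j-th pixel point on the
    i-th lane; the (num_lanes, h) tensor layout is assumed for this sketch.
    """
    # (P_{i,j} - P_{i,j+1}) - (P_{i,j+1} - P_{i,j+2}) = P_{i,j} - 2*P_{i,j+1} + P_{i,j+2}
    second_diff = probs[:, :-2] - 2.0 * probs[:, 1:-1] + probs[:, 2:]
    return second_diff.abs().sum()

seg_criterion = nn.CrossEntropyLoss()  # assumed choice for the pixel segmentation loss

def total_loss(logits, labels, lane_probs):
    """L_total = L_seg + L_smooth, jointly constraining the rough segmentation."""
    return seg_criterion(logits, labels) + lane_structure_loss(lane_probs)
```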
FIG. 6 is an overall framework diagram of the lane segmentation method based on a lightweight neural network according to an embodiment of the present invention, corresponding to the steps of the method.
Fig. 7 is an example of the fitting result of the lane segmentation method based on a lightweight neural network according to an embodiment of the present invention; the left side of the figure is the original image, and the right side is an example of the fitting result after lane segmentation.
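Tying the sketches together, the overall flow of Fig. 6 could be wired as follows. The backbone argument stands for the Shadow-VGG-16 feature extractor, PyramidAnalysis is the module sketched earlier, and the channel counts and the default of 4 lanes plus background are assumptions rather than values given in the patent.

```python
import torch.nn as nn

class LaneSegNet(nn.Module):
    """Sketch of the overall network: backbone -> pyramid module -> 1x1 head."""

    def __init__(self, backbone: nn.Module, backbone_out_ch: int, num_classes: int = 5):
        super().__init__()
        self.backbone = backbone                         # step 2: Shadow-VGG-16 features
        self.pyramid = PyramidAnalysis(backbone_out_ch)  # step 3: rough segmentation
        fused_ch = backbone_out_ch + 64 * 4              # original map + 4 branch maps
        self.head = nn.Conv2d(fused_ch, num_classes, 1)  # step 4: one final convolution

    def forward(self, x):
        feat = self.backbone(x)      # feature extraction
        rough = self.pyramid(feat)   # multiscale aggregation with attention
        return self.head(rough)      # per-pixel lane scores
```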
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A lane image segmentation method based on a lightweight neural network is characterized by comprising the following steps:
step 1: selecting a specific data set as the data set for lane segmentation training;
step 2: performing feature extraction on the images in the data set using a lightweight neural network to obtain a preprocessed feature map;
step 3: constructing a pyramid analysis module from the preprocessed feature map and roughly segmenting the lane line image;
step 4: based on the rough segmentation result, fusing a lane structure loss function to subdivide the lane line image and finally determine the lane area;
the step 3 comprises the following sub-steps:
step 301: adding 4 pooling layers of different sizes after the obtained feature map to construct the pyramid analysis module and generate feature regions of different scales;
step 302: after the feature regions of different scales, adjusting the number of channels using a convolution layer;
step 303: aggregating the original feature map and the channel-adjusted output feature maps with an attention mechanism to obtain a rough segmentation result;
the step 4 comprises the following sub-steps:
step 401: introducing a lane structure loss function to enable the lightweight neural network to learn the structure information of the lane;
step 402: constructing an overall loss function, adopting a pixel segmentation loss function and an adjacent pixel second-order difference equation to jointly constrain a lane rough segmentation result, and obtaining a final subdivision result through convolution;
the lane structure loss function in step 401, that is, the adjacent pixel second order difference equation, has the mathematical description formula:
$$L_{smooth}=\sum_{i}\sum_{j=1}^{h-2}\left\|P_{i,j}-2P_{i,j+1}+P_{i,j+2}\right\|_{1}$$
where $L_{smooth}$ is the adjacent-pixel second-order difference equation, i.e. the lane structure loss function; $P_{i,j}$, $P_{i,j+1}$ and $P_{i,j+2}$ are the probability values of the j-th, (j+1)-th and (j+2)-th pixel points in the i-th lane; $h$ is the length of the lane line; and $\|\cdot\|_{1}$ is the $L_{1}$ norm.
2. The method for segmenting the lane image based on the lightweight neural network according to claim 1, wherein the step 2 comprises the following sub-steps:
step 201: based on the VGG-16 network structure, replacing convolution layers in the VGG-16 network structure with lightweight Shadow convolution layers, specifically replacing partial convolution operation with depth separable convolution operation, and constructing the Shadow convolution layers with characteristic parameters far lower than those of ordinary convolution, thereby constructing the lightweight neural network Shadow-VGG-16;
step 202: subtracting the corresponding RGB mean value of the image from each pixel of the image in the data set, and simultaneously performing Gaussian noise reduction and smoothing to obtain a preprocessed image;
step 203: performing feature extraction on the preprocessed image using the lightweight neural network Shadow-VGG-16 to obtain a feature map.
3. The method for segmenting a lane image based on a lightweight neural network as claimed in claim 1, wherein said pooling layer in step 301 comprises pooling layers with sizes of 1x1, 2x2, 4x4 and 6x6 respectively.
4. The method for segmenting the lane image based on the lightweight neural network as claimed in claim 1, wherein the number of the output feature maps in the step 303 is 4.
5. The method for segmenting a lane image based on a lightweight neural network as claimed in claim 1, wherein the convolutional layer in step 302 is a convolutional layer with a size of 1x 1.
6. The method for segmenting a lane image based on a lightweight neural network as claimed in claim 1, wherein the overall loss function in step 402 is mathematically described by the formula:
$$L_{total}=L_{seg}+L_{smooth}$$
where $L_{total}$ is the overall loss function, $L_{seg}$ is the pixel segmentation loss function, and $L_{smooth}$ is the adjacent-pixel second-order difference equation, i.e. the lane structure loss function.
7. The method for segmenting the lane image based on the lightweight neural network as set forth in claim 1, wherein the specific data set in the step 1 is a CULane data set.
CN202110128855.2A, filed 2021-01-29 (priority 2021-01-29): Lane image segmentation method based on lightweight neural network. Status: Active. Granted as CN112927310B.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110128855.2A | 2021-01-29 | 2021-01-29 | Lane image segmentation method based on lightweight neural network (granted as CN112927310B)


Publications (2)

Publication Number | Publication Date
CN112927310A (en) | 2021-06-08
CN112927310B (en) | 2022-11-18

Family

ID=76168737

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202110128855.2A | Lane image segmentation method based on lightweight neural network | 2021-01-29 | 2021-01-29 | Active (granted as CN112927310B)

Country Status (1)

Country | Link
CN | CN112927310B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927518B (en) * 2014-04-14 2017-07-07 中国华戎控股有限公司 A kind of face feature extraction method for human face analysis system
CN109902600B (en) * 2019-02-01 2020-10-27 清华大学 Road area detection method
CN111339918B (en) * 2020-02-24 2023-09-19 深圳市商汤科技有限公司 Image processing method, device, computer equipment and storage medium
CN111382686B (en) * 2020-03-04 2023-03-24 上海海事大学 Lane line detection method based on semi-supervised generation confrontation network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215034A (en) * 2018-07-06 2019-01-15 成都图必优科技有限公司 A kind of Weakly supervised image, semantic dividing method for covering pond based on spatial pyramid
CN110097894A (en) * 2019-05-21 2019-08-06 焦点科技股份有限公司 A kind of method and system of speech emotion recognition end to end
CN110175613A (en) * 2019-06-03 2019-08-27 常熟理工学院 Street view image semantic segmentation method based on Analysis On Multi-scale Features and codec models
CN111310593A (en) * 2020-01-20 2020-06-19 浙江大学 Ultra-fast lane line detection method based on structure perception
CN111582201A (en) * 2020-05-12 2020-08-25 重庆理工大学 Lane line detection system based on geometric attention perception
CN112101363A (en) * 2020-09-02 2020-12-18 河海大学 Full convolution semantic segmentation system and method based on cavity residual error and attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Ultra Fast Structure-aware Deep Lane Detection";Zequn Qin等;《arXiv》;20200805;1-16页 *
"基于全卷积神经网络的车道线检测";王帅帅等;《数字制造科学》;20200630;第18卷(第2期);122-127页 *

Also Published As

Publication Number | Publication Date
CN112927310A (en) | 2021-06-08


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant