CN112016463A - Deep learning-based lane line detection method - Google Patents

Deep learning-based lane line detection method Download PDF

Info

Publication number
CN112016463A
CN112016463A CN202010885430.1A
Authority
CN
China
Prior art keywords
lane
detection data
data set
detection
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010885430.1A
Other languages
Chinese (zh)
Inventor
杨海东
杨航
黄坤山
彭文瑜
林玉山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Original Assignee
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute, Foshan Guangdong University CNC Equipment Technology Development Co. Ltd filed Critical Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Priority to CN202010885430.1A priority Critical patent/CN112016463A/en
Publication of CN112016463A publication Critical patent/CN112016463A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks


Abstract

The invention discloses a lane line detection method based on deep learning, which comprises the following steps: S01: acquiring a lane detection data set, the lane detection data set comprising a plurality of images containing lanes; S02: building a training model and setting a loss function and constraint parameters, the training model comprising a convolutional layer, a pooling layer, residual blocks and a fully connected layer connected in sequence, the loss function being used to constrain the smoothness and rigidity of the lane lines; S03: training the model on the lane detection data set and obtaining a converged detection model after multiple iterations; S04: deploying the detection model with a vehicle-mounted camera to obtain lane detection results. The lane line detection method based on deep learning provided by the invention achieves rapid detection of lane lines while ensuring accuracy.

Description

Deep learning-based lane line detection method
Technical Field
The invention relates to the technical field of deep learning, in particular to a lane line detection method based on deep learning.
Background
With the development of computer hardware and computer vision technology, unmanned driving based on computer vision has become possible. Lane line detection is an important component of an unmanned driving system and must keep running in real time while the vehicle is moving. Moreover, a vehicle usually carries several cameras, so the lane line detection algorithm needs to be simple and efficient.
In the traditional method, lane line detection is treated as a segmentation problem: all pixels in the image are classified, and the lane lines are then found from the classification result. The steps and operations of segmentation and classification make the model very complex and slow. Moreover, because segmentation does not exploit global information, the detection effect is not ideal when lane lines are occluded or lighting conditions are poor. Another problem with segmentation is the receptive field: since segmentation is usually obtained by full convolution, the receptive field of each pixel is limited, and lane lines cannot be determined accurately and quickly from such limited features. In a convolutional neural network, the receptive field is defined as the size of the region of the input image that each pixel of a layer's output feature map is mapped from; in plain terms, a point on the feature map corresponds to a region on the input image.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a lane line detection method based on deep learning, which can realize the rapid detection of the lane line while ensuring the accuracy.
In order to achieve the above purpose, the invention adopts the following technical scheme: a lane line detection method based on deep learning comprises the following steps:
s01: acquiring a lane detection data set; the lane detection data set comprises a plurality of images containing lanes;
s02: building a training model, and setting a loss function and constraint parameters; the training model comprises a convolutional layer, a pooling layer, residual blocks and a fully connected layer connected in sequence; the loss function is used to constrain the smoothness and rigidity of the lane lines;
s03: training the training model on the lane detection data set, and obtaining a converged detection model after multiple iterations;
s04: deploying the detection model with a vehicle-mounted camera to obtain lane detection results.
Further, the lane detection data set in step S01 includes a normal detection data set in which the lane lines of the images are clear, and an extreme detection data set in which the lane lines of the images are blocked.
Further, the normal detection data set and the extreme detection data set are enhanced to obtain an enhanced detection data set.
Further, the enhancement comprises rotating, segmenting, and translating the images in the normal detection data set and the extreme detection data set in the horizontal or/and vertical direction to obtain enhanced images; the lane lines in each enhanced image are extended to the image boundary to obtain the enhanced detection data set; the enhanced detection data set, the normal detection data set and the extreme detection data set together form the lane detection data set.
Further, the training model comprises a convolution layer, a maximum pooling layer, 4 residual blocks and a full connection layer which are connected in sequence.
Further, the loss function is L_total = L_cls + α·L_str, wherein L_cls indicates the deviation of the predicted position from the actual position; L_str = L_sim + λ·L_shp, where L_sim represents the smoothness loss function, L_shp represents the rigidity loss function, and α and λ represent the constraint parameters.
Further, in step S02, for each image in the lane detection data set, a group of row anchors is defined: for an image of resolution H × W, the image is divided into h rows, and each row anchor is divided into w grids; the maximum number of lane lines in the image is C and the global feature is X. The smoothness loss function is then

L_sim = Σ_{i=1}^{C} Σ_{j=1}^{h-1} ||P_{i,j} - P_{i,j+1}||_1

wherein P_{i,j} = f_{i,j}(X), and f_{i,j} is the classifier that judges which of the grids on the j-th row lies on the i-th lane line.
Further, in step S02

L_shp = Σ_{i=1}^{C} Σ_{j=1}^{h-2} ||(Loc_{i,j} - Loc_{i,j+1}) - (Loc_{i,j+1} - Loc_{i,j+2})||_1

wherein Loc_{i,j} denotes the predicted position of the i-th lane line on the j-th row, the expression being the second-order difference between adjacent rows.
Further, in step S03, the set parameters α and λ and the learning rate are applied to the training model, the images of the lane detection data set are input into the training model one by one for training, and the detection model is obtained after the training model converges.
The invention has the beneficial effects that: in order to avoid the huge model, slow computation and limited receptive field of the traditional segmentation method, the invention treats detection as a row selection problem based on global image features: the image is divided into image blocks, the blocks in each row are classified, and lane line detection is thereby treated as a classification problem, which greatly reduces model complexity and computation time. Meanwhile, the invention uses a loss function designed from lane line priors to add geometric constraints to the training network; the receptive field is the size of the whole image, the features are distinct, and lane lines can be located accurately even under poor lighting and occlusion.
Drawings
FIG. 1 is a flow chart of the process of the present invention;
FIG. 2 is a schematic diagram of the overall structure of the training model of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and the detailed description below:
referring to fig. 1, a lane line detection method based on deep learning includes the following steps:
s01: a lane detection data set is acquired, wherein the lane detection data set includes a plurality of lane detection data, each lane detection data corresponds to an image containing a lane, and the position and number of lane lines in the image are known and determined.
The lane detection data set comprises a normal detection data set and an extreme detection data set. The lane lines of the images in the normal detection data set are clear: the images are taken on ordinary highways under good lighting conditions, with clear, unoccluded lane lines; this data set mainly captures the features of lane lines under general conditions.
The lane lines of the images in the extreme detection data set are occluded: these images are mainly acquired under poor lighting, on congested and winding roads, at night, and in extreme weather such as heavy rain, where lane lines are not easily distinguished.
In order to improve the generalization ability of the system and prevent overfitting, the normal detection data set and the extreme detection data set are enhanced to obtain an enhanced detection data set. The enhancement comprises rotating, segmenting, and translating the images taken under normal and extreme conditions in the horizontal or/and vertical direction to obtain enhanced images. Because these operations may occlude or shorten some lane lines, the lane lines are extended to the image boundary to keep their structure coherent, yielding the enhanced detection data set; the enhanced detection data set, the normal detection data set and the extreme detection data set form the lane detection data set. In actual training, the lane detection data set may be divided into two parts: one part, called the training data set, is used to train the model, and the other part, called the test data set, is used to verify the model after training.
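The lane-line extension step above can be sketched as a simple linear extrapolation. The helper below is a hypothetical illustration (the function name and the restriction to the bottom image boundary are assumptions, not part of the patent):

```python
def extend_to_bottom(p1, p2, img_h):
    """Linearly extrapolate the lane segment p1 -> p2 (points are (x, y),
    with y growing downward) until it reaches the bottom image boundary
    y = img_h - 1, as the enhancement step extends shortened lane lines.
    Returns the new endpoint on that boundary."""
    (x1, y1), (x2, y2) = p1, p2
    if y2 == y1:                      # horizontal segment: nothing to extend
        return (x2, y2)
    slope = (x2 - x1) / (y2 - y1)     # dx per unit dy
    y_edge = img_h - 1
    x_edge = x1 + slope * (y_edge - y1)
    return (x_edge, y_edge)

# A lane marking cropped short by a translation, extended back to the
# bottom of a 720-pixel-high image:
print(extend_to_bottom((300, 400), (350, 500), 720))  # (459.5, 719)
```

A full implementation would extend toward whichever boundary the line exits through (left, right, or bottom); the one-sided case shown here is the common one for forward-facing road images.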
S02: building a training model, and setting a loss function and a constraint parameter; the training model comprises a convolution layer, a pooling layer, a residual block and a full-connection layer which are connected in sequence; the loss function is used to limit the smoothness and rigidity of the lane line; specifically, the number of residual blocks may be 4.
The training model is a deep learning network. The model as built is only a network framework; its specific parameters are not yet determined. Accurate model parameters can only be determined by repeated training and evaluation in the subsequent steps, and the network framework with its parameters determined is the detection model obtained after training.
The overall structure of the training model is shown in fig. 2: a 7 × 7 convolutional layer, a maximum pooling layer, 4 residual blocks, and a fully connected layer. The lane detection data set comprises a plurality of images containing lanes, and for each image a group of (several) row anchors is predefined: if the resolution of the image is H × W, the image is divided into h rows, where h is far smaller than H, and each row anchor is divided into w grids. A lane line is thus defined as a set of grids in the image, and the detection task changes from traditional segmentation of the image to classification of grids. Let the maximum number of lane lines in the image be C and the global feature be X, and let f_{i,j} be the classifier that judges which of the grids on the j-th row lies on the i-th lane line; the output of the backbone network is a probability vector over the grids of each row anchor indicating whether they lie on the corresponding lane line: P_{i,j} = f_{i,j}(X).
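The row-anchor formulation above maps every image point into an (anchor row, grid cell) pair. A minimal sketch of that mapping, with a hypothetical helper name and example sizes (720 × 1280 image, 18 rows, 100 grids) that are assumptions for illustration:

```python
def to_row_grid(x, y, H, W, h, w):
    """Map an image point (x, y) to its (row anchor, grid cell) index
    when an H x W image is divided into h row anchors of w grids each,
    so a lane line becomes a set of grid indices, one per row."""
    row = int(y * h / H)   # which of the h row anchors
    col = int(x * w / W)   # which of the w grids in that row
    return row, col

# 720 x 1280 image, h = 18 row anchors, w = 100 grids per row:
print(to_row_grid(640, 360, 720, 1280, 18, 100))  # (9, 50)
```

This makes the complexity reduction concrete: instead of classifying H × W pixels, the network makes h × (w + 1) classifications per lane (the extra class marking "no lane on this row").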
When each image in the lane detection data set is used to train the model, the partial image covered by each predefined row anchor is input into the training model as a training sample. Whether this partial image contains a lane line is known, and is encoded as a label: 1 if it contains a lane line, 0 if it does not. The partial image and its label (0 or 1) form a training pair.
When a partial image enters the network, it first passes through the 7 × 7 convolutional layer, which outputs 64 feature maps I at half the resolution of the original image; these pass through the maximum pooling layer, which outputs feature maps II at half the resolution of feature maps I. Feature maps II then enter the 4 residual blocks, where each residual block halves the feature-map resolution and doubles the channel dimension. The feature map output by the last residual block is flattened into a one-dimensional vector by the fully connected layer; this vector indicates whether the partial image contains a lane line, consistent with the 0-or-1 label in the training pair. The training process can therefore be understood as: the partial image (the region covered by the row anchor) is the input of the training model, the lane-line label is the output, and the model parameters are determined from these known inputs and outputs.
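The resolution bookkeeping in the paragraph above can be traced numerically. The sketch below only computes feature-map shapes from the stated rules (stride-2 7 × 7 conv with 64 outputs, a halving max pool, 4 residual blocks that each halve resolution and double channels); the 512 × 512 input size is an assumption for illustration:

```python
def backbone_resolutions(H, W):
    """Trace feature-map (height, width, channels) through the described
    backbone: a 7x7 conv that halves the resolution and yields 64 maps,
    a max-pooling layer that halves it again, then 4 residual blocks
    that each halve the resolution while doubling the channels."""
    sizes = []
    h, w, ch = H // 2, W // 2, 64          # feature maps I (after 7x7 conv)
    sizes.append((h, w, ch))
    h, w = h // 2, w // 2                  # feature maps II (after max pool)
    sizes.append((h, w, ch))
    for _ in range(4):                     # the 4 residual blocks
        h, w, ch = h // 2, w // 2, ch * 2
        sizes.append((h, w, ch))
    return sizes

for shape in backbone_resolutions(512, 512):
    print(shape)
```

For a 512 × 512 input this ends at an 8 × 8 × 1024 map, which the fully connected layer then flattens into the output vector.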
During the training process, the loss function helps to optimize the parameters of the training model. The goal is to minimize the loss of the training model by optimizing its parameters (weights): the training model produces a predicted value, the loss function measures the deviation between the predicted value and the target (actual) value, and a gradient descent method is then used to optimize the network weights to minimize that loss. This is how the neural network is trained.
The deviation of the predicted position from the actual position is represented by a cross entropy loss function:

L_cls = Σ_{i=1}^{C} Σ_{j=1}^{h} L_CE(P_{i,j}, T_{i,j})
wherein T_{i,j} represents the true position distribution of the lane lines and L_CE denotes the cross entropy function. In addition to the classification loss, since there is direct position information along the row direction, this information can be used to add prior constraints to the lane lines; for this purpose two loss functions are defined to limit the smoothness and rigidity of the lane lines. The L1 norm of the classifications on adjacent rows is defined as smoothness, since the grids of a lane line on adjacent rows should be close and vary smoothly:

L_sim = Σ_{i=1}^{C} Σ_{j=1}^{h-1} ||P_{i,j} - P_{i,j+1}||_1
Meanwhile, the second-order difference between adjacent rows is used to describe the shape of the lane line. Since lane lines are mostly straight, their second-order difference is 0, so constraining the deviation of the second-order difference from 0 makes the predicted lane lines straighter during optimization:

L_shp = Σ_{i=1}^{C} Σ_{j=1}^{h-2} ||(Loc_{i,j} - Loc_{i,j+1}) - (Loc_{i,j+1} - Loc_{i,j+2})||_1

wherein Loc_{i,j} denotes the predicted position of the i-th lane line on the j-th row.
the constraint parameter lambda is used to reduce the limit on rigidity and the influence on the curve, and the overall structural loss function is Lstr=Lsim+λLshp. While limiting L by a constraint parameter alphastrThe final loss function is therefore: l istotal=Lcls+αLstr
The loss function can accelerate the convergence of the training model and quickly and accurately obtain the detection model.
S03: and training the training model by adopting a lane detection data set, and obtaining a detection model after iteration for multiple times.
And substituting the set parameters alpha and lambda and the learning rate of the training model into the training model, inputting the lane detection data sets into the training model one by one for training, and obtaining the detection model after the training model converges.
For example, in an actual training run, each row anchor may be divided into 100 grids, the batch size (number of samples per iteration) set to 32, and the constraint parameters α and λ set to 0.5 and 0.3 respectively. After the constraint parameters are set, the model is trained on the data set for 100 epochs with the Adam optimizer, with a learning rate of 1e-4 for the first 80 epochs and a learning rate decreasing from 1e-5 to 1e-6 over the last 20 epochs; the final model is obtained after convergence.
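The training schedule just described can be written as a small function. The geometric (log-linear) shape of the decay over the last 20 epochs is an assumption; the patent only states the endpoints 1e-5 and 1e-6:

```python
def learning_rate(epoch, total=100, warm=80, base=1e-4, hi=1e-5, lo=1e-6):
    """Learning-rate schedule sketched from the example run: a constant
    1e-4 for the first 80 epochs, then a decay from 1e-5 down to 1e-6
    over the last 20 epochs (geometric interpolation is an assumption)."""
    if epoch < warm:
        return base
    frac = (epoch - warm) / (total - warm - 1)   # 0 .. 1 over the last epochs
    return hi * (lo / hi) ** frac                # geometric interpolation

print(learning_rate(0))    # 1e-4
print(learning_rate(80))   # 1e-5
print(learning_rate(99))   # ~1e-6
```

Dropping the rate by an order of magnitude after the warm phase, then decaying further, is a common way to let Adam settle into a converged minimum, matching the "final model after convergence" step.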
The final detection model can be evaluated on the test data set: the images in the test data set are fed into the detection model to obtain probability values, and these are compared with the true labels of whether the corresponding images contain lane lines, verifying the accuracy of the detection model.
S04: deploying the detection model with the vehicle-mounted camera to obtain lane detection results. Applying the detection model works as follows: during training, the model parameters were determined from known inputs and outputs; with these parameters fixed in the detection model, the positions and number of lane lines can be determined simply by inputting images captured by the vehicle-mounted camera.
The invention treats detection as a row selection problem based on global image features: the image is divided into grids (image blocks), the blocks in each row are classified, and lane line detection is treated as a classification problem, greatly reducing computation time. Meanwhile, the invention uses a loss function designed from lane line priors to add geometric constraints to the training network; the receptive field is the size of the whole image, and lane lines can be located accurately under poor lighting and occlusion.
Various other modifications and changes may be made by those skilled in the art based on the above-described technical solutions and concepts, and all such modifications and changes should fall within the scope of the claims of the present invention.

Claims (9)

1. A lane line detection method based on deep learning is characterized by comprising the following steps:
s01: acquiring a lane detection data set; the lane detection dataset comprises a plurality of images containing lanes;
s02: building a training model, and setting a loss function and a constraint parameter; the training model comprises a convolution layer, a pooling layer, a residual block and a full-connection layer which are connected in sequence; the loss function is used for limiting the smoothness and rigidity of the lane line;
s03: training the training model by adopting a lane detection data set, and obtaining a converged detection model after iteration for multiple times;
s04: and placing the detection model in a vehicle-mounted camera to obtain a lane detection result.
2. The method according to claim 1, wherein the lane detection data set in step S01 includes a normal detection data set in which the lane lines of the images are clear, and an extreme detection data set in which the lane lines of the images are blocked.
3. The method according to claim 2, wherein the normal detection data set and the extreme detection data set are enhanced to obtain an enhanced detection data set.
4. The method according to claim 3, wherein the enhancing comprises rotating, segmenting and translating images in the normal detection data set and the extreme detection data set in the horizontal or/and vertical directions to obtain enhanced images, extending lane lines in each image in the enhanced detection data set to image boundaries to obtain enhanced detection data sets, and the enhanced detection data sets, the normal detection data set and the extreme detection data set constitute lane detection data sets.
5. The method according to claim 1, wherein the training model comprises a convolutional layer, a max pooling layer, 4 residual blocks and a full connection layer which are connected in sequence.
6. The method according to claim 5, wherein the loss function is L_total = L_cls + α·L_str, wherein L_cls indicates the deviation of the predicted position from the actual position; L_str = L_sim + λ·L_shp, L_sim represents the smoothness loss function, L_shp represents the rigidity loss function, and α and λ represent the constraint parameters.
7. The method for detecting lane lines based on deep learning of claim 6, wherein in step S02, for each image in the lane detection data set, a group of row anchors is defined; the image resolution is H × W, the image is divided into h rows, and each row anchor is divided into w grids; the maximum number of lane lines of the image is C and the global feature is X, and the smoothness loss function is then

L_sim = Σ_{i=1}^{C} Σ_{j=1}^{h-1} ||P_{i,j} - P_{i,j+1}||_1

wherein P_{i,j} = f_{i,j}(X), and f_{i,j} is the classifier judging which of the grids on the j-th row lies on the i-th lane line.
8. The method for detecting lane lines based on deep learning of claim 7, wherein in step S02

L_shp = Σ_{i=1}^{C} Σ_{j=1}^{h-2} ||(Loc_{i,j} - Loc_{i,j+1}) - (Loc_{i,j+1} - Loc_{i,j+2})||_1

wherein Loc_{i,j} denotes the predicted position of the i-th lane line on the j-th row, the expression being the second-order difference between adjacent rows.
9. The method as claimed in claim 8, wherein in step S03, the set parameters α and λ and the learning rate of the training model are substituted into the training model, the lane detection data sets are input into the training model one by one for training, and the detection model is obtained after the training model converges.
CN202010885430.1A 2020-08-28 2020-08-28 Deep learning-based lane line detection method Pending CN112016463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010885430.1A CN112016463A (en) 2020-08-28 2020-08-28 Deep learning-based lane line detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010885430.1A CN112016463A (en) 2020-08-28 2020-08-28 Deep learning-based lane line detection method

Publications (1)

Publication Number Publication Date
CN112016463A (en) 2020-12-01

Family

ID=73502865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010885430.1A Pending CN112016463A (en) 2020-08-28 2020-08-28 Deep learning-based lane line detection method

Country Status (1)

Country Link
CN (1) CN112016463A (en)


Citations (3)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046723A (en) * 2019-10-17 2020-04-21 安徽清新互联信息科技有限公司 Deep learning-based lane line detection method
CN111242037A (en) * 2020-01-15 2020-06-05 华南理工大学 Lane line detection method based on structural information
CN111310593A (en) * 2020-01-20 2020-06-19 浙江大学 Ultra-fast lane line detection method based on structure perception

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZEQUN QIN et al.: "Ultra Fast Structure-aware Deep Lane Detection", arXiv, pages 1-5 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112284400B (en) * 2020-12-24 2021-03-19 腾讯科技(深圳)有限公司 Vehicle positioning method and device, electronic equipment and computer readable storage medium
CN112284400A (en) * 2020-12-24 2021-01-29 腾讯科技(深圳)有限公司 Vehicle positioning method and device, electronic equipment and computer readable storage medium
CN112396044A (en) * 2021-01-21 2021-02-23 国汽智控(北京)科技有限公司 Method for training lane line attribute information detection model and detecting lane line attribute information
CN112396044B (en) * 2021-01-21 2021-04-27 国汽智控(北京)科技有限公司 Method for training lane line attribute information detection model and detecting lane line attribute information
CN112686233A (en) * 2021-03-22 2021-04-20 广州赛特智能科技有限公司 Lane line identification method and device based on lightweight edge calculation
CN112686233B (en) * 2021-03-22 2021-06-18 广州赛特智能科技有限公司 Lane line identification method and device based on lightweight edge calculation
CN113313047B (en) * 2021-06-11 2022-09-06 中国科学技术大学 Lane line detection method and system based on lane structure prior
CN113361447A (en) * 2021-06-23 2021-09-07 中国科学技术大学 Lane line detection method and system based on sliding window self-attention mechanism
CN113936266A (en) * 2021-10-19 2022-01-14 西安电子科技大学 Deep learning-based lane line detection method
CN114022863A (en) * 2021-10-28 2022-02-08 广东工业大学 Deep learning-based lane line detection method, system, computer and storage medium
CN114463720A (en) * 2022-01-25 2022-05-10 杭州飞步科技有限公司 Lane line detection method based on line segment intersection-to-parallel ratio loss function
CN114463720B (en) * 2022-01-25 2022-10-21 杭州飞步科技有限公司 Lane line detection method based on line segment intersection ratio loss function
CN114550139A (en) * 2022-03-02 2022-05-27 盛景智能科技(嘉兴)有限公司 Lane line detection method and device

Similar Documents

Publication Publication Date Title
CN112016463A (en) Deep learning-based lane line detection method
CN109447034B (en) Traffic sign detection method in automatic driving based on YOLOv3 network
CN110163187B (en) F-RCNN-based remote traffic sign detection and identification method
CN111145174B (en) 3D target detection method for point cloud screening based on image semantic features
CN113468967B (en) Attention mechanism-based lane line detection method, attention mechanism-based lane line detection device, attention mechanism-based lane line detection equipment and attention mechanism-based lane line detection medium
CN108694386B (en) Lane line detection method based on parallel convolution neural network
CN111814623A (en) Vehicle lane departure visual detection method based on deep neural network
CN111695514B (en) Vehicle detection method in foggy days based on deep learning
CN113076871B (en) Fish shoal automatic detection method based on target shielding compensation
CN111738055B (en) Multi-category text detection system and bill form detection method based on same
CN113486764B (en) Pothole detection method based on improved YOLOv3
CN109801297B (en) Image panorama segmentation prediction optimization method based on convolution
CN110909623B (en) Three-dimensional target detection method and three-dimensional target detector
CN113920499A (en) Laser point cloud three-dimensional target detection model and method for complex traffic scene
CN113591617B (en) Deep learning-based water surface small target detection and classification method
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN112633149A (en) Domain-adaptive foggy-day image target detection method and device
CN114648654A (en) Clustering method for fusing point cloud semantic categories and distances
CN114049572A (en) Detection method for identifying small target
CN113159215A (en) Small target detection and identification method based on fast Rcnn
CN114332921A (en) Pedestrian detection method based on improved clustering algorithm for Faster R-CNN network
CN116824543A (en) Automatic driving target detection method based on OD-YOLO
CN116342536A (en) Aluminum strip surface defect detection method, system and equipment based on lightweight model
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
CN114550134A (en) Deep learning-based traffic sign detection and identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination