CN114429621A - UFSA algorithm-based improved lane line intelligent detection method - Google Patents


Info

Publication number
CN114429621A
Authority
CN
China
Prior art keywords
lane line
ufsa
algorithm
images
lane
Prior art date
Legal status
Pending
Application number
CN202111611111.2A
Other languages
Chinese (zh)
Inventor
张茜茜
李君�
于心远
沈国丽
朱明浩
仲星
刘子怡
刘兴鑫
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202111611111.2A
Publication of CN114429621A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/047: Probabilistic or stochastic networks
    • G06N3/08: Learning methods

Abstract

The invention discloses an improved intelligent lane line detection method based on the UFSA algorithm, which comprises: obtaining lane line images and preprocessing them to obtain a lane line data set; dividing the lane line data set into independent, non-overlapping training, verification and test sets; labeling and classifying the lane line images in the training and verification sets; extracting features from the lane line images in the training, verification and test sets; improving the Resnet18 network model adopted by the UFSA algorithm, and training and verifying the improved Resnet18 network model with the training and verification sets to obtain a trained lane line detection model; and detecting the test set with the trained lane line detection model to obtain the detection result. The method achieves accurate positioning and lane line detection, helps relieve the urban traffic burden and reduce traffic accidents, assists automatic driving, and safeguards driving safety.

Description

UFSA algorithm-based improved lane line intelligent detection method
Technical Field
The invention relates to an improved intelligent lane line detection method based on a UFSA algorithm, and belongs to the technical field of lane line detection.
Background
With the acceleration of urbanization and the growth of vehicle ownership, urban traffic loads keep increasing, traffic accidents occur frequently, and the burden on drivers keeps rising. To relieve traffic pressure effectively, curb the growth of road accidents, and ensure driving safety, driver-assistance systems have been actively developed worldwide in recent years. These systems take input from sensors carried on the vehicle and output feedback signals that guide the driver toward safe driving. Lane departure warning and lane keeping, lane changing and forward collision warning, adaptive cruise control, blind spot detection systems and the like all belong to the category of driving assistance systems, and lane marking detection is a central component of these systems. The research of lane line detection therefore has great theoretical value and practical significance.
Detecting lane lines from images more accurately and effectively has attracted wide attention from researchers in recent years, and a great deal of research has been carried out on lane line detection methods. At present there are two main approaches to lane detection: conventional image-processing methods and deep segmentation methods.
Existing approaches, including the deep-segmentation-based UFSA algorithm, still have many problems, for example: it is difficult to exploit enough image information, to capture long-range context information, and to avoid losing boundary information. Since the application scenario of lane line detection is very sensitive to lane boundary information, not only the geometric shape of the object but also edge information such as color, texture and illumination must be attended to, so the conventional image-processing approach is hardly suitable for this particular scenario of lane line detection.
Disclosure of Invention
The technical problem to be solved by the invention is to improve the UFSA algorithm so that the improved algorithm is more accurate in positioning and classification, relieves the traffic burden and traffic accidents to a certain extent, and guarantees driving safety.
The invention adopts the following technical scheme for solving the technical problems:
an improved intelligent lane line detection method based on a UFSA algorithm comprises the following steps:
step 1, acquiring lane line images from a reference data set CULane, and preprocessing the lane line images to obtain a lane line data set;
step 2, dividing the lane line data set into a training set, a verification set and a test set which are independent and not repeated;
step 3, labeling and classifying the lane line images in the training set and the verification set;
step 4, respectively extracting the characteristics of the lane line images in the training set, the verification set and the test set to obtain the characteristics of the lane line images in the training set, the verification set and the test set;
step 5, improving a Resnet18 network model adopted by the UFSA algorithm, namely replacing a Softmax loss function with an L-Dice loss function, and training and verifying the improved Resnet18 network model by using a training set and a verification set to obtain a trained lane line detection model;
and 6, detecting the test set by using the trained lane line detection model to obtain a detection result.
As a preferred embodiment of the present invention, the reference data set CULane in step 1 contains 133235 images, of which 88880 are training images, 9675 are verification images, and 34680 are test images.
As a preferred scheme of the present invention, the step 1 of preprocessing the lane line image to obtain a lane line data set specifically includes: the lane line image is formatted to a set size, resulting in a lane line data set.
In a preferred embodiment of the present invention, in step 2, the lane line data set is divided by using a random sampling method.
As a preferred embodiment of the present invention, in the step 3, the lane line images are labeled and classified by using a target detection labeling tool.
As a preferred embodiment of the present invention, in the improved Resnet18 network model in step 5, the loss function is defined as:
L_total = L_cls + α·L_str + β·(μ·L_seg + γ·L_L-Dice)
where L_total is the overall loss function, L_cls is the classification loss, L_str is the structural loss, L_seg and L_L-Dice are segmentation losses, and α, β, μ and γ are loss coefficients, with μ set to 0.7 and γ set to 0.3.
Compared with the prior art, the invention adopting the technical scheme has the following technical effects:
1. the improved Resnet18 network model is trained by using the reference data set to obtain the lane line intelligent detection model, so that the lane line can be quickly positioned, the method is more efficient than the conventional method, and a large amount of time and labor cost are saved.
2. The L-Dice loss function is added into the improved Resnet18 network model, and the model focuses more on edge information such as color, texture, illumination and the like, so that the detection and classification are more accurate.
3. The L-Dice loss function is added into the improved Resnet18 network model, so that the feature maps of different layers are output more accurately, the output data are optimized, and the positioning accuracy is improved.
4. The method can effectively relieve the increasingly heavy urban traffic burden and frequent traffic accidents, and effectively reduce the burden on drivers. Meanwhile, driving safety is also safeguarded.
Drawings
FIG. 1 is an architecture diagram of an improved intelligent lane line detection method based on UFSA algorithm according to the present invention;
FIG. 2 is a schematic diagram of Dice Loss according to the present invention;
FIG. 3 is a simplified diagram of the UFSA network architecture;
FIG. 4 is a simplified diagram of the improved UFSA network architecture;
FIG. 5 is a diagram illustrating the lane line detection effect before and after the improvement according to an embodiment of the present invention, where (a) is before the improvement and (b) is after the improvement.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The invention discloses an improved intelligent lane line detection method based on the UFSA (Ultra Fast Structure-aware) algorithm. In view of the drawback that the original algorithm loses boundary information when detecting lane lines, the main point of the approach is to replace the Softmax loss function of the original auxiliary segmentation branch with an L-Dice (Lane Dice Loss) loss function, which makes the algorithm pay more attention to lane boundary information. The effectiveness of this improvement was verified by ablation experiments, and the improvement requires no additional computation.
As shown in fig. 1, the method for intelligently detecting a lane line based on UFSA algorithm improvement provided by the present invention specifically includes the following steps:
step 1, acquiring lane line images from a reference data set CULane, and preprocessing the lane line images to obtain a lane line data set;
the lane line image extraction is from a CULane reference data set, the data set marking is stored in a json file by adopting a center dotting mode, and the marking of one road picture is a tuple. The CULane dataset was used for a large-scale challenging dataset for roadway detection academic research, containing a total of 133235 images, divided into 88880 training images, 9675 verification images and 34680 test images. Preprocessing refers to formatting the lane line image to a set size to obtain a lane line data set.
Step 2, dividing the lane line data set into a training set, a verification set and a test set which are independent and not repeated; the division may be performed by a random sampling method.
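The random-sampling split of step 2 can be sketched as follows; treating each image path as a sample and fixing a random seed are illustrative assumptions, while the 88880/9675 counts follow the CULane figures quoted above.

```python
import random

def split_dataset(samples, n_train=88880, n_val=9675, seed=0):
    """Shuffle the samples and split them into independent, non-overlapping sets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]  # the remaining images form the test set
    return train, val, test
```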
Step 3, labeling and classifying the lane line images in the training set and the verification set; the lane line images can be labeled and classified by using a target detection labeling tool.
And 4, respectively extracting the characteristics of the lane line images in the training set, the verification set and the test set to obtain the characteristics of the lane line images in the training set, the verification set and the test set.
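Feature extraction with a Resnet18 backbone can be sketched as below; the use of torchvision, the 288 × 800 input size and taking the feature map before the average-pooling head are assumptions, since the patent does not specify which layers are used.

```python
import torch
import torchvision

backbone = torchvision.models.resnet18(weights=None)
# Drop the average-pooling and fully connected layers to keep the convolutional features.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

images = torch.randn(2, 3, 288, 800)      # a dummy batch at the assumed input size
with torch.no_grad():
    features = feature_extractor(images)  # shape: (2, 512, 9, 25)
```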
Step 5, improving the Resnet18 network model adopted by the UFSA algorithm, namely replacing the Softmax loss function of the original auxiliary segmentation branch with an L-Dice loss function, and training and verifying the improved Resnet18 network model with the training set and the verification set to obtain a trained lane line detection model;
and 6, detecting the test set by using the trained lane line detection model to obtain a detection result.
One straightforward solution to boundary detection is to treat it as a semantic segmentation problem. In the labels, boundary pixels are simply marked as 1 and all other regions as 0, i.e., the task is expressed as a two-class semantic segmentation problem, and binary cross entropy is used as the loss function. However, cross entropy has two limitations: the label distribution is highly imbalanced, and the loss at adjacent boundary pixels is difficult to optimize. The cross-entropy loss value only considers the loss in a per-pixel, microscopic sense, not globally.
Dice Loss solves the above problems well. The Dice coefficient originated in the 1940s and is used to measure the similarity between two samples; it is illustrated schematically in FIG. 2. It was applied to computer vision by Milletari et al., who used it for three-dimensional medical image segmentation in 2016. Its mathematical expression is as follows:
D = 2·Σ_i (p_i · g_i) / (Σ_i p_i + Σ_i g_i)

In the Dice coefficient above, p_i and g_i denote the predicted value and the ground-truth value of pixel i, respectively, each taking the value 0 or 1. In the lane line detection scenario, a value of 1 means the pixel is a boundary pixel and 0 means it is not. The denominator is the total number of boundary pixels in the prediction and the ground truth, and the numerator is twice the number of correctly predicted boundary pixels; it increases only when p_i and g_i are both 1. In the present invention this Dice Loss is called L-Dice (Lane Dice Loss), and the corresponding segmentation loss is denoted L_L-Dice.
The definition of the original loss function is:
L_total = L_cls + α·L_str + β·L_seg
where L_seg is the segmentation loss, L_str is the structural loss, L_cls is the classification loss, and α and β are loss coefficients.
Then, the overall loss function is redefined as:
L_total = L_cls + α·L_str + β·(μ·L_seg + γ·L_L-Dice)
where the coefficient μ of the segmentation loss L_seg is set to 0.7 and the coefficient γ of the segmentation loss L_L-Dice is set to 0.3.
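The redefined overall loss can be written as a short helper, assuming L_cls, L_str, L_seg and L_L-Dice are computed elsewhere and that α and β keep whatever values the original UFSA training uses; only μ = 0.7 and γ = 0.3 are stated in the text.

```python
import torch

def total_loss(l_cls: torch.Tensor, l_str: torch.Tensor, l_seg: torch.Tensor,
               l_ldice: torch.Tensor, alpha: float, beta: float,
               mu: float = 0.7, gamma: float = 0.3) -> torch.Tensor:
    """L_total = L_cls + alpha*L_str + beta*(mu*L_seg + gamma*L_L-Dice)."""
    return l_cls + alpha * l_str + beta * (mu * l_seg + gamma * l_ldice)
```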
Fig. 3 and 4 show a conventional UFSA network structure and an improved UFSA network structure, respectively.
For lane line detection, the CULane data set released by The Chinese University of Hong Kong is selected for the related experiments and research; most of its lane lines are obvious and clear, and their curvature is small.
The CULane reference data set is described in detail in Table 1.
Table 1 data set description
Subset            Number of images
Training set      88880
Verification set  9675
Test set          34680
Total             133235
The data set is divided into three parts: training set, validation set and test set. For the CULane data set, the number of training iterations is set to 50, the batch size to 32, and the base learning rate initially to 4e-4, with momentum and weight decay configured at 0.9 and 0.00055, respectively. The training error and the verification error use the mean square error; at the end of training the training error converges to 0.0044 and the verification error converges to 0.0045.
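A hedged sketch of this training configuration is shown below; the choice of SGD as the optimizer and the placeholder model are assumptions, since the text lists the hyperparameters but does not name the optimizer.

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for the improved Resnet18 model
optimizer = torch.optim.SGD(model.parameters(),
                            lr=4e-4,               # base learning rate
                            momentum=0.9,
                            weight_decay=0.00055)
EPOCHS = 50
BATCH_SIZE = 32
```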
Precision and recall are the two most commonly used evaluation indexes for target detection and recognition algorithms. Precision measures the proportion of the output predictions that are correct detections of true values, and recall measures the proportion of the true values that are correctly detected.
The calculation formula of the accuracy and the recall rate is as follows:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
TP (true positive) means that the model correctly predicts a positive-class sample as positive; TN (true negative) means that the model correctly predicts a negative-class sample as negative; FP (false positive) means that a negative-class sample is wrongly predicted as positive; FN (false negative) means that a positive-class sample is wrongly predicted as negative.
The lane line detection method here is a fusion algorithm that optimizes a deep-learning segmentation algorithm together with conventional methods, so the detection results are evaluated at the pixel level using the evaluation indexes of image segmentation. The F value (F-measure), also called the F1 score (F-score), is a weighted harmonic mean of Precision and Recall; it judges the quality of the classification model by combining the two factors of precision and recall, and is calculated as follows:
F_β = (1 + β²) · Precision · Recall / (β² · Precision + Recall)
When the β parameter is set to 1, i.e., the F value reduces to the common F1, precision and recall are combined into a single score, and the higher the F1, the better the segmentation model.
The invention is modified on the basis of the Resnet-18 network model, and the L-Dice loss is added for the experiments, as shown in Table 2.
TABLE 2 comparison of the impact on model Performance before and after UFSA Algorithm improvement
Experiment      L-Dice     F1 score (%)
Experiment 1    not used   68.4
Experiment 2    used       69.3
Table 2 shows the content of the improvement tested in the experiments of the present invention. Experiment 1 is the original UFSA algorithm, with a corresponding F1 score of 68.4%. Experiment 2 is the detection result of the original UFSA algorithm with the L-Dice loss added, with a corresponding F1 score of 69.3%, an improvement of 0.9% over the original UFSA. The experiment proves the effectiveness of the L-Dice loss and alleviates the problems that extracting lane features through network convolution and pooling loses important information, and that richer context information cannot be obtained and long-range context cannot be captured even with a large receptive field.
The improvements in this experiment were carried out on the public CULane data set and verified with a large number of experiments. Table 3 shows the recognition accuracy of 9 scenes in the CULane data set for different algorithms; the detection accuracy of the present invention based on the improved Resnet-18 is shown in the table. The 9 scenes are: Normal, Crowded, Night, No-line, Shadow, Arrow, Dazzle light, Curve, and Crossroad. The corresponding accuracy of the original algorithm with Resnet-18 is: 87.7%, 66.0%, 62.1%, 40.2%, 62.8%, 81.0%, 58.4%, 57.9%, and 1743 (for the Crossroad scene, the number of false detections is reported instead of a percentage). The corresponding results of the experiment of the present invention are: 89.1%, 68.0%, 64.1%, 42.1%, 63.4%, 83.1%, 58.7%, 59.4%, and 2336. After the improvement, the accuracy of the first 8 scenes improves by 1.4%, 2.0%, 2.0%, 1.9%, 0.6%, 2.1%, 0.3%, and 1.5%, respectively; the 9th scene has lower detection and recognition accuracy, with more falsely detected lane lines than before the improvement, but the overall detection effect is good. The comprehensive detection accuracy of the original Resnet-18 is 68.4%, and that of the improved Resnet-18 is 69.9%, an improvement of 1.5%; meanwhile, the detection speed changes from 322.5 FPS to 301.7 FPS and remains high.
Table 3 Recognition accuracy of 9 scenes under the CULane data set. The test threshold is set to 0.5, the picture resolution is 1640 × 590, and "-" indicates that the result is not available; for the Crossroad scene, the number of false detections is reported.
Scene          Original UFSA (Resnet-18)    Improved method (Resnet-18 + L-Dice)
Normal         87.7%                        89.1%
Crowded        66.0%                        68.0%
Night          62.1%                        64.1%
No-line        40.2%                        42.1%
Shadow         62.8%                        63.4%
Arrow          81.0%                        83.1%
Dazzle light   58.4%                        58.7%
Curve          57.9%                        59.4%
Crossroad      1743                         2336
Total          68.4%                        69.9%
FIG. 5 illustrates the lane line detection effect before and after the improvement according to an embodiment of the present invention, where (a) is before the improvement and (b) is after the improvement. The improved intelligent lane line detection method based on the UFSA algorithm disclosed by the invention aims to standardize driving behavior, reduce irregular and illegal driving, reduce traffic accidents caused by non-standard driving, and improve the lane line detection effect. The method is applicable to different detection scenes and can also give the specific position of the lane line. The invention improves driving safety to a certain extent, reduces the growth rate of traffic accidents, and realizes intelligent detection.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention.

Claims (6)

1. An improved intelligent lane line detection method based on a UFSA algorithm is characterized by comprising the following steps:
step 1, acquiring lane line images from a reference data set CULane, and preprocessing the lane line images to obtain a lane line data set;
step 2, dividing the lane line data set into a training set, a verification set and a test set which are independent and not repeated;
step 3, labeling and classifying the lane line images in the training set and the verification set;
step 4, respectively extracting the characteristics of the lane line images in the training set, the verification set and the test set to obtain the characteristics of the lane line images in the training set, the verification set and the test set;
step 5, improving a Resnet18 network model adopted by the UFSA algorithm, namely replacing a Softmax loss function with an L-Dice loss function, and training and verifying the improved Resnet18 network model by using a training set and a verification set to obtain a trained lane line detection model;
and 6, detecting the test set by using the trained lane line detection model to obtain a detection result.
2. The UFSA algorithm-based improved intelligent lane line detection method according to claim 1, wherein the reference data set CULane in step 1 contains 133235 images, of which 88880 are training images, 9675 are verification images, and 34680 are test images.
3. The UFSA algorithm-based improved lane line intelligent detection method according to claim 1, wherein the step 1 preprocesses the lane line image to obtain a lane line data set, specifically: the lane line image is formatted to a set size, resulting in a lane line data set.
4. The UFSA algorithm-based improved intelligent detection method for lane lines according to claim 1, wherein in step 2, a random sampling method is used to divide the lane line data set.
5. The UFSA algorithm-based improved intelligent detection method for lane lines according to claim 1, wherein in step 3, a target detection labeling tool is used to label and classify lane line images.
6. The UFSA algorithm-based improved intelligent detection method for lane lines according to claim 1, wherein the loss function of the improved Resnet18 network model in step 5 is defined as:
L_total = L_cls + α·L_str + β·(μ·L_seg + γ·L_L-Dice)
where L_total is the overall loss function, L_cls is the classification loss, L_str is the structural loss, L_seg and L_L-Dice are segmentation losses, and α, β, μ and γ are loss coefficients, with μ set to 0.7 and γ set to 0.3.
CN202111611111.2A 2021-12-27 2021-12-27 UFSA algorithm-based improved lane line intelligent detection method Pending CN114429621A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111611111.2A CN114429621A (en) 2021-12-27 2021-12-27 UFSA algorithm-based improved lane line intelligent detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111611111.2A CN114429621A (en) 2021-12-27 2021-12-27 UFSA algorithm-based improved lane line intelligent detection method

Publications (1)

Publication Number Publication Date
CN114429621A true CN114429621A (en) 2022-05-03

Family

ID=81311413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111611111.2A Pending CN114429621A (en) 2021-12-27 2021-12-27 UFSA algorithm-based improved lane line intelligent detection method

Country Status (1)

Country Link
CN (1) CN114429621A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294548A (en) * 2022-07-28 2022-11-04 烟台大学 Lane line detection method based on position selection and classification method in row direction



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination