CN113449650A - Lane line detection system and method - Google Patents

Lane line detection system and method

Info

Publication number
CN113449650A
CN113449650A
Authority
CN
China
Prior art keywords
lane line
image
point cloud
module
feature vector
Prior art date
Legal status
Pending
Application number
CN202110733838.1A
Other languages
Chinese (zh)
Inventor
李琳
赵万忠
章波
王春燕
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202110733838.1A priority Critical patent/CN113449650A/en
Publication of CN113449650A publication Critical patent/CN113449650A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lane line detection system and method. The system comprises a camera signal acquisition module, a laser radar signal acquisition module, a time-space registration module, a camera signal preprocessing module and a multi-task lane line detection module. The invention fuses the information of the laser radar and the camera and can adapt to the lane line detection task under changing illumination conditions. A lane line detection model is established using a multi-task approach, which determines the lane line position and identifies the lane line type at the same time, improving network efficiency and optimizing resource allocation.

Description

Lane line detection system and method
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to a lane line detection system and method.
Background
With the increase in car ownership, road traffic has gradually become dense and complex, which raises driving pressure, reduces a driver's ability to handle ordinary traffic scenes, and greatly increases the probability of traffic accidents. Lane-change behavior is one of the important causes of traffic accidents and congestion; in urban areas in particular, where traffic density is high, lane-change collisions and even chain rear-end collisions occur easily. Compared with human driving, an intelligent driving system has advantages such as short response time and high perception accuracy, so research on intelligent driving technology is of great significance for reducing traffic accidents caused by human factors.
Most current research on lane-change decision-making focuses on the influence of the motion states of other vehicles on the decision. Under actual traffic conditions, however, a lane change must be executed in accordance with traffic regulations, so the system is required to analyze lane line information. In the prior art, only a classification of whether a lane line exists is performed, whereas a lane-change decision requires more specific lane line information, such as whether the lane line is a broken line. A laser radar can work in all weather, has a longer detection distance than a vision sensor and can provide curvature information; a camera can acquire richer scene information and, when weather conditions are good, its close-range detection accuracy is high. Fusing the information of the two sensors achieves sensor data complementation, so that more complete lane line information is obtained and the lane line detection capability of the system is improved.
Disclosure of Invention
In view of the above disadvantages of the prior art, the present invention provides a lane line detection system and method to solve the problem of lane line detection under poor illumination conditions. The invention fuses the information of the laser radar and the camera and can adapt to the lane line detection task under changing illumination conditions; a lane line detection model is established using a multi-task approach, which determines the lane line position and identifies the lane line type at the same time, improving network efficiency and optimizing resource allocation.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the invention relates to a lane line detection system, comprising: a camera signal acquisition module, a laser radar signal acquisition module, a time-space registration module, a camera signal preprocessing module and a multi-task lane line detection module; wherein:
the camera signal acquisition module is used for acquiring image data of a road in front of a running vehicle;
the laser radar signal acquisition module is used for sensing point cloud data of a road scene in front of a running vehicle;
the spatiotemporal registration module comprises: sensor data temporal registration and sensor data spatial registration; the sensor data time registration is used for unifying the data acquired by the camera signal acquisition module and the laser radar signal acquisition module to the same time coordinate system; the sensor data space registration is used for converting the data acquired by the camera signal acquisition module and the laser radar signal acquisition module into the same coordinate system;
the camera signal preprocessing module is used for performing inverse perspective transformation on road image data subjected to time registration and space registration to extract an ROI (region of interest) in the image;
the multi-task lane line detection module is used for establishing a multi-task lane line detection model and comprises: an image feature extraction module, a point cloud feature extraction module, a feature fusion module, a lane line generation module and a lane line classification module;
the image feature extraction module inputs the preprocessed road image data into an image feature extraction network to obtain a lane line position feature vector F1 and a lane line category feature vector F2;
the point cloud feature extraction module inputs the road scene point cloud data subjected to time registration and space registration into a point cloud feature extraction network to obtain a lane line feature vector F3;
the feature fusion module fuses the lane line position feature vector F1 and the lane line feature vector F3 based on a feature fusion network, dynamically adjusts the attention weights of the laser radar and camera features according to the input feature vectors, and calculates the weighted sum of the attention weights and the original feature vectors as the fused lane line position feature vector;
the lane line generation module inputs the lane line position feature vector obtained after fusion into a lane line generation network, determines the position of the lane line and fits the lane line based on a least square method;

the lane line classification module inputs the fused position feature vector and the lane line category feature vector F2 into a lane line classification network, and outputs the multi-class probability distribution of the detected lane line using a SoftMax function.
Further, the camera signal preprocessing module performs inverse perspective transformation and ROI extraction. Let the height of the camera above the ground be h, the corresponding pitch angle be θ, the yaw angle be α, fu the equivalent focal length of the image plane in the u direction, fv the equivalent focal length of the image plane in the v direction, and (cu, cv) the optical center of the image plane. These parameters are determined by the camera installation position, and the correspondence between any point on the road surface and the image coordinate system is then determined through a transformation matrix T:

[transformation matrix T — formula given as an image in the original publication]

where a1 = cosθ, a2 = cosα, b1 = sinθ, b2 = sinα; the generalized inverse of the transformation matrix T is solved through singular value decomposition to complete the inverse perspective transformation of the image.

The lower part of the image after the inverse perspective transformation is cropped as the ROI (region of interest) for lane line detection.
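As an illustration of this preprocessing step, the following sketch (a minimal example, not the patent's implementation) takes a 3×3 transformation matrix T as given, computes its generalized inverse by singular value decomposition, warps the image, and crops the lower part as the ROI. The function name, the roi_ratio parameter and the use of OpenCV's warpPerspective are assumptions made for illustration.

```python
import numpy as np
import cv2

def ipm_and_roi(image, T, roi_ratio=0.5):
    """Inverse-perspective-transform an image with matrix T and crop the lower part as the ROI.

    T is the 3x3 road-plane-to-image transformation matrix described above;
    its generalized (pseudo-)inverse is obtained via singular value decomposition.
    """
    # Generalized inverse of T via SVD: T+ = V * diag(1/s) * U^T
    U, s, Vt = np.linalg.svd(T)
    s_inv = np.array([1.0 / v if v > 1e-8 else 0.0 for v in s])
    T_pinv = Vt.T @ np.diag(s_inv) @ U.T

    # Warp the image from the perspective view towards a bird's-eye view
    h, w = image.shape[:2]
    birdseye = cv2.warpPerspective(image, T_pinv, (w, h))

    # Keep only the lower part of the transformed image as the ROI
    roi = birdseye[int(h * (1.0 - roi_ratio)):, :]
    return roi
```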
Further, the lane line position feature vector F1 and the lane line category feature vector F2 are calculated as follows: a YOLO v3 model is used as the image feature extraction network, and two MLP layers are added as the lane line position feature extraction branch and the lane line category feature extraction branch; formulated as:

F1 = MLP1(YOLOv3(Î))

F2 = MLP2(YOLOv3(Î))

where F1 is the lane line position feature vector, F2 is the lane line category feature vector, and Î is the preprocessed image data.
Further, the lane line feature vector F3 is calculated as follows: the point cloud feature extraction network is based on a YOLO v3 model and is formulated as:

F3 = YOLOv3_pc(P̂)

where F3 is the lane line feature vector and P̂ is the point cloud set after space-time registration.
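The two formulas above describe a shared image backbone with two MLP heads plus a separate point cloud branch. The following PyTorch-style sketch shows that structure; the small convolutional stacks merely stand in for the YOLO v3 feature extractors, and the feature dimension, layer sizes and bird's-eye-view point cloud input are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class MultiTaskLaneFeatures(nn.Module):
    """Sketch of the image / point-cloud feature extraction branches.

    A small convolutional stack stands in for the YOLO v3 backbone;
    two MLP heads yield the position feature F1 and the category feature F2,
    and a separate branch yields the point-cloud feature F3.
    """
    def __init__(self, feat_dim=256):
        super().__init__()
        self.image_backbone = nn.Sequential(            # placeholder for YOLO v3
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mlp_position = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU())  # -> F1
        self.mlp_category = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU())  # -> F2
        self.pc_backbone = nn.Sequential(               # placeholder point-cloud branch
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),         # -> F3
        )

    def forward(self, image, pc_bev):
        shared = self.image_backbone(image)
        f1 = self.mlp_position(shared)   # lane line position feature vector
        f2 = self.mlp_category(shared)   # lane line category feature vector
        f3 = self.pc_backbone(pc_bev)    # lane line feature vector from the point cloud
        return f1, f2, f3
```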
Further, the lane line classification module outputs lane line categories as: no lane line, white solid line, white dotted line, white double solid line, single yellow solid line, double yellow solid line.
The invention also provides a lane line detection method, which comprises the following steps:
1) acquiring an image set I and a point cloud set P in front of a running vehicle;
2) performing time registration and space registration on the image set I and the point cloud set P;
3) performing inverse perspective transformation on the image subjected to space-time registration, and extracting an ROI (region of interest) in the image;
4) establishing a multi-task lane line detection model;
5) acquiring images in front of the running vehicle and the corresponding point cloud data, and establishing real lane line positions and category labels for training the multi-task lane line detection model established in step 4), so as to obtain the network parameters of the lane line detection model; the model is trained with minimizing the loss function as the objective;
6) using the trained model to detect the current scene: acquiring image and point cloud data in the scene, processing them as in steps 2) and 3), and using the processed data as the input of the lane line detection model to obtain the lane line position and lane line type in front of the current running vehicle.
Further, the time registration of the image set I and the point cloud set P adopts a least square registration method, an interpolation extrapolation method or a Lagrangian interpolation method.
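As a simple illustration of one possible time registration, the sketch below matches each image timestamp to the nearest point cloud timestamp on a common time base; the patent only names the candidate methods (least squares registration, interpolation/extrapolation, Lagrange interpolation), so this nearest-neighbour matching is an assumption made for illustration.

```python
import numpy as np

def time_register(image_stamps, cloud_stamps):
    """For each image timestamp, return the index of the nearest point-cloud timestamp.

    Simple nearest-neighbour matching on a common time base; interpolation or
    Lagrange interpolation could be used instead, as the text notes.
    cloud_stamps is assumed to be sorted in increasing order.
    """
    image_stamps = np.asarray(image_stamps, dtype=float)
    cloud_stamps = np.asarray(cloud_stamps, dtype=float)
    # insertion index of each image timestamp into the cloud timeline
    idx = np.searchsorted(cloud_stamps, image_stamps)
    idx = np.clip(idx, 1, len(cloud_stamps) - 1)
    left, right = cloud_stamps[idx - 1], cloud_stamps[idx]
    pick_left = (image_stamps - left) < (right - image_stamps)
    return np.where(pick_left, idx - 1, idx)
```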
Further, the spatial registration of the image set I and the point cloud set P specifically includes: the conversion between the laser radar coordinate system and the camera coordinate system is realized through the image coordinate system, the pixel coordinate system and the world coordinate system as intermediaries; the point cloud set after space-time registration is recorded as P̂, and the image data is recorded as Î.
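A minimal sketch of the spatial registration chain (laser radar frame → camera frame → image plane → pixel coordinates) is shown below; the extrinsic matrix T_cam_lidar and the intrinsic matrix K are assumed to come from calibration and are not specified in the patent.

```python
import numpy as np

def project_lidar_to_pixels(points_lidar, T_cam_lidar, K):
    """Project Nx3 laser radar points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform from the lidar frame to the camera frame.
    K:           3x3 camera intrinsic matrix (built from fu, fv, cu, cv).
    Returns Mx2 pixel coordinates for the points lying in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                              # camera frame
    in_front = pts_cam[:, 2] > 0.1                                          # keep points ahead of the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T                                                   # image plane
    uv = uv[:, :2] / uv[:, 2:3]                                              # pixel coordinates
    return uv
```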
Further, the step 4) specifically includes: inputting the preprocessed road image data into an image feature extraction network to obtain the lane line position feature vector F1 and the lane line category feature vector F2.

A YOLO v3 model is used as the image feature extraction network, and two MLP layers are added as the lane line position feature extraction branch and the lane line category feature extraction branch; formulated as:

F1 = MLP1(YOLOv3(Î))

F2 = MLP2(YOLOv3(Î))

where F1 is the lane line position feature vector, F2 is the lane line category feature vector, and Î is the preprocessed image data.
Further, the step 4) specifically further includes: inputting the road scene point cloud data subjected to time registration and space registration into a point cloud feature extraction network to obtain the lane line feature vector F3; the point cloud feature extraction network is based on a YOLO v3 model and is formulated as:

F3 = YOLOv3_pc(P̂)

where F3 is the lane line feature vector and P̂ is the point cloud set after space-time registration.
Further, the step 4) specifically further includes fusing the feature vectors based on the feature fusion network: the weights αi of the laser radar and camera features are adjusted dynamically; the attention weight αi of the laser radar and camera features is calculated as follows:

ei = tanh(W·Fi + U·O)

αi = exp(ei) / Σk exp(ek)

where i ∈ {1, 3}, W and U are the attention weight coefficient matrices obtained by training, and O is the output matrix of the current lane line position;

the feature vector fused by the feature fusion network is the weighted sum of the original feature vectors and the attention weights, formulated as:

F = Σi αi·Fi

where F is the fused feature vector and Fi are the original feature vectors.
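The attention fusion above translates directly into code. The sketch below follows ei = tanh(W·Fi + U·O) and αi = exp(ei)/Σk exp(ek); reducing each score to a scalar by summation and the matrix shapes are illustrative assumptions, since the patent does not fix them.

```python
import numpy as np

def fuse_features(F1, F3, W, U, O):
    """Attention-weighted fusion of the camera feature F1 and the lidar feature F3.

    e_i = tanh(W @ F_i + U @ O);  alpha_i = softmax(e_i);  F = sum_i alpha_i * F_i.
    Each score is reduced to a scalar by summation so each modality gets one weight.
    """
    feats = [F1, F3]
    # one scalar score per modality (summing the tanh response is an illustrative choice)
    e = np.array([np.sum(np.tanh(W @ F + U @ O)) for F in feats])
    alpha = np.exp(e - e.max())
    alpha = alpha / alpha.sum()                      # attention weights alpha_1, alpha_3
    fused = sum(a * F for a, F in zip(alpha, feats)) # weighted sum of the original features
    return fused, alpha
```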
Further, the step 4) specifically further includes determining the lane line position: a fully connected layer is selected to fit the lane line position;

O = FC(F)

where O = {o1, o2, ..., on} = {(x1, y1), (x2, y2), ..., (xn, yn)}; a quadratic curve is selected as the fitted expression:

y = a0 + a1·x + a2·x²

where a0, a1 and a2 are the position parameters to be determined.
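The least-squares fit of the quadratic y = a0 + a1·x + a2·x² to the points output by the fully connected layer can be done in a few lines; the sketch below uses NumPy's polyfit, which is an implementation choice rather than the patent's own routine.

```python
import numpy as np

def fit_lane_quadratic(points):
    """Least-squares fit of y = a0 + a1*x + a2*x^2 to lane points [(x1, y1), ..., (xn, yn)]."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a2, a1, a0 = np.polyfit(x, y, deg=2)   # polyfit returns the highest-degree coefficient first
    return a0, a1, a2

# Example: points lying on y = 1 + 0.5*x + 0.02*x^2 recover (a0, a1, a2) = (1.0, 0.5, 0.02)
coeffs = fit_lane_quadratic([(0, 1.0), (5, 4.0), (10, 8.0), (15, 13.0)])
```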
Further, the step 4) specifically includes obtaining the probability distribution over the different lane line categories based on a SoftMax function: P(c) = SoftMax([F, F3]).
Further, the loss function L in step 5) includes a center coordinate error function Lcoor and a classification error function Lclas, where:

L = Lcoor + Lclas

The center coordinate error function Lcoor evaluates the accuracy of the horizontal and vertical coordinates of the center of the target prediction box, specifically:

Lcoor = λcoor · Σi Σj 1ij^obj · [(xi − x̂i)² + (yi − ŷi)²]

where λcoor is the weight coefficient of the coordinate error; B is the number of candidate boxes generated for each grid; 1ij^obj indicates whether a lane line exists in the jth prediction box of the ith grid, taking the value 1 if a lane line exists and 0 otherwise; xi and yi are respectively the abscissa and ordinate of the center point predicted by the ith sliding window, and x̂i, ŷi are the corresponding real abscissa and ordinate. The detected lane lines include 6 categories, i.e. class = {no lane line, white solid line, white dotted line, white double solid line, single yellow solid line, double yellow solid line}, and the classification error function Lclas is:

Lclas = Σi 1i^obj · Σc∈class (pi(c) − p̂i(c))²

where pi(c) and p̂i(c) respectively represent the real probability and the predicted probability that the lane line target in the ith cell belongs to category c.
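A minimal PyTorch-style sketch of this loss is given below, assuming the classification term is a squared error over the six categories and that the object indicator mask is supplied with the labels; tensor shapes and function names are illustrative assumptions.

```python
import torch

def detection_loss(pred_xy, true_xy, pred_cls, true_cls, obj_mask, lambda_coor=5.0):
    """L = L_coor + L_clas for lane line detection (sketch).

    pred_xy, true_xy:  (N, B, 2) predicted / real box-center coordinates per grid cell.
    pred_cls, true_cls: (N, 6) predicted / real class probabilities per cell.
    obj_mask:          (N, B), 1 where a lane line is present in that prediction box, else 0.
    """
    # Center-coordinate error, weighted by lambda_coor and masked to boxes containing an object
    l_coor = lambda_coor * (obj_mask * ((pred_xy - true_xy) ** 2).sum(dim=-1)).sum()
    # Classification error over the 6 lane line categories for cells containing an object
    cell_has_obj = obj_mask.max(dim=1).values            # (N,)
    l_clas = (cell_has_obj * ((pred_cls - true_cls) ** 2).sum(dim=-1)).sum()
    return l_coor + l_clas
```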
The invention has the beneficial effects that:
the invention simultaneously utilizes the camera and the laser radar sensor to detect the position of the lane line and classify the lane line;
on one hand, it compensates for the limitation of the camera for detection in night environments, and the information fusion of the laser radar and the camera achieves sensor data complementation, so that more complete lane line information is obtained and the lane line detection capability of the system is improved;

on the other hand, the lane line result is subdivided, providing more accurate and effective supporting information for the lane-change decision. After the image in front of the running vehicle is obtained, inverse perspective transformation is carried out and the ROI is extracted, which effectively improves the efficiency of lane line detection.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention.
FIG. 2 is a schematic diagram of the method of the present invention.
Detailed Description
In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.
Referring to fig. 1, a lane line detection system according to the present invention includes: a camera signal acquisition module, a laser radar signal acquisition module, a time-space registration module, a camera signal preprocessing module and a multi-task lane line detection module; wherein:
the camera signal acquisition module is used for acquiring image data of a road in front of a running vehicle;
the laser radar signal acquisition module is used for sensing point cloud data of a road scene in front of a running vehicle;
the spatiotemporal registration module comprises: sensor data temporal registration and sensor data spatial registration; the sensor data time registration is used for unifying the data acquired by the camera signal acquisition module and the laser radar signal acquisition module to the same time coordinate system; the sensor data space registration is used for converting the data acquired by the camera signal acquisition module and the laser radar signal acquisition module into the same coordinate system;
the camera signal preprocessing module is used for performing inverse perspective transformation on road image data subjected to time registration and space registration to extract an ROI (region of interest) in the image;
the camera signal preprocessing module performs inverse perspective transformation and ROI extraction. Let the height of the camera above the ground be h, the corresponding pitch angle be θ, the yaw angle be α, fu the equivalent focal length of the image plane in the u direction, fv the equivalent focal length of the image plane in the v direction, and (cu, cv) the optical center of the image plane. These parameters are determined by the camera installation position, and the correspondence between any point on the road surface and the image coordinate system is then determined through a transformation matrix T:

[transformation matrix T — formula given as an image in the original publication]

where a1 = cosθ, a2 = cosα, b1 = sinθ, b2 = sinα; the generalized inverse of the transformation matrix T is solved through singular value decomposition to complete the inverse perspective transformation of the image.
The multi-task lane line detection module is used for establishing a multi-task lane line detection model and comprises: an image feature extraction module, a point cloud feature extraction module, a feature fusion module, a lane line generation module and a lane line classification module;
the image feature extraction module inputs the preprocessed road image data into an image feature extraction network to obtain a lane line position feature vector F1 and a lane line category feature vector F2.

The lane line position feature vector F1 and the lane line category feature vector F2 are calculated as follows: a YOLO v3 model is used as the image feature extraction network, and two MLP layers are added as the lane line position feature extraction branch and the lane line category feature extraction branch; formulated as:

F1 = MLP1(YOLOv3(Î))

F2 = MLP2(YOLOv3(Î))

where F1 is the lane line position feature vector, F2 is the lane line category feature vector, and Î is the preprocessed image data.
The point cloud feature extraction module inputs the road scene point cloud data subjected to time registration and space registration into a point cloud feature extraction network to obtain a lane line feature vector F3.

The lane line feature vector F3 is calculated as follows: the point cloud feature extraction network is based on a YOLO v3 model and is formulated as:

F3 = YOLOv3_pc(P̂)

where F3 is the lane line feature vector and P̂ is the point cloud set after space-time registration.
The feature fusion module fuses the lane line position feature vector F1 and the lane line feature vector F3 based on a feature fusion network, dynamically adjusts the attention weights of the laser radar and camera features according to the input feature vectors, and calculates the weighted sum of the attention weights and the original feature vectors as the fused lane line position feature vector;
the lane line generation module inputs the lane line position characteristic vector obtained after fusion into a lane line generation network, determines the position of a lane line and fits the lane line based on a least square method;
the lane line classification module inputs the fused position feature vector and the lane line category feature vector F2 into a lane line classification network, and outputs the multi-class probability distribution of the detected lane line using a SoftMax function.
The lower part of the image after the inverse perspective transformation is cropped as the ROI (region of interest) for lane line detection.
Referring to fig. 2, the method for detecting a lane line according to the present invention includes the following steps:
1) acquiring an image set I and a point cloud set P in front of a running vehicle; in an example, vehicle front image data and point cloud data are collected through a vehicle-mounted camera and a laser radar.
2) Performing time registration and space registration on the image set I and the point cloud set P; the image set I and the point cloud set P are subjected to time registration by adopting a least square registration method, an interpolation extrapolation method or a Lagrange interpolation method;
The spatial registration of the image set I and the point cloud set P specifically comprises: the conversion between the laser radar coordinate system and the camera coordinate system is realized through the image coordinate system, the pixel coordinate system and the world coordinate system as intermediaries; the point cloud set after space-time registration is recorded as P̂, and the image data is recorded as Î.
3) Performing inverse perspective transformation on the image subjected to space-time registration, and extracting an ROI (region of interest) in the image;
4) establishing a multi-task lane line detection model based on an image feature extraction network, a point cloud feature extraction network, a feature fusion network, a lane line generation network and a lane line classification network; the data processed in steps 2) and 3) are taken as the input of the network to obtain the lane line position and the multi-class probability distribution of the lane line; in an example, this specifically comprises the following steps:
inputting the preprocessed road image data into an image feature extraction network to obtain the lane line position feature vector F1 and the lane line category feature vector F2.

A YOLO v3 model is used as the image feature extraction network, and two MLP layers are added as the lane line position feature extraction branch and the lane line category feature extraction branch; formulated as:

F1 = MLP1(YOLOv3(Î))

F2 = MLP2(YOLOv3(Î))

where F1 is the lane line position feature vector, F2 is the lane line category feature vector, and Î is the preprocessed image data.
Inputting the road scene point cloud data subjected to time registration and space registration into a point cloud feature extraction network to obtain the lane line feature vector F3; the point cloud feature extraction network is based on a YOLO v3 model and is formulated as:

F3 = YOLOv3_pc(P̂)

where F3 is the lane line feature vector and P̂ is the point cloud set after space-time registration;
fusing the feature vectors based on the feature fusion network: the weights αi of the laser radar and camera features are adjusted dynamically; the attention weight αi of the laser radar and camera features is calculated as follows:

ei = tanh(W·Fi + U·O)

αi = exp(ei) / Σk exp(ek)

where i ∈ {1, 3}, W and U are the attention weight coefficient matrices obtained by training, and O is the output matrix of the current lane line position;

the feature vector fused by the feature fusion network is the weighted sum of the original feature vectors and the attention weights, formulated as:

F = Σi αi·Fi

where F is the fused feature vector and Fi are the original feature vectors.
Determining the lane line position: selecting a full connection layer to fit the lane line position;
O=FC(F)
wherein O ═ { O ═ O1,o2,...,on}={(x1,y1),(x2,y2),...,(xn,yn) And (4) selecting a quadratic curve as a fitted expression:
y=a0+a1x+a2x2
in the formula, a0,a1,a2Is the position parameter to be found.
Probability distributions belonging to different lane line categories are obtained based on SoftMax function fitting:
P(c)=SoftMax([F,F3])。
5) acquiring images in front of the running vehicle and the corresponding point cloud data, and establishing real lane line positions and category labels for training the multi-task lane line detection model established in step 4), so as to obtain the network parameters of the lane line detection model; the model is trained with minimizing the loss function as the objective;
the loss function L in step 5) includes a center coordinate error function Lcoor and a classification error function Lclas, where:

L = Lcoor + Lclas

The center coordinate error function Lcoor evaluates the accuracy of the horizontal and vertical coordinates of the center of the target prediction box, specifically:

Lcoor = λcoor · Σi Σj 1ij^obj · [(xi − x̂i)² + (yi − ŷi)²]

where λcoor is the weight coefficient of the coordinate error; B is the number of candidate boxes generated for each grid; 1ij^obj indicates whether a lane line exists in the jth prediction box of the ith grid, taking the value 1 if a lane line exists and 0 otherwise; xi and yi are respectively the abscissa and ordinate of the center point predicted by the ith sliding window, and x̂i, ŷi are the corresponding real abscissa and ordinate. The detected lane lines include 6 categories, i.e. class = {no lane line, white solid line, white dotted line, white double solid line, single yellow solid line, double yellow solid line}, and the classification error function Lclas is:

Lclas = Σi 1i^obj · Σc∈class (pi(c) − p̂i(c))²

where pi(c) and p̂i(c) respectively represent the real probability and the predicted probability that the lane line target in the ith cell belongs to category c.
6) using the trained model to detect the current scene: acquiring image and point cloud data in the scene, processing them as in steps 2) and 3), and using the processed data as the input of the lane line detection model to obtain the lane line position and lane line type in front of the current running vehicle.
While the invention has been described in terms of its preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A lane line detection system, comprising: a camera signal acquisition module, a laser radar signal acquisition module, a time-space registration module, a camera signal preprocessing module and a multi-task lane line detection module;
the camera signal acquisition module is used for acquiring image data of a road in front of a running vehicle;
the laser radar signal acquisition module is used for sensing point cloud data of a road scene in front of a running vehicle;
the spatiotemporal registration module comprises: sensor data temporal registration and sensor data spatial registration; the sensor data time registration is used for unifying the data acquired by the camera signal acquisition module and the laser radar signal acquisition module to the same time coordinate system; the sensor data space registration is used for converting the data acquired by the camera signal acquisition module and the laser radar signal acquisition module into the same coordinate system;
the camera signal preprocessing module is used for performing inverse perspective transformation on road image data subjected to time registration and space registration to extract an ROI (region of interest) in the image;
the multi-task lane line detection module is used for establishing a multi-task lane line detection model and comprises: an image feature extraction module, a point cloud feature extraction module, a feature fusion module, a lane line generation module and a lane line classification module;
the image feature extraction module inputs the preprocessed road image data into an image feature extraction network to obtain a lane line position feature vector F1And lane line category feature vector F2
The point cloud feature extraction module inputs the road scene point cloud data subjected to time registration and space registration into a point cloud feature extraction network to obtain a lane line feature vector F3;
The feature fusion module fuses the lane line position feature vector F1 and the lane line feature vector F3 based on a feature fusion network, dynamically adjusts the attention weights of the laser radar and camera features according to the input feature vectors, and calculates the weighted sum of the attention weights and the original feature vectors as the fused lane line position feature vector;
the lane line generation module inputs the lane line position characteristic vector obtained after fusion into a lane line generation network, determines the position of a lane line and fits the lane line based on a least square method;
the lane line classification module is used for fusing the obtained position feature vector and the lane line category feature vector F2And inputting the lane line classification network and outputting the multi-class probability distribution of lane line detection.
2. The lane line detection system of claim 1, wherein the camera signal preprocessing module is configured to perform inverse perspective transformation and extract the ROI; let the height of the camera above the ground be h, the corresponding pitch angle be θ, the yaw angle be α, fu the equivalent focal length of the image plane in the u direction, fv the equivalent focal length of the image plane in the v direction, and (cu, cv) the optical center of the image plane; these parameters are determined by the camera installation position, and the correspondence between any point on the road surface and the image coordinate system is then determined through a transformation matrix T:

[transformation matrix T — formula given as an image in the original publication]

where a1 = cosθ, a2 = cosα, b1 = sinθ, b2 = sinα; the generalized inverse of the transformation matrix T is solved through singular value decomposition to complete the inverse perspective transformation of the image.
3. The lane line detection system of claim 1, wherein the lane line position feature vector F1 and the lane line category feature vector F2 are calculated as follows: a YOLO v3 model is used as the image feature extraction network, and two MLP layers are added as the lane line position feature extraction branch and the lane line category feature extraction branch; formulated as:

F1 = MLP1(YOLOv3(Î))

F2 = MLP2(YOLOv3(Î))

where F1 is the lane line position feature vector, F2 is the lane line category feature vector, and Î is the preprocessed image data.
4. The lane line detection system of claim 1, wherein the lane line feature vector F3 is calculated as follows: the point cloud feature extraction network is based on a YOLO v3 model and is formulated as:

F3 = YOLOv3_pc(P̂)

where F3 is the lane line feature vector and P̂ is the point cloud set after space-time registration.
5. A lane line detection method is characterized by comprising the following steps:
1) acquiring an image set I and a point cloud set P in front of a running vehicle;
2) performing time registration and space registration on the image set I and the point cloud set P;
3) performing inverse perspective transformation on the image subjected to space-time registration, and extracting an ROI (region of interest) in the image;
4) establishing a multi-task lane line detection model;
5) acquiring images in front of the running vehicle and the corresponding point cloud data, and establishing real lane line positions and category labels for training the multi-task lane line detection model established in step 4), so as to obtain the network parameters of the lane line detection model; the model is trained with minimizing the loss function as the objective;
6) using the trained model to detect the current scene: acquiring image and point cloud data in the scene, processing them as in steps 2) and 3), and using the processed data as the input of the lane line detection model to obtain the lane line position and lane line type in front of the current running vehicle.
6. The method according to claim 5, wherein the temporal registration of the image set I and the point cloud set P is performed by a least squares registration method, an interpolation extrapolation method or a Lagrangian interpolation method.
7. The lane line detection method according to claim 6, wherein the spatial registration of the image set I and the point cloud set P specifically comprises: the conversion between the laser radar coordinate system and the camera coordinate system is realized through the image coordinate system, the pixel coordinate system and the world coordinate system as intermediaries; the point cloud set after space-time registration is recorded as P̂, and the image data is recorded as Î.
8. The lane line detection method according to claim 5, wherein the step 4) specifically includes: inputting the preprocessed road image data into an image feature extraction network to obtain the lane line position feature vector F1 and the lane line category feature vector F2;

a YOLO v3 model is used as the image feature extraction network, and two MLP layers are added as the lane line position feature extraction branch and the lane line category feature extraction branch; formulated as:

F1 = MLP1(YOLOv3(Î))

F2 = MLP2(YOLOv3(Î))

where F1 is the lane line position feature vector, F2 is the lane line category feature vector, and Î is the preprocessed image data.
9. The lane line detection method according to claim 8, wherein the step 4) further comprises: inputting the road scene point cloud data subjected to time registration and space registration into a point cloud feature extraction network to obtain the lane line feature vector F3; the point cloud feature extraction network is based on a YOLO v3 model and is formulated as:

F3 = YOLOv3_pc(P̂)

where F3 is the lane line feature vector and P̂ is the point cloud set after space-time registration.
10. The lane line detection method according to claim 9, wherein the step 4) further comprises fusing the feature vectors based on the feature fusion network: the weights αi of the laser radar and camera features are adjusted dynamically; the attention weight αi of the laser radar and camera features is calculated as follows:

ei = tanh(W·Fi + U·O)

αi = exp(ei) / Σk exp(ek)

where i ∈ {1, 3}, W and U are the attention weight coefficient matrices obtained by training, and O is the output matrix of the current lane line position;

the feature vector fused by the feature fusion network is the weighted sum of the original feature vectors and the attention weights, formulated as:

F = Σi αi·Fi

where F is the fused feature vector and Fi are the original feature vectors.
CN202110733838.1A 2021-06-30 2021-06-30 Lane line detection system and method Pending CN113449650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110733838.1A CN113449650A (en) 2021-06-30 2021-06-30 Lane line detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110733838.1A CN113449650A (en) 2021-06-30 2021-06-30 Lane line detection system and method

Publications (1)

Publication Number Publication Date
CN113449650A true CN113449650A (en) 2021-09-28

Family

ID=77814374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110733838.1A Pending CN113449650A (en) 2021-06-30 2021-06-30 Lane line detection system and method

Country Status (1)

Country Link
CN (1) CN113449650A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063762A (en) * 2022-05-20 2022-09-16 广州文远知行科技有限公司 Method, device and equipment for detecting lane line and storage medium
CN115273460A (en) * 2022-06-28 2022-11-01 重庆长安汽车股份有限公司 Multi-mode perception fusion vehicle lane change prediction method, computer equipment and storage medium
CN116152761A (en) * 2022-12-26 2023-05-23 小米汽车科技有限公司 Lane line detection method and device
CN116152761B (en) * 2022-12-26 2023-10-17 小米汽车科技有限公司 Lane line detection method and device
CN116503383A (en) * 2023-06-20 2023-07-28 上海主线科技有限公司 Road curve detection method, system and medium
CN116503383B (en) * 2023-06-20 2023-09-12 上海主线科技有限公司 Road curve detection method, system and medium

Similar Documents

Publication Publication Date Title
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN113449650A (en) Lane line detection system and method
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
US11532151B2 (en) Vision-LiDAR fusion method and system based on deep canonical correlation analysis
CN108909624B (en) Real-time obstacle detection and positioning method based on monocular vision
US9626599B2 (en) Reconfigurable clear path detection system
CN111027461B (en) Vehicle track prediction method based on multi-dimensional single-step LSTM network
CN111965636A (en) Night target detection method based on millimeter wave radar and vision fusion
CN114091598B (en) Multi-vehicle cooperative environment sensing method based on semantic-level information fusion
CN107985189A (en) Towards driver's lane change Deep Early Warning method under scorch environment
CN112668560B (en) Pedestrian detection method and system for pedestrian flow dense area
CN113822221A (en) Target detection method based on antagonistic neural network and multi-sensor fusion
Jiang et al. Target detection algorithm based on MMW radar and camera fusion
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN112084928A (en) Road traffic accident detection method based on visual attention mechanism and ConvLSTM network
CN117111055A (en) Vehicle state sensing method based on thunder fusion
CN111161160A (en) Method and device for detecting obstacle in foggy weather, electronic equipment and storage medium
CN116704273A (en) Self-adaptive infrared and visible light dual-mode fusion detection method
CN116258940A (en) Small target detection method for multi-scale features and self-adaptive weights
KR102423218B1 (en) System and method for simultaneously detecting road damage and moving obstacles using deep neural network, and a recording medium recording a computer readable program for executing the method.
Liu et al. Research on security of key algorithms in intelligent driving system
CN116863227A (en) Hazardous chemical vehicle detection method based on improved YOLOv5
CN116740657A (en) Target detection and ranging method based on similar triangles
CN115984568A (en) Target detection method in haze environment based on YOLOv3 network
CN115294551A (en) Construction method of drivable area and lane line detection model based on semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination