CN112339773B - Monocular vision-based non-active lane departure early warning method and system - Google Patents


Info

Publication number
CN112339773B
CN112339773B (granted publication of application CN202011243927.XA)
Authority
CN
China
Prior art keywords
lane line
lane
vehicle
active
line
Prior art date
Legal status
Active
Application number
CN202011243927.XA
Other languages
Chinese (zh)
Other versions
CN112339773A (en)
Inventor
曹玉社
许亮
李峰
Current Assignee
Zhongkehai Micro Beijing Technology Co ltd
Original Assignee
Zhongkehai Micro Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongkehai Micro Beijing Technology Co ltd filed Critical Zhongkehai Micro Beijing Technology Co ltd
Priority to CN202011243927.XA priority Critical patent/CN112339773B/en
Publication of CN112339773A publication Critical patent/CN112339773A/en
Application granted granted Critical
Publication of CN112339773B publication Critical patent/CN112339773B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10Path keeping
    • B60W30/12Lane keeping

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a monocular vision-based non-active lane departure early warning method and system. An image of the road in the vehicle's direction of travel is acquired in real time based on monocular vision; from the acquired image, the positions of the left and right lane lines in the driving direction and the lane line types are obtained through a lane line extraction algorithm; a non-active vehicle departure condition is determined from the obtained lane line positions, lane line types, and the states of the vehicle's left and right turn signals; and, according to the non-active vehicle departure condition, an auxiliary early warning is issued for non-active lane departures occurring while the vehicle is driving. The invention effectively reduces non-active lane departure behavior during driving; such behavior is one of the main causes of illegal driving and traffic accidents.

Description

Monocular vision-based non-active lane departure early warning method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a monocular vision-based non-active lane departure early warning method and system.
Background
Driving by private car has become a common mode of daily travel. Driving safety has an important influence on people's normal life and is receiving increasing attention; driver-assistance systems have emerged in response.
A search of the prior art found the following:
Chinese patent application No. 201910008697.X, filed on January 4, 2019 and entitled "A lane line detection method for assisted driving", discloses a lane line detection method in which image samples are trained with a multi-task convolutional neural network and the correlation between the vehicle and the lane lines is used to assist lane line recognition, improving detection safety in the complex and changeable environments of actual automated driving while reducing hardware computing requirements. That method claims safer driving assistance, higher working efficiency, more accurate recognition, and fewer misjudgments. Compared with the lane departure early warning method of the present invention, it has the following problems:
1. It uses a neural network to extract image features and then trains an SVM model for lane line classification, making it a two-stage method.
2. It does not estimate the distance by which the vehicle itself is offset from the lane center.
3. It cannot use the vehicle's turn-signal state to distinguish active lane changes from non-active lane departures, and therefore cannot issue a targeted warning.
In summary, existing driving assistance technology does not adequately meet drivers' needs for driving-assistance early warning, and no description or report of a technology similar to the present invention has been found at home or abroad.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a monocular vision-based non-active lane departure early warning method and system.
The invention is realized by the following technical scheme.
According to one aspect of the invention, a monocular vision-based non-active lane departure warning method is provided, which comprises the following steps:
acquiring an image of a road in a traveling direction in real time based on monocular vision;
on the basis of the acquired image, the positions of two lane lines on the left and right in the driving direction and the type of the lane line are obtained through a lane line extraction algorithm;
judging the deviation condition of the non-active vehicle according to the obtained lane line position, the lane line type and the states of left and right steering lamps of the vehicle;
and according to the deviation condition of the non-active vehicle, performing auxiliary early warning on the non-active lane deviation in the driving process of the vehicle.
Preferably, the lane line extraction algorithm comprises two stages of training and deployment; wherein:
in the training stage, marking lane line areas on a large number of pictures of actual road conditions to form a training data set; training a deep neural network by using a training data set, setting network branches and corresponding loss functions on the basis of the deep neural network, and generating a lane line detection model;
and in the deployment stage, the generated lane line detection model is utilized to acquire the lane line position and the lane line type of the image of the real-time advancing direction road.
Preferably, in the training stage, the method for marking the lane line region on the picture of the actual road condition includes:
considering the lane line detection problem as a row-based anchor selection problem, expressed as follows:

P_{i,j,:} = f_{ij}(X), s.t. i ∈ [1, C], j ∈ [1, h]

where X denotes the global image feature extracted by the CNN; f_{ij} denotes the classification function of the i-th lane line at the j-th anchor row; and P_{i,j,:} is a (w+1)-dimensional vector giving the probability of the i-th lane line at each anchor position in the j-th anchor row, the extra dimension representing the case that the j-th anchor row contains the i-th lane line at no position;
dividing the picture of the actual road condition into h × w grids, where h denotes the number of anchor rows and w the number of grid columns; with the number of lane lines to be detected set to C, at most C grids per anchor row are marked as lane line positions.
Preferably, the network branch comprises two fully-connected layers and one Softmax layer for obtaining the type of lane line.
Preferably, the total loss function L_total is:

L_total = L_cls + α·L_str + β·L_type

where:
L_cls is the classification loss function;
L_str is the structural loss function;
L_type is the type loss function;
α and β are loss weighting coefficients;
the structural loss function L_str comprises a continuity loss L_sim and a lane line shape loss L_shp:

L_str = L_sim + λ·L_shp
Preferably, the generated lane line detection model is additionally trained with an auxiliary segmentation task, used for the model to learn the semantic features of the lane lines; the loss function of the auxiliary segmentation task is L_seg and its loss weighting coefficient is γ.
Preferably, the deployment phase further includes:
and filtering falsely detected lane lines: when either of the following occurs for a detected lane line, it is judged that no lane line is detected or that the detection has low confidence, i.e., it is a false detection:
Case 1: fewer than a set threshold of three point pairs are detected;
Case 2: among the detected point pairs, the near-to-far decreasing trend of the spacing between adjacent point pairs is violated, and this occurs two or more times.
Preferably, the determining the deviation condition of the non-active vehicle according to the obtained lane line position and the lane line type includes:
the offset distance of the vehicle relative to the exact center of the lane is:

departance = imgW/2 − (x_{0,0} + x_{0,1})/2

where:
imgW/2 is the abscissa of the vehicle center in the image;
(x_{0,0} + x_{0,1})/2 is the abscissa of the lane center in the image, with x_0 denoting the nearest lane line point pair;
departance is the image-space distance by which the vehicle center is offset from the lane center; when departance > 0 the vehicle deviates to the right, and when departance < 0 the vehicle deviates to the left.
Preferably, in the method for determining the non-active vehicle departure condition:
assuming an average lane width of 3.7 meters, the actual distance by which the vehicle center deviates from the lane center is:

carDepartance = departance × (3.7/(x_{0,1} − x_{0,0}))

According to the vehicle's offset distance and the types of the left and right lane lines, a non-active vehicle departure is determined when the offset exceeds 0.76 meters, the lane line on the departure side is a solid line, and the corresponding turn signal is not on.
According to another aspect of the present invention, there is provided a monocular vision-based inactive lane departure warning system, comprising:
the camera is used for acquiring an image of a road in a traveling direction in real time;
the non-active vehicle deviation estimation module obtains the positions of the left lane line and the right lane line in the driving direction and the type of the lane line through a lane line extraction algorithm according to the acquired road image in the driving direction; judging the deviation condition of the non-active vehicle according to the obtained lane line position, the lane line type and the states of left and right steering lamps of the vehicle;
and the auxiliary early warning module is used for carrying out auxiliary early warning on the non-active lane departure occurring in the vehicle driving process through a set threshold value according to the non-active vehicle departure condition.
Preferably, the camera is arranged at the position right in the middle above the front window of the vehicle.
Due to the adoption of the scheme, compared with the prior art, the invention has the following beneficial effects:
the monocular vision based non-active lane departure early warning method and the monocular vision based non-active lane departure early warning system can effectively reduce the non-active lane departure behavior of the vehicle in the driving process, and the non-active lane departure behavior is one of the main factors of vehicle illegal driving and driving accidents.
According to the monocular vision based non-active lane departure early warning method and system provided by the invention, the lane line detection problem is regarded as the anchor point selection problem, the lane line position and type are directly obtained through the designed network structure prediction, and the method and system belong to a one-stage method.
The monocular vision based non-active lane departure early warning method and the monocular vision based non-active lane departure early warning system belong to end-to-end neural networks, are more intuitive, have lower calculated amount and shorter reasoning time, and are easy to deploy.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of the operation of acquiring an image of a road in a direction of travel in real time in a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the position of the left and right lane lines and the type of the lane lines in the driving direction according to a preferred embodiment of the present invention;
FIG. 3 is a diagram of the overall framework of the neural network during the training phase in accordance with a preferred embodiment of the present invention;
FIG. 4 is a diagram of a neural network structure during a deployment phase in accordance with a preferred embodiment of the present invention;
FIG. 5 is a flow chart of a low quality lane marking (false positive) filtering algorithm in accordance with a preferred embodiment of the present invention;
fig. 6 is a flow chart of an inactive lane departure warning in a preferred embodiment of the present invention.
Fig. 7 is a flowchart of a monocular vision based inactive lane departure warning method in a preferred embodiment of the present invention.
Detailed Description
The following examples illustrate the invention in detail. The embodiments are implemented on the premise of the technical solution of the invention and give a detailed implementation mode and a specific operation process. It should be noted that persons skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the invention.
The embodiment of the invention provides a monocular vision-based non-active lane departure early warning method that acquires forward-view image data in real time, uses lane line departure calculation to prevent illegal driving behaviors such as line pressing, and reduces the probability of accidents caused by vehicle deviation.
The non-active lane departure warning method based on monocular vision provided by the embodiment, as shown in fig. 7, includes the following steps:
step S1, acquiring the image of the road in the advancing direction in real time based on monocular vision;
step S2, obtaining the positions of the left lane line and the right lane line in the driving direction and the type of the lane line through a lane line extraction algorithm on the basis of the obtained image;
step S3, judging the deviation condition of the non-active vehicle according to the obtained lane line position, lane line type and the states of left and right turn lights of the vehicle;
and step S4, according to the situation of the non-active vehicle departure, carrying out auxiliary early warning on the non-active lane departure occurring in the driving process of the vehicle.
In step S1, an image of the road in the traveling direction is acquired in real time by a camera mounted directly above the middle of the front window.
As a preferred embodiment, in step S2, the lane line extraction algorithm includes two stages, namely training (train phase) and deployment (deployment); wherein:
in the training stage, marking lane line areas on a large number of pictures of actual road conditions to form a training data set; training a deep neural network by using a training data set, setting network branches and corresponding loss functions on the basis of the deep neural network, and generating a lane line detection model;
and in the deployment stage, the generated lane line detection model is utilized to acquire the lane line position and the lane line type of the image of the road in the real-time advancing direction.
As a preferred embodiment, in the training phase, the method for marking the lane line area on the picture of the actual road condition includes:
considering the lane line detection problem as a row-based anchor selection problem, expressed as follows:

P_{i,j,:} = f_{ij}(X), s.t. i ∈ [1, C], j ∈ [1, h]

where X denotes the global image feature extracted by the CNN; f_{ij} denotes the classification function of the i-th lane line at the j-th anchor row; and P_{i,j,:} is a (w+1)-dimensional vector giving the probability of the i-th lane line at each anchor position in the j-th anchor row, the extra dimension representing the case that the j-th anchor row contains the i-th lane line at no position;
dividing the picture of the actual road condition into h × w grids, where h denotes the number of anchor rows and w the number of grid columns; with the number of lane lines to be detected set to C, at most C grids per anchor row are marked as lane line positions.
As a preferred embodiment, the network branch comprises two fully connected layers and one Softmax layer for obtaining the type of lane line.
As a preferred embodiment, the total loss function L_total is:

L_total = L_cls + α·L_str + β·L_type

where:
L_cls is the classification loss function;
L_str is the structural loss function;
L_type is the type loss function;
α and β are loss weighting coefficients;
the structural loss function L_str comprises a continuity loss L_sim and a lane line shape loss L_shp:

L_str = L_sim + λ·L_shp
As a preferred embodiment, the training phase further comprises:
performing auxiliary segmentation task training on the generated lane line detection model, used for the model to learn the semantic features of the lane lines; the loss function of the auxiliary segmentation task is L_seg and its loss weighting coefficient is γ.
As a preferred embodiment, the deployment phase further includes filtering falsely detected lane lines: when either of the following occurs for a detected lane line, it is judged that no lane line is detected or that the detection has low confidence, i.e., it is a false detection:
Case 1: fewer than a set threshold of three point pairs are detected;
Case 2: among the detected point pairs, the near-to-far decreasing trend of the spacing between adjacent point pairs is violated, and this occurs two or more times.
As a preferred embodiment, determining the non-active vehicle departure condition in step S3 from the obtained lane line positions, lane line types, and left/right turn signal states includes:
the offset distance of the vehicle relative to the exact center of the lane is:

departance = imgW/2 − (x_{0,0} + x_{0,1})/2

where:
imgW/2 is the abscissa of the vehicle center in the image;
(x_{0,0} + x_{0,1})/2 is the abscissa of the lane center in the image, with x_0 denoting the nearest lane line point pair;
departance is the image-space distance by which the vehicle center is offset from the lane center; when departance > 0 the vehicle deviates to the right, and when departance < 0 the vehicle deviates to the left.
As a preferred embodiment, assuming an average lane width of 3.7 meters, the (approximate) actual distance by which the vehicle center deviates from the lane center is:

carDepartance = departance × (3.7/(x_{0,1} − x_{0,0}))

According to the vehicle's offset distance and the types of the left and right lane lines, a non-active vehicle departure is determined when the offset exceeds 0.76 meters, the lane line on the departure side is a solid line, and the corresponding turn signal is not on.
As a preferred embodiment, in step S4, a vehicle deviation threshold is set, and an auxiliary warning is performed on the behavior of an inactive lane deviation occurring during the driving of the vehicle through threshold comparison.
The technical solutions provided by the above embodiments of the present invention are further described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a work for acquiring an image of a road in a traveling direction in real time based on monocular vision.
Based on monocular vision, the camera is suspended in the middle position above the front window of the vehicle, and the camera can acquire images of the road in the advancing direction in real time.
The positions of the left and right lane lines and the types of the lane lines (single white dotted/solid line, single yellow dotted/solid line, double white dotted/solid line, etc.) in the driving direction are obtained by a lane line extraction algorithm, as shown in fig. 2.
The lane line detection algorithm is based on a deep neural network, and the algorithm is divided into a training phase (train phase) and a deployment phase (deploy phase). In the training phase, lane line regions are marked on a large number of actual road condition pictures captured by the camera, about 130,000 pictures in total, to form a training data set. The input size of the neural network is fixed at 800 × 288, where w is 800 and h is 288. In the training phase, the learning of the neural network is driven by several purpose-designed loss functions. The overall framework of the neural network in the training phase is shown in FIG. 3:
Unlike conventional lane line detection algorithms and segmentation-based lane line detection algorithms, the framework provided by the above embodiment of the invention treats lane line detection as a row-based anchor selection problem. In the training data calibration, the image is divided into h × w grids, where h denotes the number of anchor rows and w denotes the number of grid columns. The number of lane lines to be detected, C, is set in advance; for example, C = 2 means that only the two lane lines to the left and right of the vehicle are detected. Then, for each anchor row, at most C grids are labeled as lane line positions.
Suppose the global image feature extracted by the CNN is denoted X and f_{ij} denotes the classification function of the i-th lane line at the j-th anchor row; the lane line detection problem is then expressed as:

P_{i,j,:} = f_{ij}(X), s.t. i ∈ [1, C], j ∈ [1, h]

where P_{i,j,:} is a (w+1)-dimensional vector giving the probability of the i-th lane line at each anchor position in the j-th anchor row, with the extra dimension representing the case that the j-th anchor row contains the i-th lane line at no position.
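As an illustration (not part of the patent text), the row-anchor output described above can be decoded into lane positions with a simple argmax over each (lane, row) probability vector; the tensor layout below is an assumption consistent with the (w+1)-dimensional description:

```python
import numpy as np

def decode_row_anchors(P, no_lane_idx=-1):
    """Decode row-anchor classification output into lane positions.

    P: array of shape (C, h, w + 1) -- for each of C lane lines and h anchor
    rows, a score vector over w grid columns plus one extra "no lane in this
    row" bin (assumed layout, matching the description in the text).
    Returns a (C, h) array of grid-column indices, -1 where no lane is present.
    """
    C, h, w_plus_1 = P.shape
    w = w_plus_1 - 1
    cols = np.argmax(P, axis=2)      # most probable bin per (lane, row)
    cols[cols == w] = no_lane_idx    # last bin means "lane absent in this row"
    return cols
```

A usage sketch: for a prediction with C = 2 and h = 288, `decode_row_anchors` yields, per anchor row, the grid column of each of the two lane lines or −1 where the "no lane" bin wins.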
Denoting the ground-truth lane line label (in one-hot form) by T_{i,j,:}, the classification loss L_cls is expressed as:

L_cls = Σ_{i=1}^{C} Σ_{j=1}^{h} L_CE(P_{i,j,:}, T_{i,j,:})

where L_CE denotes the cross-entropy loss.
In addition, to better capture the structural characteristics of lane lines, two additional loss functions are used to constrain them. The first is the continuity of lane lines: the lane line anchor positions in adjacent anchor rows should be close to each other, which is described by the continuity loss function:

L_sim = Σ_{i=1}^{C} Σ_{j=1}^{h−1} ||P_{i,j,:} − P_{i,j+1,:}||_1

where P_{i,j,:} denotes the probability vector of the i-th lane line at the j-th anchor row, P_{i,j+1,:} the probability vector at the (j+1)-th anchor row, and ||·||_1 the L1 norm.
The second structural loss focuses on the shape of the lane line: since a lane line is in general approximately straight, its second derivative should be close to zero. The loss function is:

L_shp = Σ_{i=1}^{C} Σ_{j=1}^{h−2} ||(Loc_{i,j} − Loc_{i,j+1}) − (Loc_{i,j+1} − Loc_{i,j+2})||_1

where

Loc_{i,j} = Σ_{k=1}^{w} k · P_{i,j,k}

denotes the expected anchor position of the i-th lane line in the j-th anchor row. L_shp can thus be regarded as an (approximate) discrete second derivative of the lane line shape.
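The expectation Loc_{i,j} amounts to a soft-argmax over the w grid columns of one anchor row. A minimal sketch (the list-based interface and the renormalization over the first w entries are assumptions, not patent text):

```python
def expected_location(p_row):
    """Expected (soft-argmax) lane position for one anchor row.

    p_row: length-(w+1) probability vector; the first w entries cover grid
    columns 1..w and the last entry is the "no lane" bin (assumed layout).
    Returns sum_k k * p_k after renormalizing over the w column entries.
    """
    w = len(p_row) - 1
    total = sum(p_row[:w])
    # Expectation of the column index under the renormalized distribution.
    return sum(k * p / total for k, p in zip(range(1, w + 1), p_row[:w]))
```

With such per-row expectations, the shape loss above compares differences of differences of `expected_location` across three consecutive anchor rows.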
By linearly weighting the two loss functions, the overall structural loss can be described as:
L_str = L_sim + λ·L_shp
In addition, to enable the neural network to predict the type of lane line, a network branch is added comprising two fully connected layers and a Softmax layer, whose loss function is denoted L_type. Meanwhile, to help the neural network better learn the semantic features of the lane lines, an auxiliary segmentation task is added in the training phase; as shown in the architecture diagram above, its loss function is denoted L_seg. The overall loss function in the training phase is then:

L_total = L_cls + α·L_str + β·L_seg + γ·L_type

where α, β, γ are loss weighting coefficients. During training, λ, α, β, γ are all set to 1 and the batch size is set to 32. A total of 100 epochs are trained to obtain the final lane line detection model.
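The linear combination of the training losses can be sketched directly from the formula (scalar placeholders stand in for the actual loss-tensor computations):

```python
def total_loss(l_cls, l_sim, l_shp, l_seg, l_type,
               lam=1.0, alpha=1.0, beta=1.0, gamma=1.0):
    """Combine the individual losses as in the training-stage formula.

    L_str   = L_sim + lam * L_shp
    L_total = L_cls + alpha * L_str + beta * L_seg + gamma * L_type
    All weights default to 1, matching the settings stated in the text.
    """
    l_str = l_sim + lam * l_shp
    return l_cls + alpha * l_str + beta * l_seg + gamma * l_type
```

In a real training loop the same expression would be applied to framework loss tensors; keeping it as one function makes the weighting coefficients easy to tune.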
It should be noted that the auxiliary lane line segmentation task branch is only used in training, and in the model deployment stage, the lane line segmentation branch is removed, that is, the neural network structure in the deployment stage is as shown in fig. 4.
A low-quality (false-detection) lane line filtering algorithm is adopted to avoid unnecessary warnings. The filtering logic is as follows: a normal lane line detection result theoretically contains 18 point pairs, and, because of the camera's viewing angle, the spacing between the 18 point pairs gradually decreases from near to far. When either of the following occurs for a detected lane line, it is treated as not detected or as a low-confidence detection.
Case 1: fewer than three point pairs are detected.
Case 2: among the detected point pairs, the near-to-far decreasing trend of the spacing between adjacent point pairs is violated, and this occurs two or more times.
Mathematically:

x_{i,1} − x_{i,0} > x_{i+1,1} − x_{i+1,0}  (theoretically, each pair of adjacent near/far point pairs should satisfy this inequality)

where:
x_{i,1} — x coordinate of the nearer point pair on the right lane line;
x_{i,0} — x coordinate of the nearer point pair on the left lane line;
x_{i+1,1} — x coordinate of the farther point pair on the right lane line;
x_{i+1,0} — x coordinate of the farther point pair on the left lane line.
Fig. 5 shows a flow chart of a low-quality lane line (false detection) filtering algorithm.
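The two filtering cases can be sketched as a small predicate; the list-of-tuples interface for the point pairs is an assumption for illustration:

```python
def is_false_detection(points, min_pairs=3, max_violations=2):
    """Return True when a lane line detection should be discarded.

    points: list of (x_left, x_right) point pairs ordered from near to far
    (a full detection is expected to contain 18 such pairs). Case 1: too few
    pairs. Case 2: the near-to-far decreasing-spacing trend is violated two
    or more times.
    """
    if len(points) < min_pairs:                 # case 1: fewer than 3 pairs
        return True
    violations = 0
    for (l0, r0), (l1, r1) in zip(points, points[1:]):
        if (r0 - l0) <= (r1 - l1):              # spacing must shrink near -> far
            violations += 1
    return violations >= max_violations         # case 2: trend broken >= 2 times
```

A detection that passes this predicate proceeds to the lane departure judgment; one that fails is dropped rather than triggering a warning.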
For lane departure judgment: according to the national standard "Road Traffic Signs and Markings", lanes on ordinary expressways and urban roads are designed to be 3.5 m or 3.75 m wide, so the lane departure determination assumes a lane width of 3.7 meters. In addition, since the camera is mounted directly above the middle of the front window, the vehicle center in the captured image theoretically coincides (or nearly coincides) with the image center, so the image center is taken as an approximation of the vehicle center.
Based on the above conditions, the offset distance of the vehicle relative to the lane center can be calculated as:

departance = imgW/2 − (x_{0,0} + x_{0,1})/2

where:
imgW/2 — abscissa of the vehicle center in the image;
(x_{0,0} + x_{0,1})/2 — abscissa of the lane center in the image, with x_0 denoting the nearest lane line point pair;
departance — the image-space distance by which the vehicle center is offset from the lane center; when departance > 0 the vehicle deviates to the right, and when departance < 0 the vehicle deviates to the left.
With the assumed average lane width of 3.7 meters, the (approximate) actual distance by which the vehicle center deviates from the lane center is:

carDepartance = departance × (3.7/(x_{0,1} − x_{0,0}))

According to the vehicle's offset distance and the types of the left and right lane lines, the system issues warning information when the offset exceeds 0.76 meters, the lane line on the departure side is a solid line (a single solid line, double solid line, etc., where lane changing is not permitted), and the corresponding turn signal is not on.
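Putting the formulas together, the departure decision can be sketched as follows; the function and parameter names, and the use of the absolute offset against the threshold, are illustrative assumptions rather than patent text:

```python
def lane_departure_warning(img_w, nearest_pair, side_is_solid, turn_signal_on,
                           lane_width_m=3.7, threshold_m=0.76):
    """Non-active lane departure decision following the formulas above.

    img_w: image width in pixels; nearest_pair: (x00, x01), x coordinates of
    the nearest left/right lane line points. Returns (offset_m, warn), where
    offset_m > 0 means the vehicle deviates to the right.
    """
    x00, x01 = nearest_pair
    departance = img_w / 2 - (x00 + x01) / 2                 # pixel offset from lane center
    car_departance = departance * (lane_width_m / (x01 - x00))  # convert to meters
    warn = (abs(car_departance) > threshold_m   # offset beyond 0.76 m
            and side_is_solid                   # departure-side line is solid
            and not turn_signal_on)             # no matching turn signal -> non-active
    return car_departance, warn
```

For example, with an 800-pixel-wide image and nearest pair (200, 400), the vehicle center (column 400) is 100 pixels right of the lane center (column 300), i.e., about 1.85 m, which would trigger a warning when the right line is solid and the right turn signal is off.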
As shown in fig. 6, it is a flow chart of the inactive lane departure warning.
Another embodiment of the present invention provides a monocular vision-based non-active lane departure warning system, including:
the camera, which acquires an image of the road in the traveling direction in real time;
the non-active lane departure estimation module, which obtains the positions of the left and right lane lines in the driving direction and the lane line types through a lane line extraction algorithm from the acquired road image, and judges the non-active lane departure condition according to the obtained lane line positions, lane line types, and the states of the vehicle's left and right turn signals;
and the auxiliary early warning module, which performs auxiliary early warning of non-active lane departure during driving according to the departure condition and a set threshold.
As a preferred embodiment, the camera is mounted centrally above the front windshield of the vehicle.
The monocular-vision-based non-active lane departure warning method and system can effectively reduce non-active lane departure during driving, which is one of the main causes of illegal driving and traffic accidents.
It should be noted that the steps in the method provided by the present invention may be implemented by the corresponding modules, devices, and units of the system; those skilled in the art may refer to the technical solution of the method to derive the composition of the system, i.e., the method embodiments may be understood as preferred examples for constructing the system, and details are not repeated here.
Those skilled in the art will appreciate that, beyond implementing the system and its various devices purely as computer-readable program code, the method steps can equally be realized by embodying the system and its devices as logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. The system and its various devices may therefore be regarded as hardware components, and the means within them for performing the various functions may be regarded either as structures within a hardware component, or as both software modules for performing the method and structures within a hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (8)

1. A monocular vision based non-active lane departure early warning method is characterized by comprising the following steps:
acquiring an image of a road in the vehicle traveling direction in real time based on monocular vision;
on the basis of the acquired image, the positions of two lane lines on the left and right in the driving direction and the type of the lane line are obtained through a lane line extraction algorithm;
judging the non-active vehicle departure condition according to the obtained lane line positions, lane line types, and the states of the vehicle's left and right turn signals;
performing auxiliary early warning of non-active lane departure during driving according to the non-active vehicle departure condition;
the lane line extraction algorithm comprises two stages of training and deployment; wherein:
in the training stage, marking lane line areas on a large number of pictures of actual road conditions to form a training data set; training a deep neural network by using a training data set, setting network branches and corresponding loss functions on the basis of the deep neural network, and generating a lane line detection model;
in the deployment stage, the generated lane line detection model is utilized to acquire the lane line position and the lane line type of the image of the road in the real-time vehicle advancing direction;
in the training stage, the method for marking the lane line area on the picture of the actual road condition comprises the following steps:
treating the lane line detection problem as a row-anchor-based selection problem, expressed as follows:
P_{i,j,:} = f^{ij}(X),  s.t. i ∈ [1,C], j ∈ [1,h]
where X denotes the global image features extracted by the convolutional neural network, f^{ij} denotes the classifier of the ith lane line on the jth anchor row, and P_{i,j,:} is a (w+1)-dimensional vector giving the probability of the ith lane line at each anchor position of the jth anchor row; the additional dimension represents the case in which no position of the jth anchor row contains the ith lane line;
dividing the picture of the actual road condition into h × w grid cells, where h is the number of anchor rows and w is the number of grid columns; with C lane lines to be detected, at most C cells per anchor row are marked as lane line positions.
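The row-anchor formulation above can be sketched as a per-row (w+1)-way classification. The shapes and function names below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

# Sketch of the row-anchor formulation: for each of C lane lines and each of
# h anchor rows, a (w+1)-way classification over w grid columns plus one
# extra "no lane in this row" class. Shapes and names are assumptions.

def row_anchor_probs(logits):
    """logits: array of shape (C, h, w+1) -> probabilities P_{i,j,:}."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def decode_positions(probs):
    """Most likely column per (lane, row); the last class means 'absent'."""
    absent = probs.shape[-1] - 1
    cols = probs.argmax(axis=-1)                 # shape (C, h)
    return np.where(cols == absent, -1, cols)    # -1 = no lane point
```

Classifying over anchor rows rather than over every pixel keeps the output small (C × h decisions instead of a full segmentation map), which is what makes this formulation attractive for real-time deployment.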
2. The monocular vision based non-active lane departure warning method of claim 1, wherein the network branch comprises two fully connected layers and one Softmax layer for obtaining the lane line type;
the total loss function L_total is
L_total = L_cls + α·L_str + β·L_type
where:
L_cls is the classification loss function;
L_str is the structural loss function;
L_type is the type loss function;
α and β are loss weighting coefficients;
the structural loss function L_str comprises a lane-line continuity loss function L_sim and a lane-line shape loss function L_shp:
L_str = L_sim + λ·L_shp
where λ is a weighting coefficient.
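A minimal sketch of the weighting structure of this objective follows; the concrete continuity term (L1 distance between adjacent anchor rows' probability vectors), the placeholder shape term, and the default weights are all assumptions, with only the weighting scheme mirroring the claim:

```python
import numpy as np

# Sketch of the claimed loss structure: L_total = L_cls + alpha*L_str + beta*L_type,
# with L_str = L_sim + lam*L_shp. The continuity term below penalizes differences
# between adjacent anchor rows' probability vectors; the shape term is passed in
# as a placeholder. Names and default weights are assumptions.

def continuity_loss(probs):
    """probs: (C, h, w+1); mean L1 distance between adjacent anchor rows."""
    return np.abs(probs[:, 1:, :] - probs[:, :-1, :]).mean()

def total_loss(l_cls, probs, l_shp, l_type, alpha=1.0, beta=0.5, lam=0.5):
    l_str = continuity_loss(probs) + lam * l_shp  # structural loss L_str
    return l_cls + alpha * l_str + beta * l_type
```

Note that the continuity term vanishes when every anchor row predicts the same distribution, so it only penalizes abrupt row-to-row changes, which is the intended "continuous line" behavior.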
3. The monocular vision based non-active lane departure warning method according to claim 2, wherein the generated lane line detection model is further trained with an auxiliary segmentation task, so that the model learns the semantic features of the lane lines; the loss function of the auxiliary segmentation task is L_seg and its loss weighting coefficient is γ.
4. The monocular vision based inactive lane departure warning method of claim 1, wherein the deployment phase further comprises:
and filtering falsely detected lane lines: when a detected lane line meets either of the following conditions, it is judged that no lane line is detected or that the detection has low credibility, i.e., it is a false detection:
condition one: fewer than three lane-line point pairs are detected;
condition two: among the detected lane-line point pairs, the spacing between adjacent point pairs, taken from near to far, departs from the expected decreasing trend on two or more occasions.
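The two filtering conditions can be sketched as follows. The point-pair representation is an assumption, and, since the translated wording of condition two is ambiguous, this sketch assumes the check fires when the expected near-to-far decrease in pair spacing (a consequence of perspective) is violated two or more times:

```python
# Sketch of the two plausibility checks of claim 4. point_pairs is an assumed
# list of (x_left, x_right) pairs ordered from near (image bottom) to far.

MIN_PAIRS = 3

def is_false_detection(point_pairs):
    """True if the detection should be discarded as unreliable."""
    if len(point_pairs) < MIN_PAIRS:          # condition one: too few pairs
        return True
    widths = [xr - xl for xl, xr in point_pairs]
    # Condition two: under perspective the pair spacing should shrink with
    # distance; count the rows where it widens instead (assumed reading).
    violations = sum(1 for a, b in zip(widths, widths[1:]) if b > a)
    return violations >= 2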
5. The monocular vision based non-active lane departure warning method according to claim 1, wherein said judging the non-active vehicle departure condition according to the obtained lane line position and lane line type comprises:
the offset distance of the vehicle from the exact center of the lane is:
departance = imgW/2 - (x0,0 + x0,1)/2
where:
imgW/2 is the abscissa of the vehicle center in the image;
(x0,0 + x0,1)/2 is the abscissa of the lane center in the image, x0 denoting the nearest pair of lane-line points;
departance is the image-space distance by which the vehicle center is offset from the lane center; when departance > 0 the vehicle deviates to the right, and when departance < 0 it deviates to the left.
6. The monocular vision based non-active lane departure warning method according to claim 5, wherein in the judging of the non-active vehicle departure condition:
assuming an average lane width of 3.7 meters, the actual distance by which the vehicle center is offset from the lane center is:
carDepartance = departance*(3.7/(x0,1 - x0,0))
a non-active vehicle departure is judged to occur when, according to the vehicle's offset from the lane line and the types of the left and right lane lines, the offset exceeds 0.76 m, the lane line on the departure side is a solid line, and the corresponding turn signal is not switched on.
7. A monocular vision based non-active lane departure warning system comprising:
the camera is used for acquiring an image of a road in a traveling direction in real time;
the non-active vehicle departure estimation module, which obtains the positions of the left and right lane lines in the driving direction and the lane line types through a lane line extraction algorithm from the acquired road image, and judges the non-active vehicle departure condition according to the obtained lane line positions, lane line types, and the states of the vehicle's left and right turn signals; the lane line extraction algorithm comprises two stages, training and deployment; wherein:
in the training stage, marking lane line areas on a large number of pictures of actual road conditions to form a training data set; training a deep neural network by using a training data set, setting network branches and corresponding loss functions on the basis of the deep neural network, and generating a lane line detection model;
in the deployment stage, the generated lane line detection model is utilized to acquire the lane line position and the lane line type of the image of the road in the real-time vehicle advancing direction;
in the training stage, the method for marking the lane line area on the picture of the actual road condition comprises the following steps:
treating the lane line detection problem as a row-anchor-based selection problem, expressed as follows:
P_{i,j,:} = f^{ij}(X),  s.t. i ∈ [1,C], j ∈ [1,h]
where X denotes the global image features extracted by the convolutional neural network, f^{ij} denotes the classifier of the ith lane line on the jth anchor row, and P_{i,j,:} is a (w+1)-dimensional vector giving the probability of the ith lane line at each anchor position of the jth anchor row; the additional dimension represents the case in which no position of the jth anchor row contains the ith lane line;
dividing the picture of the actual road condition into h × w grid cells, where h is the number of anchor rows and w is the number of grid columns; with C lane lines to be detected, at most C cells per anchor row are marked as lane line positions;
and the auxiliary early warning module, which performs auxiliary early warning of non-active lane departure during driving according to the non-active vehicle departure condition and a set threshold.
8. The monocular vision based non-active lane departure warning system of claim 7, wherein the camera is mounted centrally above the front windshield.
CN202011243927.XA 2020-11-10 2020-11-10 Monocular vision-based non-active lane departure early warning method and system Active CN112339773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011243927.XA CN112339773B (en) 2020-11-10 2020-11-10 Monocular vision-based non-active lane departure early warning method and system


Publications (2)

Publication Number Publication Date
CN112339773A CN112339773A (en) 2021-02-09
CN112339773B true CN112339773B (en) 2021-12-14

Family

ID=74362362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011243927.XA Active CN112339773B (en) 2020-11-10 2020-11-10 Monocular vision-based non-active lane departure early warning method and system

Country Status (1)

Country Link
CN (1) CN112339773B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449635B (en) * 2021-06-28 2023-10-31 吉林大学 Lane departure early warning method based on driving habit
CN113428177B (en) * 2021-07-16 2023-03-14 中汽创智科技有限公司 Vehicle control method, device, equipment and storage medium
CN113553938B (en) * 2021-07-19 2024-05-14 黑芝麻智能科技(上海)有限公司 Seat belt detection method, apparatus, computer device, and storage medium
CN113581196B (en) * 2021-08-30 2023-08-22 上海商汤临港智能科技有限公司 Method and device for early warning of vehicle running, computer equipment and storage medium
CN116682087B (en) * 2023-07-28 2023-10-31 安徽中科星驰自动驾驶技术有限公司 Self-adaptive auxiliary driving method based on space pooling network lane detection
CN117152707B (en) * 2023-10-31 2024-03-22 武汉未来幻影科技有限公司 Calculation method and device for offset distance of vehicle and processing equipment

Citations (10)

Publication number Priority date Publication date Assignee Title
EP2535882A1 (en) * 2011-06-17 2012-12-19 Clarion Co., Ltd. Lane departure warning device
KR20130076108A (en) * 2011-12-28 2013-07-08 전자부품연구원 Lane departure warning system
JP2014013455A (en) * 2012-07-03 2014-01-23 Clarion Co Ltd Lane departure warning apparatus
JP2014085805A (en) * 2012-10-23 2014-05-12 Isuzu Motors Ltd Lane departure prevention apparatus
CN104029680A (en) * 2014-01-02 2014-09-10 上海大学 Lane departure warning system and method based on monocular camera
CN108297867A (en) * 2018-02-11 2018-07-20 江苏金羿智芯科技有限公司 A kind of lane departure warning method and system based on artificial intelligence
CN108437893A (en) * 2018-05-16 2018-08-24 奇瑞汽车股份有限公司 A kind of method for early warning and device of vehicle lane departure
CN110517521A (en) * 2019-08-06 2019-11-29 北京航空航天大学 A kind of lane departure warning method based on road car fusion perception
CN111539303A (en) * 2020-04-20 2020-08-14 长安大学 Monocular vision-based vehicle driving deviation early warning method
CN111775947A (en) * 2020-07-17 2020-10-16 辽宁工业大学 Monocular vision lower lane identification and deviation early warning method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN102785661B (en) * 2012-08-20 2015-05-13 深圳先进技术研究院 Lane departure control system and lane departure control method
CN111259706B (en) * 2018-12-03 2022-06-21 魔门塔(苏州)科技有限公司 Lane line pressing judgment method and system for vehicle
CN111002990B (en) * 2019-12-05 2021-06-08 华南理工大学 Lane departure early warning method and system based on dynamic departure threshold


Non-Patent Citations (3)

Title
A monocular-vision-based lane departure detection and warning method; Guo Ziyi; Machinery Manufacturing; 2012-05-20 (No. 05); full text *
Development of a monocular-vision lane departure warning system; Lyu Keyan et al.; Journal of Computer Applications; 2012-12-31; full text *
Monocular-vision-based forward vehicle detection, tracking and ranging; Zhao Xuan; China Master's Theses Full-text Database, Engineering Science and Technology II; 2018-07-15 (No. 7); full text *

Also Published As

Publication number Publication date
CN112339773A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN112339773B (en) Monocular vision-based non-active lane departure early warning method and system
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN110097109B (en) Road environment obstacle detection system and method based on deep learning
CN106652465B (en) Method and system for identifying abnormal driving behaviors on road
CN107609491B (en) Vehicle illegal parking detection method based on convolutional neural network
CN104008645B (en) One is applicable to the prediction of urban road lane line and method for early warning
CN111104903B (en) Depth perception traffic scene multi-target detection method and system
CN101916383B (en) Vehicle detecting, tracking and identifying system based on multi-camera
CN112614373B (en) BiLSTM-based weekly vehicle lane change intention prediction method
CN112329684B (en) Pedestrian crossing road intention recognition method based on gaze detection and traffic scene recognition
He et al. A feature fusion method to improve the driving obstacle detection under foggy weather
CN115294767A (en) Real-time detection and traffic safety early warning method and device for highway lane lines
CN114724063B (en) Road traffic incident detection method based on deep learning
CN112215073A (en) Traffic marking line rapid identification and tracking method under high-speed motion scene
Helala et al. Road boundary detection in challenging scenarios
Panda et al. Application of Image Processing In Road Traffic Control
CN116524420B (en) Key target detection method and system in traffic scene
Zaman et al. Deep Learning Approaches for Vehicle and Pedestrian Detection in Adverse Weather
TWI823819B (en) Driving assistance system and driving assistance computation method
Yu et al. An Improved YOLO for Road and Vehicle Target Detection Model
Zhang MASFF: Multiscale Adaptive Spatial Feature Fusion Method for vehicle recognition
Gao et al. Research on detection method of traffic anomaly based on improved YOLOv3
Li et al. Research and Improvement of Object Detection Algorithm based on Regression Method
Cheng et al. DCLane for Real-time Understanding of Lane Markings
Qie et al. Recognition of occluded pedestrians from the driver's perspective for extending sight distance and ensuring driving safety at signal-free intersections

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant