CN108009524A - Lane line detection method based on a fully convolutional network - Google Patents
Lane line detection method based on a fully convolutional network
- Publication number: CN108009524A
- Application number: CN201711420524.6A
- Authority
- CN
- China
- Prior art keywords
- lane line
- lane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The present invention provides a lane line detection method based on a fully convolutional network, and relates to the field of traffic information detection. The method performs a probabilistic operation on the output feature maps of a fully convolutional lane line detection network to obtain, for every block of the input image, the probability that a lane line appears there, and sets a prediction probability threshold to extract and detect the lane lines. The method detects straight and curved lane lines simultaneously; the detection network is trained with a dedicated lane line detection loss function, which improves detection quality, and the convolutional neural network learns abstract lane line features from a lane line classification data set rather than merely extracting surface appearance. Only the detection network model needs to be stored to detect new input images, which saves storage space and suits vehicle-mounted embedded devices; because the fully convolutional detection network is small and shallow, detection is fast.
Description
Technical field
The present invention relates to the field of traffic information detection, and in particular to a lane line detection method.
Background art
Intelligent driving requires perceiving and understanding the traffic environment, which includes surrounding vehicles, lane lines, traffic lights, and so on. Lane line detection plays an extremely important role in keeping the vehicle travelling within a safe zone: when the vehicle drifts too far, lane line detection can warn the driver and help adjust the driving direction, avoiding traffic accidents.
Lane line detection techniques fall broadly into three categories: detection based on color features, detection based on texture features, and detection based on multi-feature fusion. Color features divide into gray-level features and color features. For gray-level features, the gray value of lane line pixels is usually significantly higher than that of non-lane-line pixels, and foreign researchers have separated lane line pixels from background pixels by choosing a suitable threshold, thereby detecting the lane lines. Detection based on color features uses the color information in the image to detect road boundaries and lane markings. Researchers at the State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body at Hunan University used the RGB color space and the reflectance characteristics of lane lines to preferentially process white and yellow pixels, increasing the proportion of lane line pixels and improving the contrast between lane lines and the background so that lane lines can be detected. Researchers at Shanghai Jiao Tong University used the HSV color space to separate color into hue, saturation, and value, set corresponding thresholds for lane lines, classified colors accordingly, and took the dominant color in the classification result as the recognition result, thereby detecting lane lines and their types. When the amount of data is large, color-based methods often detect large background regions, and detection accuracy is low.
Texture-based detection methods collect the texture strength and texture direction of pixels within a region to meet the requirements of lane line detection, and are comparatively robust to noise. Graovac S. and Goma A. used the texture features and road structure of lane line and background regions as information sources and obtained the optimal lane line region from statistical information. Liu Fu of Jilin University applied multi-directional Gabor templates at several frequencies to the captured image, obtained the road vanishing point by voting on per-pixel texture strength and direction features, established a road equation through the vanishing point using line slopes extracted from the valid voting region, and thereby marked off road areas in unstructured roads. In addition, researchers at the School of Automation of Southeast University used multi-scale sparse coding, applying local road texture information at large scales and contextual road features at small and medium scales to partition the road region, distinguishing the road from similarly textured surroundings more effectively. Because of interference from illumination and other factors, the texture in a captured image is not necessarily the true texture of the three-dimensional surface, which limits the effectiveness of texture-based methods to some degree.
Detection methods based on multi-feature fusion improve lane line detection by combining several characteristics. Wang Ke of Hunan University segmented the road area using the lane line vanishing point and vanishing line, applied direction-following filtering to the captured image, fused features such as road texture direction, boundary parallelism, and pixel gray value into a lane line confidence function, and extracted lane lines with the Hough transform. Although multi-feature fusion methods detect well, their image processing is relatively complex and demands much of the running environment, so they are unsuitable for vehicle-mounted embedded devices.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention proposes a method that realizes lane line detection with a fully convolutional network. Addressing embedded applications, the invention builds a small, shallow convolutional neural network that saves storage space and accelerates lane line detection while preserving detection quality. The fully convolutional lane line detection network examines regions of a certain size in the input picture; the side length of each detected region is the product of the pooling kernel sizes of all pooling layers in the detection network. The invention performs a probabilistic operation on the output feature maps of the detection network to obtain the probability that each block of the input image contains a lane line, and sets a prediction probability threshold to extract and detect lane lines.
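The region-size arithmetic above can be sketched as follows. This is a minimal illustration, assuming the three 2 × 2 pooling layers of Table 1; the helper name `block_side` is ours, not the patent's.

```python
# Each unit of the detection network's output map covers a block of the input
# image whose side length is the product of the pooling kernel sizes of all
# pooling layers, as stated above.
def block_side(pool_kernel_sizes):
    side = 1
    for k in pool_kernel_sizes:
        side *= k
    return side

print(block_side([2, 2, 2]))  # -> 8
```

With the three 2 × 2 pooling layers of Table 1, each output unit therefore corresponds to an 8 × 8 block of the input image.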
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
Step 1: build the lane line classification network
The lane line classification network consists of three convolutional layers, three pooling layers, and two fully connected layers. Its input is restricted to pictures of n × n pixels that contain a lane line, and its output is the class index of the lane line in the input picture: index 0 denotes the background region, 1 a solid yellow line, 2 a dashed yellow line, 3 a solid white line, and 4 a dashed white line. In the classification network each convolutional layer is followed by a pooling layer and an activation function, and the first fully connected layer is connected to the last pooling layer and followed by an activation function. Concretely, the network is arranged in the order convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, pooling layer 3, fully connected layer 1, fully connected layer 2; convolutional layers 1, 2, 3 and fully connected layer 1 are each followed by an activation function. A loss layer and an accuracy layer are both connected to fully connected layer 2 but not to each other, and the label information is fed as a bottom input to both the loss layer and the accuracy layer. The pooling layers use MAX pooling, which takes the maximum pixel value within the pooling kernel's coverage as the pooled result and thereby reduces the dimensionality of the feature map;
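The MAX pooling operation described above can be sketched in a few lines. This is a minimal illustration for a 2 × 2 kernel with stride 2 (the configuration of Table 1), operating on a plain nested-list feature map; the function name is ours.

```python
def max_pool_2x2(feature_map):
    """MAX pooling: the maximum pixel value inside each 2x2 window becomes
    the pooled result, halving the feature map's height and width."""
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

fm = [[1, 3, 2, 0],
      [4, 2, 1, 1],
      [0, 0, 5, 6],
      [1, 2, 7, 8]]
print(max_pool_2x2(fm))  # -> [[4, 2], [2, 8]]
```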
Step 2: train the lane line classification network model
The lane line classification network built in step 1 is trained on a lane line classification data set to obtain the classification network model. The lane line labels of the original video sequences contain the class of each lane line and the pixel positions of its boundary points in the video frame. A straight line is fitted to the boundary points to obtain the boundary equations of the two edges of the lane line; coordinate points are then chosen on the two edges according to the label information to form a rectangular box, which is used to crop the lane line region at the corresponding position from the original video frame. Each cropped lane line region is saved as an n × n picture, matching the input size of the classification network of step 1. The cropped lane line pictures are assembled into a lane line classification data set in lmdb database format, split into a training set and a test set; the classification network is trained on the training set and the resulting model is evaluated on the test set, yielding the lane line classification network model;
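The straight-line fit to the boundary points can be sketched with ordinary least squares. This is an illustrative sketch only: we fit x = a·y + b (column as a function of row), which is our assumption for handling near-vertical lane boundaries, and the function name is ours.

```python
def fit_line(points):
    """Least-squares fit x = a*y + b through lane-boundary points (x, y).
    Fitting the column x as a function of the row y avoids infinite slopes
    for near-vertical lane lines."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b
```

For boundary points lying on x = 2y + 1, the fit recovers a = 2 and b = 1, giving one boundary equation of the lane line.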
Step 3: replace the fully connected layers of the classification network with convolutional layers to build the fully convolutional lane line detection network, and convert the classification network model obtained in step 2 into an initialization detection network model used to initialize the detection network. The input picture size of the classification network is n × n; see the classification network structure and the parameter settings shown in Table 1.
Table 1: lane line classification network structure and parameter configuration
The kernel size of converted convolutional layer 1, transformed from fully connected layer 1, is set to 4 × 4; the kernel size of converted convolutional layer 2, transformed from fully connected layer 2, is set to 1 × 1. The number of kernels of each converted convolutional layer equals the output count of the original fully connected layer;
The classification network model is converted into the initialization detection network model as follows: the parameter matrix of each fully connected layer in the classification network model is unrolled into a column vector, and the elements of this vector are assigned, in order, to the elements of the unrolled parameter matrix of the corresponding converted convolutional layer in the fully convolutional detection network. The parameters of the remaining layers of the detection network are taken directly from the classification network model. The result is the initialization detection network model, which serves as the initial model of the fully convolutional lane line detection network during its training;
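The fully-connected-to-convolutional parameter transfer above amounts to a reshape: each row of the FC weight matrix (one row per output unit, flattened inputs as columns) becomes one convolution kernel. A minimal sketch with nested lists, assuming channel-major flattening as in Caffe; the function name is ours.

```python
def fc_to_conv(weights, channels, kh, kw):
    """Reshape a fully connected layer's weight matrix into convolution
    kernels of size channels x kh x kw, one kernel per output unit, so the
    kernel count equals the FC layer's output count, as the method requires."""
    kernels = []
    for row in weights:
        assert len(row) == channels * kh * kw
        kernel = [[[row[c * kh * kw + i * kw + j] for j in range(kw)]
                   for i in range(kh)]
                  for c in range(channels)]
        kernels.append(kernel)
    return kernels
```

For fully connected layer 1 of Table 1 this would produce 64 kernels of size 64 × 4 × 4; the sliding of those kernels is what lets the detection network accept pictures larger than n × n.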
Step 4: train the lane line detection network model
The parameters of each layer of the fully convolutional lane line detection network are assigned from the initialization detection network model obtained in step 3, completing the initialization of the detection network, which is then trained on the detection data set with the lane line detection loss. Because the detection task must identify both the class of a lane line and its position in the picture, the detection loss comprises a classification loss and a regression loss, the regression loss being a position loss. The lane line detection loss L is defined in formula (1):
L = α·L_C + β·L_R (1)
where α is the weight of the classification loss within the detection loss, β is the weight of the regression loss within the detection loss, L_C is the classification loss, and L_R is the regression loss;
The classification loss measures the discrepancy between the predicted labels and the ground truth, and is defined in formula (2):
L_C = -(1/M) Σ_{i=1..M} Σ_{k=0..K-1} Σ_{h=1..H} Σ_{w=1..W} g(i,k,h,w) · log p(i,k,h,w) (2)
where M is the number of input pictures to the detection network; K is the number of channels of the label matrix, equal to the total number of lane line classes including the background region; H is the height of the feature map output by the last convolutional layer of the network and W is its width, matching the height and width of the sub-matrix in each channel of the label matrix. g(i,k,h,w) is the label value at (i,k,h,w) in the ground-truth label array, i.e. whether position (h,w) of the feature map of the i-th input picture has label class k after convolution. Values in the label array are 0 or 1: "0" means the label class at (h,w) is not k, and "1" means it is. k = 0 denotes background, 1 a solid yellow line, 2 a dashed yellow line, 3 a solid white line, and 4 a dashed white line. p(i,k,h,w) is the predicted probability of class k at position (h,w) of the feature map of the i-th input picture after convolution, a value in (0, 1]. The detection loss layer converts the input feature map into a prediction probability matrix with the Softmax algorithm; the prediction probability of each pixel is computed as in formula (3):
p(i,c,h,w) = e^{y(i,c,h,w)} / Σ_{k=0..4} e^{y(i,k,h,w)} (3)
where y(i,c,h,w) = y'(i,c,h,w) − max_k(y'(i,k,h,w)), k ∈ {0,1,2,3,4}; y'(i,c,h,w) is the value of the pixel in channel c at position (h,w) of the i-th convolutional feature map, and max_k(y'(i,k,h,w)) is the maximum pixel value over the five channels at position (h,w) of the i-th feature map; k indexes the traversal of the feature-map channels, and since every feature map has 5 channels, k takes values in {0,1,2,3,4};
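The Softmax computation described above, including the subtraction of the per-position channel maximum, can be sketched for a single (h, w) position. This is an illustrative sketch; the function name is ours.

```python
import math

def softmax_pixel(scores):
    """Convert the 5 channel scores y' at one (h, w) position into prediction
    probabilities. Subtracting the channel maximum first, as the method
    specifies, leaves the result unchanged but keeps the exponentials
    numerically stable."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs always sum to 1, and the channel with the largest score receives the largest probability, which is what the thresholding step below relies on.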
The regression loss measures the discrepancy between the lane line positions predicted by the detection network and the lane line positions in the label data. The prediction probabilities of formula (3) determine where lane lines appear in the feature map, and these positions are compared with the label positions to compute the regression loss. The comparison proceeds as follows: for a given row of the feature map, the column positions predicted to contain a lane line in that row are stored in the vector P, the prediction position vector, and the column positions containing a lane line in the corresponding row of the label data are stored in the vector L, the label position vector (a column position is simply an abscissa). The L2 loss between P and L is then the regression loss of that row, and the regression loss of the output is the average of the regression losses of all rows of the feature maps, computed as in formula (4):
L_R = (1/(M·K·H)) Σ_{i=1..M} Σ_{k=0..K-1} Σ_{h=1..H} ||D(j(i,k,h) − g'(i,k,h))||² (4)
where D(j(i,k,h) − g'(i,k,h)) is the vector obtained as the difference between the prediction position vector j(i,k,h) and the label position vector g'(i,k,h); j(i,k,h) is the set of column positions in row h of the output feature map of the i-th picture whose class is k, i.e. the prediction position vector. The prediction probability p(i,k,h,w) of each pixel of the feature map is compared with the prediction probability threshold and the comparison result is recorded as t(i,k,h,w): when p(i,k,h,w) exceeds the threshold, t(i,k,h,w) is 1, otherwise 0; whenever t(i,k,h,w) = 1, w is stored in j(i,k,h). t(i,k,h,w) is defined in formula (5):
t(i,k,h,w) = 1 if p(i,k,h,w) > p_t, and 0 otherwise (5)
where p_t is the prediction probability threshold used to decide whether the current pixel belongs to lane line class k: t(i,k,h,w) = 1 means position (h,w) of the i-th feature map is classified as lane line class k, and t(i,k,h,w) = 0 means position (h,w) does not belong to lane line class k (k = 0 denotes the background region). g'(i,k,h) is the label position vector, obtained similarly to j(i,k,h) except that the label data of the detection set already give label probabilities of 0 or 1, so the label data g(i,k,h,w) are tested directly: if g(i,k,h,w) is 1, w is stored in g'(i,k,h); if g(i,k,h,w) is 0, w is not stored;
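Building the prediction position vector for one row reduces to thresholding that row's probabilities, as described above. A minimal sketch; the function name is ours.

```python
def position_vector(prob_row, threshold):
    """Columns of one feature-map row whose class-k probability exceeds the
    prediction probability threshold p_t are recorded as predicted lane
    positions, i.e. the columns w where t(i,k,h,w) = 1."""
    return [w for w, p in enumerate(prob_row) if p > threshold]

print(position_vector([0.1, 0.8, 0.3, 0.9], 0.5))  # -> [1, 3]
```

The label position vector g'(i,k,h) is built the same way, except the "probabilities" are the 0/1 label values themselves.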
||D(j(i,k,h) − g'(i,k,h))||² denotes the L2 loss between the prediction position vector j(i,k,h) and the label position vector g'(i,k,h), i.e. the squared norm of the vector D(j(i,k,h) − g'(i,k,h)). Its calculation splits into the following four cases, an element being a piece of lane line information:
● j(i,k,h) has no element and g'(i,k,h) has no element: neither the prediction position vector nor the label position vector contains a lane line, so ||D(j(i,k,h) − g'(i,k,h))||² = 0;
● j(i,k,h) has no element and g'(i,k,h) has elements: computed as in formula (6);
● j(i,k,h) has elements and g'(i,k,h) has no element: computed as in formula (7);
● j(i,k,h) has elements and g'(i,k,h) has elements: computed as in formula (8);
In formulas (6) to (8), whenever the prediction position vector j(i,k,h) has elements, w denotes an element of j(i,k,h); when only the label position vector g'(i,k,h) has elements, w denotes an element of g'(i,k,h). W denotes the width of the feature map output by the last convolutional layer of the network. In formula (8), w'' is an arbitrary element of the label position vector g'(i,k,h) and w' is the element of g'(i,k,h) whose absolute difference from w is no larger than that of any other element of g'(i,k,h): all elements w'' of g'(i,k,h) are traversed, and the element with the smallest absolute difference from w is taken as w'. For row coordinates that appear in neither j(i,k,h) nor g'(i,k,h), the regression loss of the corresponding points of the output feature map is set to 0;
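The four-case row loss above can be sketched as follows. The empty/empty and matched cases follow the text directly; penalizing each unmatched element by the feature-map width W is our assumed reading of formulas (6) and (7), which reference W but whose exact form is not reproduced here. The function name is ours.

```python
def row_regression_loss(pred, label, width):
    """L2 loss between one row's prediction position vector and label
    position vector. Both empty -> 0; a vector with no counterpart is
    assumed to cost width**2 per element (our reading of formulas (6)-(7));
    otherwise each predicted column w is matched to the nearest label
    column w', as formula (8)'s traversal prescribes."""
    if not pred and not label:
        return 0
    if not pred:
        return len(label) * width ** 2
    if not label:
        return len(pred) * width ** 2
    return sum(min((w - wl) ** 2 for wl in label) for w in pred)
```

For example, predictions [3, 10] against labels [4, 12] match 3 to 4 and 10 to 12, giving 1 + 4 = 5.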
The present invention trains the fully convolutional lane line detection network with the back-propagation (BP) algorithm; the network is updated using the derivative of the lane line detection loss, and the update gradient is computed as in formula (9):
∂L/∂y(i,c,h,w) = α·∂L_C/∂y(i,c,h,w) + β·∂L_R/∂y(i,c,h,w) (9)
Within the update gradient, the derivative of the classification loss is computed as in formula (10):
∂L_C/∂y(i,c,h,w) = (1/M)·(p(i,c,h,w) − g(i,c,h,w)) (10)
where C is the total number of channels of the feature map output by the last convolutional layer of the network and c is the channel position within that feature map;
Following the case analysis in the definition of ||D(j(i,k,h) − g'(i,k,h))||², the derivative of the regression loss is computed case by case:
● j(i,k,h) has no element and g'(i,k,h) has no element: formula (12);
● j(i,k,h) has no element and g'(i,k,h) has elements: formula (13);
● j(i,k,h) has elements and g'(i,k,h) has no element: formula (14);
● j(i,k,h) has elements and g'(i,k,h) has elements: formula (15);
In formulas (12) to (15), whenever the prediction position vector j(i,k,h) has elements, w denotes an element of j(i,k,h); when only the label position vector g'(i,k,h) has elements, w denotes an element of g'(i,k,h); W denotes the width of the feature map output by the last convolutional layer of the network. In formula (15), w'' is an arbitrary element of the label position vector g'(i,k,h) and w' is the element of g'(i,k,h) whose absolute difference from w is no larger than that of any other element of g'(i,k,h): all elements w'' of g'(i,k,h) are traversed, and the element with the smallest absolute difference from w is taken as w'. For row coordinates that appear in neither j(i,k,h) nor g'(i,k,h), the derivative of the regression loss part of the corresponding points of the output feature map is set to 0;
The process of computing the detection loss serves as the forward pass of the detection loss layer, and the process of computing the derivative of the lane line detection loss serves as its error back-propagation pass; the weight of the classification loss, the weight of the regression loss, and the prediction probability threshold are the layer parameters of the detection loss layer. With these layer parameters set, the fully convolutional lane line detection network is trained on the detection data set with the back-propagation (BP) algorithm to obtain the lane line detection network model, which is then used to detect lane lines.
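At inference time, the detection described above reduces to thresholding the per-class probability maps produced by the trained network. A minimal sketch of that final step, assuming nested-list probability maps with the background in channel 0; the function name is ours.

```python
def detect_lane_blocks(prob_maps, threshold):
    """Scan the per-class probability maps output by the detection network
    and return (class_k, h, w) for every block whose probability exceeds the
    prediction probability threshold; class 0 (background) is skipped."""
    hits = []
    for k, pm in enumerate(prob_maps):
        if k == 0:
            continue
        for h, row in enumerate(pm):
            for w, p in enumerate(row):
                if p > threshold:
                    hits.append((k, h, w))
    return hits
```

Each returned (k, h, w) names a lane line class and the block of the input image it occupies, which is the extraction-and-detection result the method produces.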
The beneficial effects of the invention are that straight and curved lane lines are detected simultaneously, and training the fully convolutional detection network with the lane line detection loss function improves detection quality. Compared with traditional lane line detection methods, the original captured image is used directly as input, saving complex image preprocessing; the convolutional neural network learns abstract lane line features from the lane line classification data set rather than merely extracting surface features; only the detection network model needs to be stored to detect new input images, which saves storage space and suits vehicle-mounted embedded devices; and the small, shallow, fully convolutional detection network accelerates detection, so detection is fast.
Brief description of the drawings
Fig. 1 is a structural diagram of the lane line classification network of the invention.
Fig. 2 is a structural diagram of the fully convolutional lane line detection network of the invention.
Fig. 3 is the overall flow chart of the invention.
Embodiment
The present invention is further described below with reference to the accompanying drawings and examples.
The present example follows the flow of Fig. 3. First, the lane line classification network is built and trained on the classification data set to obtain the lane line classification network model. That model is then converted into the initialization detection network model to initialize the fully convolutional lane line detection network, which is trained on the detection data set with the lane line detection loss defined above to obtain the lane line detection network model. The example uses the Caffe framework as the experimental platform: the lane line classification network is built and trained on the classification data set in Caffe; the fully connected layers of the classification network are replaced with convolutional layers to build the fully convolutional detection network; and the detection loss layer is implemented in Caffe according to the definition of the lane line detection loss. With the parameters of the detection loss layer set, the detection network is trained on the detection data set to obtain the lane line detection network model.
Table 1: lane line classification network structure and parameter configuration
Network layer | Kernel number | Kernel size | Stride | Zero padding
Convolutional layer 1 | 32 | 5×5 | 1 | 2
Activation function 1 | 32 | -- | -- | --
Pooling layer 1 | 32 | 2×2 | 2 | 0
Convolutional layer 2 | 32 | 5×5 | 1 | 2
Activation function 2 | 32 | -- | -- | --
Pooling layer 2 | 32 | 2×2 | 2 | 0
Convolutional layer 3 | 64 | 3×3 | 1 | 1
Activation function 3 | 64 | -- | -- | --
Pooling layer 3 | 64 | 2×2 | 2 | 0
Fully connected layer 1 | 64 | -- | -- | --
Activation function 4 | -- | -- | -- | --
Fully connected layer 2 | 5 | -- | -- | --
Loss layer | -- | -- | -- | --
Accuracy layer | -- | -- | -- | --
The convolution kernel for the conversion convolutional layer 1 being transformed by full articulamentum 1 is sized to 4 × 4, by full articulamentum 2
The convolution kernel for the conversion convolutional layer 2 being transformed is sized to 1 × 1, the convolution of the convolutional layer converted by full articulamentum
Core number and the output number of former full articulamentum are consistent;
The steps for converting the lane line classification network model into the initialization detection network model are:
The parameter matrix of each fully connected layer in the lane line classification network model is unrolled into a column vector, and the element values of that column vector are assigned, in order, to the elements of the unrolled parameter-matrix column vector of the corresponding converted convolutional layer in the fully convolutional lane line detection network. The parameters of the remaining layers of the fully convolutional lane line detection network are taken directly from the classification network model. This yields the initialization detection network model, which serves as the initial model of the fully convolutional lane line detection network and is applied to its training process;
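The parameter copy described above can be sketched as follows (a NumPy stand-in for the Caffe model surgery, with illustrative shapes matching Table 1): a fully connected weight matrix of shape (out, in), with in = channels × k × k, reshaped into a convolution kernel of shape (out, channels, k, k) computes the same response at a single spatial position.

```python
import numpy as np

rng = np.random.default_rng(0)
channels, k, out = 64, 4, 64           # pool3 output: 64 maps of 4 x 4; fc1: 64 outputs
fc_w = rng.standard_normal((out, channels * k * k))   # fc1 parameter matrix
conv_w = fc_w.reshape(out, channels, k, k)            # converted 4 x 4 conv kernels

x = rng.standard_normal((channels, k, k))             # one pool3 feature map
fc_out = fc_w @ x.ravel()                             # fully connected response
conv_out = (conv_w * x).sum(axis=(1, 2, 3))           # 4 x 4 convolution at one position
assert np.allclose(fc_out, conv_out)                  # identical responses
```

Applied over a larger input, the converted layer slides the same kernel spatially, which is what turns the classifier into a detector.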
4th step: Training the lane line detection network model
The parameters of each network layer in the fully convolutional lane line detection network are assigned from the corresponding parameters of the initialization detection network model obtained in the 3rd step, completing the initialization of the detection network, and the fully convolutional lane line detection network is trained on the detection data set using the lane line detection loss. Since the lane line detection task must identify both the class of a lane line and its position in the picture, the detection loss comprises a classification loss and a regression loss, where the regression loss is a position loss. The lane line detection loss L is defined as shown in formula (1):

L = αL_C + βL_R    (1)

Wherein α is the proportionality coefficient of the classification loss in the detection loss, β is the proportionality coefficient of the regression loss in the detection loss, L_C is the classification loss, and L_R is the regression loss;
The classification loss represents the loss between the prediction and the ground-truth labels, and is defined as shown in formula (2):

$$L_C = \frac{1}{M \times H \times W} \sum_{i=1}^{M} \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{w=1}^{W} \left\| p(i,k,h,w) - g(i,k,h,w) \right\|^2 \qquad (2)$$

Wherein M is the number of pictures input to the detection network; K is the number of channels of the label matrix, equal to the total number of classes (lane lines plus background); H and W are the height and width of the feature map output by the final convolutional layer of the network, equal to the height and width of the sub-matrix in each channel of the label matrix. g(i,k,h,w) is the label value at position (i,k,h,w) in the ground-truth label array, i.e., the probability that the label class at (h,w) on the post-convolution feature map of the i-th input picture is k; its value is 0 or 1, where "0" means the label class at (h,w) is not k and "1" means it is k. When k is 0 it denotes the background region; k = 1 denotes a yellow solid line, k = 2 a yellow dashed line, k = 3 a white solid line, and k = 4 a white dashed line. p(i,k,h,w) is the predicted probability of class k at (h,w) on the post-convolution feature map of the i-th input picture, a value in the interval (0,1]. The detection loss layer converts the input feature maps into a prediction probability matrix using the Softmax algorithm; the prediction probability of each pixel on the feature map is computed as shown in formula (3):
Wherein y(i,c,h,w) = y'(i,c,h,w) − max(y'(i,k,h,w)), k ∈ {0,1,2,3,4}; y'(i,c,h,w) is the value of the pixel of channel c at position (h,w) of the i-th input convolution feature map, and max(y'(i,k,h,w)) is the maximum pixel value among the five channels at position (h,w) of the i-th feature map. Here k is the channel index used when traversing the feature-map channels; since each feature map contains 5 channels, k takes values in {0,1,2,3,4};
The regression loss represents the loss between the lane line positions predicted by the detection network and the lane line positions in the label data. The prediction probabilities of formula (3) are used to judge the positions of lane lines in the feature map, which are then compared with the lane line positions in the label data to compute the regression loss. The detailed steps of this comparison are:
Select a row of the feature map and store the column positions predicted to contain a lane line in that row in a vector P, the predicted position vector; store the column positions containing a lane line in the corresponding row of the input label data in a vector L, the label position vector (a column position is an abscissa). Then solve the L2 loss between P and L to obtain the regression loss of that row of the feature map. The output regression loss is the average of the regression losses summed over all rows of the feature map, computed as shown in formula (4):

$$L_R = \frac{1}{M \times H \times W} \sum_{i=1}^{M} \sum_{k=1}^{K} \sum_{h=1}^{H} \left\| D(j(i,k,h) - g'(i,k,h)) \right\|^2 \qquad (4)$$
Wherein D(j(i,k,h) − g'(i,k,h)) is the vector obtained as the difference between the predicted position vector j(i,k,h) and the label position vector g'(i,k,h); j(i,k,h) is the set of column positions in row h of the output feature map of the i-th picture whose class is k, i.e., the predicted position vector. The prediction probability p(i,k,h,w) of each pixel of the feature map is compared with the prediction probability threshold, and the comparison result is denoted t(i,k,h,w): when p(i,k,h,w) is not less than the prediction probability threshold, t(i,k,h,w) is set to 1; otherwise it is set to 0. If t(i,k,h,w) = 1, then w is stored in j(i,k,h). The definition of t(i,k,h,w) is shown in formula (5):

$$t(i,k,h,w) = \begin{cases} 1 & p(i,k,h,w) \ge p_t \\ 0 & p(i,k,h,w) < p_t \end{cases} \qquad (5)$$
Wherein p_t is the prediction probability threshold, used to judge whether the current pixel belongs to lane line class k. t(i,k,h,w) being "1" means that position (h,w) on the i-th feature map is classified as lane line class k; t(i,k,h,w) being "0" means that (h,w) does not belong to lane line class k; k = 0 denotes the background region. g'(i,k,h) is the label position vector, obtained similarly to j(i,k,h); the difference is that the label data in the detection data set already give label probabilities of 0 or 1, so the label data g(i,k,h,w) are discriminated between 0 and 1 directly: if g(i,k,h,w) is 1, then w is saved into g'(i,k,h); if g(i,k,h,w) is 0, then w is not saved;
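The construction of the two position vectors can be sketched for a single row as follows (the values are illustrative): threshold the predicted probabilities with p_t per formula (5) to obtain j, and collect the label columns where g equals 1 to obtain g'.

```python
import numpy as np

p_t = 0.8                                           # prediction probability threshold
p_row = np.array([0.1, 0.9, 0.85, 0.2, 0.05, 0.95]) # p(i,k,h,:) for one row h
g_row = np.array([0, 1, 1, 0, 0, 0])                # g(i,k,h,:) labels, 0 or 1

t_row = (p_row >= p_t).astype(int)                  # t(i,k,h,w) of formula (5)
j = np.flatnonzero(t_row)                           # predicted position vector j(i,k,h)
g_prime = np.flatnonzero(g_row)                     # label position vector g'(i,k,h)

print(j.tolist(), g_prime.tolist())                 # [1, 2, 5] [1, 2]
```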
||D(j(i,k,h)-g'(i,k,h))||2Predicted position vector j (i, k, h) and label position vector g'(i are represented,
K, h) between L2 loss, | | D (j (i, k, h)-g'(i, k, h)) | |2Calculating be divided into following four situation, the element is
The information of lane line:
● there is no element, g'(i, k, h in j (i, k, h)) in there is no element:Illustrate predicted position vector with label position to
Amount all occurs without lane line, then | | D (j (i, k, h)-g'(i, k, h)) | |2=0;
● there is no element, g'(i, k, h in j (i, k, h)) in have element:
● have element, g'(i, k, h in j (i, k, h)) in there is no element:
● have element, g'(i, k, h in j (i, k, h)) in have element:
Formula (6) is into formula (8), as long as predicted position vector j (i, k, h) has element, then w represent predicted position vector j (i,
K, h) in element, such as only label position vector g'(i, k, h) have element, then w represents label position vector g'(i, k, h) in
Element, W represents the width of network end-point convolutional layer output characteristic figure, and in formula (8), w " is label position vector g'(i, k, h)
In arbitrary element, w' is label position vector g'(i, k, h) element, and w' and w values make difference gained difference absolute value it is equal
Less than label position vector g'(i, k, h) in other elements and w values make the absolute value of difference obtained by difference, by label vector g'
Arbitrary element w " in (i, k, h) is traveled through, and finds g'(i, k, h) in make with w values difference gained difference absolute value it is minimum
Element, is w', in j (i, k, h) and g'(i, k, h) in the row coordinate that did not occur, network end-point convolutional layer is defeated
The recurrence loss section for going out corresponding point in characteristic pattern is split as 0;
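The four-case row loss can be sketched as follows. Note that the both-non-empty case here follows the textual description of formula (8) only — each predicted column w matched to the nearest label element w' — since that formula is not reproduced above; the function name `row_l2` is introduced for illustration.

```python
import numpy as np

def row_l2(j, g_prime, W):
    """||D(j - g')||^2 for one feature-map row; columns normalized by width W."""
    j, g_prime = np.asarray(j, float), np.asarray(g_prime, float)
    if j.size == 0 and g_prime.size == 0:           # neither predicts a lane line
        return 0.0
    if j.size == 0:                                 # missed labels: formula (6)
        return sum(min(w / W, 1 - w / W) ** 2 for w in g_prime)
    if g_prime.size == 0:                           # spurious predictions: formula (7)
        return sum(min(w / W, 1 - w / W) ** 2 for w in j)
    # both non-empty: match each prediction w to its nearest label w' (formula (8))
    return sum(((w - g_prime[np.argmin(np.abs(g_prime - w))]) / W) ** 2 for w in j)

assert row_l2([], [], 32) == 0.0
assert row_l2([4], [4], 32) == 0.0                  # exact match costs nothing
assert row_l2([6], [4], 32) == (2 / 32) ** 2        # off by two columns
```

Since 0 ≤ w ≤ W, the case split of formulas (6) and (7) simply selects the smaller of (w/W)² and (1 − w/W)², which is how the sketch writes it.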
The present invention trains the fully convolutional lane line detection network according to the back-propagation (BP) algorithm, and updates the network using the derivative of the lane line detection loss; the network update gradient is computed as shown in formula (9):
In the update gradient, the derivative of the classification loss is computed as shown in formula (10):
C is the total number of channels of the feature map output by the final convolutional layer of the network, and c is a channel position of that feature map;
According to the case-by-case definition of ||D(j(i,k,h) − g'(i,k,h))||², the derivative of the regression loss is computed as follows:
● j(i,k,h) has no elements and g'(i,k,h) has no elements:
● j(i,k,h) has no elements and g'(i,k,h) has elements:
● j(i,k,h) has elements and g'(i,k,h) has no elements:
● j(i,k,h) has elements and g'(i,k,h) has elements:
In formulas (12) to (15), when only the predicted position vector j(i,k,h) has elements, w denotes an element of j(i,k,h); when only the label position vector g'(i,k,h) has elements, w denotes an element of g'(i,k,h); W is the width of the feature map output by the final convolutional layer of the network. In formula (15), w″ is an arbitrary element of the label position vector g'(i,k,h), and w' is the element of g'(i,k,h) whose absolute difference from w is not larger than that of any other element of g'(i,k,h). For row coordinates that appear in neither j(i,k,h) nor g'(i,k,h), the derivative of the regression loss part of the corresponding points in the feature map output by the final convolutional layer of the network is set to 0;
The process of computing the detection loss serves as the forward-propagation process of the detection loss layer, and the process of computing the derivative of the lane line detection loss serves as its error back-propagation process. The proportionality coefficient of the classification loss, the proportionality coefficient of the regression loss, and the prediction probability threshold serve as the layer parameters of the detection loss layer. By setting these layer parameters, the fully convolutional lane line detection network is trained on the detection data set using the back-propagation (BP) algorithm, yielding the lane line detection network model, with which the detection of lane lines is realized.
The present embodiment comprises the following steps:
The first step: Build the lane line classification network. The lane line classification network is built in the Caffe framework; its structure is shown in Figure 1, and the parameter settings of each network layer are shown in Table 1.
Second step: Train the lane line classification network model. The present embodiment trains the lane line classification network on the lane line classification data set; the training set and the test set both use pictures of 32 × 32 pixels, and the ratio of the number of training-set pictures to test-set pictures is 5:1. The present embodiment trains the lane line classification network with the following strategy: the network receives 1000 pictures per input, and is tested once on the test set after the whole training set has been input; the initial learning rate of training is set to 0.001 and is reduced by multiplying it by 0.1 every 200 epochs of training. The present embodiment trains the network for 1000 epochs to obtain the lane line classification network model; the obtained model's classification accuracy exceeds 92% for the background region and for every class of lane line, a high accuracy and a good result.
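The training schedule of the second step (initial rate 0.001, multiplied by 0.1 every 200 epochs, 1000 epochs in total) can be written, for example, as the following step-decay function; this is one reading of the schedule, not code from the patent:

```python
def learning_rate(epoch, base_lr=0.001, gamma=0.1, step=200):
    """Step-decay schedule: multiply base_lr by gamma every `step` epochs."""
    return base_lr * gamma ** (epoch // step)

assert learning_rate(0) == 0.001        # epochs 0..199 use the initial rate
assert learning_rate(199) == 0.001
assert learning_rate(200) == 0.001 * 0.1
```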
3rd step: Revise the fully connected layers in the lane line classification network into convolutional layers to construct the fully convolutional lane line detection network model, and convert the classification network model obtained in the second step into the initialization detection network model. The structure of the fully convolutional lane line detection network is shown in Fig. 2, and each layer's parameter settings are shown in Table 2.
Table 2. Fully convolutional lane line detection network structure and parameter configuration
4th step: Train the lane line detection network model. The present embodiment writes the lane line detection loss layer in Caffe, sets the proportionality coefficient of the classification loss in the detection loss layer to 0.5, the coefficient of the regression loss to 0.5, and the prediction probability threshold to 0.8, and initializes the parameters of the detection network with the initialization detection network model obtained by converting the lane line classification network model. The fully convolutional lane line detection network is trained on the detection data set; the initial learning rate of training is set to 0.00001 and remains unchanged throughout the training process. The network receives 10 pictures per input and is trained for 100 epochs in total to obtain the lane line detection network model, with which the detection of lane lines is realized.
The detection results of the initialization detection network model obtained by converting the lane line classification network model contain a large number of spurious points (points of the background region detected as lane line region), and these spurious points severely interfere with the lane line fitting of the next step. Compared with the initialization detection network model, the fully convolutional detection network model obtained by training has a weakened ability to detect the interior region of a lane line, but it can still detect the boundary points of the lane line. In particular, the lane line detection network model eliminates a large number of spurious points, reducing the complexity of the subsequent lane line fitting. Contrasting the detection results of the initialization detection network model and the lane line detection network model shows that the regression loss part of the detection loss function defined in the present invention can correct the detected positions of lane lines and improve the detection result.
The present invention fits the extracted lane line region points with a conic (second-order curve) model. The lane line detection network model detects lane lines well under good road conditions; under poor road conditions, with wear, reflections, or occlusion by vehicles, the detection result is unsatisfactory. Since the features of a solid line and a dashed line of the same color are close, deep learning technology cannot accurately distinguish them, so the lane line detection network model occasionally misjudges between a solid line and a dashed line of the same color.
The lane line detection network model detects pictures of size 1024 × 1280 in 54.57 ms on average (a new input picture is detected by performing only the forward-propagation process of the network), about 18 FPS, a fast detection speed. Moreover, the lane line detection network model is only 440 kB in size and occupies little memory, making it suitable for vehicle-mounted embedded equipment.
In short, the lane line detection network model accomplishes the lane line detection task, occupies little memory, and detects quickly enough to meet the real-time requirement of the application, achieving the aim of the invention.
Claims (1)
1. A lane line detection method based on a fully convolutional network, characterized by comprising the following steps:
The first step: Build the lane line classification network
The lane line classification network consists of three convolutional layers, three pooling layers and two fully connected layers. Its input is limited to pictures of n × n pixels containing a lane line, and its output is the class number of the lane line contained in the input picture: number 0 denotes the background region, number 1 a yellow solid line, number 2 a yellow dashed line, number 3 a white solid line, and number 4 a white dashed line. In the lane line classification network, each convolutional layer is followed by a pooling layer, each convolutional layer is connected with an activation function, and the first fully connected layer is connected with the last pooling layer and with an activation function. That is, the concrete structure of the lane line classification network is convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, pooling layer 3, fully connected layer 1 and fully connected layer 2 connected in sequence, with convolutional layer 1, convolutional layer 2, convolutional layer 3 and fully connected layer 1 each connected with an activation function. The loss layer and the accuracy layer are both connected with fully connected layer 2 but not with each other, and the label information must be input as a bottom connected to the loss layer and the accuracy layer. The pooling layers of the lane line classification network use the MAX pooling mode, which takes the maximum pixel value within the range covered by the pooling kernel as the pooling result, reducing the dimensionality of the feature map;
Second step: Training the lane line classification network model
The lane line classification network built in the first step is trained on the lane line classification data set to obtain the lane line classification network model. The lane line label information in the original video sequences includes the lane line class information and the pixel positions of the lane line boundary points in the video frames. Straight lines are fitted to the boundary-point positions of a lane line to obtain the boundary equations of its two edges; coordinate points are then chosen on the two edges according to the label information to form a rectangular frame, with which the lane line region at the corresponding position in the original video sequence is cropped out. The cropped lane line region is saved as a picture of n × n pixels, consistent with the input picture size of the classification network in the first step. The cropped lane line region pictures are made into the lane line classification data set in lmdb database format, which includes a training set and a test set; the lane line classification network is trained on the training set, the effect of the obtained model is tested on the test set, and the lane line classification network model is obtained;
3rd step: Revise the fully connected layers in the lane line classification network into convolutional layers to build the fully convolutional lane line detection network, and convert the lane line classification network model obtained in the second step into the initialization detection network model, used to initialize the fully convolutional lane line detection network. The input picture size of the lane line classification network is n × n pixels; refer to the lane line classification network structure and the classification network parameter settings shown in Table 1;
Table 1. Lane line classification network structure and parameter configuration
The convolution kernel size of converted convolutional layer 1, obtained from fully connected layer 1, is set to 4 × 4; the convolution kernel size of converted convolutional layer 2, obtained from fully connected layer 2, is set to 1 × 1. The number of convolution kernels of each converted convolutional layer equals the output number of the original fully connected layer;
The steps for converting the lane line classification network model into the initialization detection network model are:
The parameter matrix of each fully connected layer in the lane line classification network model is unrolled into a column vector, and the element values of that column vector are assigned, in order, to the elements of the unrolled parameter-matrix column vector of the corresponding converted convolutional layer in the fully convolutional lane line detection network. The parameters of the remaining layers of the fully convolutional lane line detection network are taken directly from the classification network model. This yields the initialization detection network model, which serves as the initial model of the fully convolutional lane line detection network and is applied to its training process;
4th step: Training the lane line detection network model
The parameters of each network layer in the fully convolutional lane line detection network are assigned from the corresponding parameters of the initialization detection network model obtained in the 3rd step, completing the initialization of the detection network, and the fully convolutional lane line detection network is trained on the detection data set using the lane line detection loss. Since the lane line detection task must identify both the class of a lane line and its position in the picture, the detection loss comprises a classification loss and a regression loss, where the regression loss is a position loss. The lane line detection loss L is defined as shown in formula (1):

L = αL_C + βL_R    (1)

Wherein α is the proportionality coefficient of the classification loss in the detection loss, β is the proportionality coefficient of the regression loss in the detection loss, L_C is the classification loss, and L_R is the regression loss;
The classification loss represents the loss between the prediction and the ground-truth labels, and is defined as shown in formula (2):
$$L_C = \frac{1}{M \times H \times W} \sum_{i=1}^{M} \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{w=1}^{W} \left\| p(i,k,h,w) - g(i,k,h,w) \right\|^2 \qquad (2)$$
Wherein M is the number of pictures input to the detection network; K is the number of channels of the label matrix, equal to the total number of classes (lane lines plus background); H and W are the height and width of the feature map output by the final convolutional layer of the network, equal to the height and width of the sub-matrix in each channel of the label matrix. g(i,k,h,w) is the label value at position (i,k,h,w) in the ground-truth label array, i.e., the probability that the label class at (h,w) on the post-convolution feature map of the i-th input picture is k; its value is 0 or 1, where "0" means the label class at (h,w) is not k and "1" means it is k. When k is 0 it denotes the background region; k = 1 denotes a yellow solid line, k = 2 a yellow dashed line, k = 3 a white solid line, and k = 4 a white dashed line. p(i,k,h,w) is the predicted probability of class k at (h,w) on the post-convolution feature map of the i-th input picture, a value in the interval (0,1]. The detection loss layer converts the input feature maps into a prediction probability matrix using the Softmax algorithm; the prediction probability of each pixel on the feature map is computed as shown in formula (3):
$$p(i,k,h,w) = \frac{\exp\{y(i,k,h,w)\}}{\sum_{c=1}^{K} \exp\{y(i,c,h,w)\}} \qquad (3)$$
Wherein y(i,c,h,w) = y'(i,c,h,w) − max(y'(i,k,h,w)), k ∈ {0,1,2,3,4}; y'(i,c,h,w) is the value of the pixel of channel c at position (h,w) of the i-th input convolution feature map, and max(y'(i,k,h,w)) is the maximum pixel value among the five channels at position (h,w) of the i-th feature map. Here k is the channel index used when traversing the feature-map channels; since each feature map contains 5 channels, k takes values in {0,1,2,3,4};
The regression loss represents the loss between the lane line positions predicted by the detection network and the lane line positions in the label data. The prediction probabilities of formula (3) are used to judge the positions of lane lines in the feature map, which are then compared with the lane line positions in the label data to compute the regression loss. The detailed steps of this comparison are:
Select a row of the feature map and store the column positions predicted to contain a lane line in that row in a vector P, the predicted position vector; store the column positions containing a lane line in the corresponding row of the input label data in a vector L, the label position vector (a column position is an abscissa). Then solve the L2 loss between P and L to obtain the regression loss of that row of the feature map. The output regression loss is the average of the regression losses summed over all rows of the feature map, computed as shown in formula (4):
$$L_R = \frac{1}{M \times H \times W} \sum_{i=1}^{M} \sum_{k=1}^{K} \sum_{h=1}^{H} \left\| D(j(i,k,h) - g'(i,k,h)) \right\|^2 \qquad (4)$$
Wherein D(j(i,k,h) − g'(i,k,h)) is the vector obtained as the difference between the predicted position vector j(i,k,h) and the label position vector g'(i,k,h); j(i,k,h) is the set of column positions in row h of the output feature map of the i-th picture whose class is k, i.e., the predicted position vector. The prediction probability p(i,k,h,w) of each pixel of the feature map is compared with the prediction probability threshold, and the comparison result is denoted t(i,k,h,w): when p(i,k,h,w) is not less than the prediction probability threshold, t(i,k,h,w) is set to 1; otherwise it is set to 0. If t(i,k,h,w) = 1, then w is stored in j(i,k,h). The definition of t(i,k,h,w) is shown in formula (5):
$$t(i,k,h,w) = \begin{cases} 1 & p(i,k,h,w) \ge p_t \\ 0 & p(i,k,h,w) < p_t \end{cases} \qquad (5)$$
Wherein p_t is the prediction probability threshold, used to judge whether the current pixel belongs to lane line class k. t(i,k,h,w) being "1" means that position (h,w) on the i-th feature map is classified as lane line class k; t(i,k,h,w) being "0" means that (h,w) does not belong to lane line class k; k = 0 denotes the background region. g'(i,k,h) is the label position vector, obtained similarly to j(i,k,h); the difference is that the label data in the detection data set already give label probabilities of 0 or 1, so the label data g(i,k,h,w) are discriminated between 0 and 1 directly: if g(i,k,h,w) is 1, then w is saved into g'(i,k,h); if g(i,k,h,w) is 0, then w is not saved;
||D(j(i,k,h)-g'(i,k,h))||2Represent predicted position vector j (i, k, h) and label position vector g'(i, k, h)
Between L2 loss, i.e. vector D (j (i, k, h)-g'(i, k, h)) mould square, | | D (j (i, k, h)-g'(i, k, h)) | |2's
Calculating is divided into following four situation, and the element is the information of lane line:
● there is no element, g'(i, k, h in j (i, k, h)) in there is no element:Illustrate predicted position vector and label position vector all
There is no lane line appearance, then | | D (j (i, k, h)-g'(i, k, h)) | |2=0;
● there is no element, g'(i, k, h in j (i, k, h)) in have element:
$$\left\| D(j(i,k,h) - g'(i,k,h)) \right\|^2 = \sum_{w \in g'(i,k,h)} \begin{cases} (-w/W)^2 & |w/W| < |1 - w/W| \\ (1 - w/W)^2 & |w/W| > |1 - w/W| \end{cases} \qquad (6)$$
● j(i,k,h) has elements and g'(i,k,h) has no element:

$$\|D(j(i,k,h)-g'(i,k,h))\|^2=\sum_{w\in j(i,k,h)}\begin{cases}(w/W)^2 & |w/W|<|1-w/W|\\(w/W-1)^2 & |w/W|>|1-w/W|\end{cases}\qquad(7)$$
● j(i,k,h) has elements and g'(i,k,h) has elements:

$$\|D(j(i,k,h)-g'(i,k,h))\|^2=\sum_{w\in j(i,k,h)}\left(\frac{w}{W}-\frac{w'}{W}\right)^2,\quad |w-w'|<|w-w''|\;\;\forall w''\in g'(i,k,h),\;\; w'\in g'(i,k,h)\qquad(8)$$
In formulas (6) to (8), whenever the predicted position vector j(i,k,h) has elements, w denotes an element of j(i,k,h); if only the label position vector g'(i,k,h) has elements, w denotes an element of g'(i,k,h). W denotes the width of the feature map output by the final convolutional layer of the network. In formula (8), w'' is an arbitrary element of the label position vector g'(i,k,h), and w' is the element of g'(i,k,h) whose absolute difference from w is smaller than that of every other element of g'(i,k,h): all elements w'' of g'(i,k,h) are traversed, and the element with the minimum absolute difference from w is taken as w'. For column coordinates that appear in neither j(i,k,h) nor g'(i,k,h), the regression loss of the corresponding points on the output feature map of the final convolutional layer is set to 0;
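The four cases above can be sketched in code. This is a minimal illustration rather than the patented implementation; the names `regression_loss_row`, `j_row`, and `g_row` are assumptions, with `j_row` and `g_row` holding the column indices in j(i,k,h) and g'(i,k,h) for one row h, and `W` the feature-map width.

```python
def regression_loss_row(j_row, g_row, W):
    """L2-style regression loss ||D(j - g')||^2 for one row, per the
    four cases of equations (6)-(8)."""
    if not j_row and not g_row:          # neither prediction nor label: zero loss
        return 0.0
    if not j_row:                        # label only (missed lane line), eq (6)
        return sum((w / W) ** 2 if abs(w / W) < abs(1 - w / W)
                   else (1 - w / W) ** 2 for w in g_row)
    if not g_row:                        # prediction only (false lane line), eq (7)
        return sum((w / W) ** 2 if abs(w / W) < abs(1 - w / W)
                   else (w / W - 1) ** 2 for w in j_row)
    loss = 0.0                           # both: match each w to the nearest w', eq (8)
    for w in j_row:
        w_prime = min(g_row, key=lambda g: abs(w - g))
        loss += (w / W - w_prime / W) ** 2
    return loss
```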
The present invention trains the full convolutional lane line detection network with the back-propagation algorithm, updating the network using the derivative of the lane line detection loss; the network update gradient is computed as in formula (9):
$$\frac{\partial L}{\partial y(i,k,h,w)}=\alpha\frac{\partial L_C}{\partial y(i,k,h,w)}+\beta\frac{\partial L_R}{\partial y(i,k,h,w)}\qquad(9)$$
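Formula (9) simply mixes the two partial derivatives with the weighting coefficients α and β; a one-line sketch (the function and argument names are assumptions):

```python
def update_gradient(dLc_dy, dLr_dy, alpha, beta):
    """Total update gradient of equation (9): an element-wise weighted sum of
    the classification-loss and regression-loss gradients."""
    return [alpha * c + beta * r for c, r in zip(dLc_dy, dLr_dy)]
```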
Within the update gradient, the derivative of the classification loss is computed as in formula (10):
$$\frac{\partial L_C}{\partial y(i,k,h,w)}=\frac{b\times e^{y(i,k,h,w)}}{\left(\sum_{c=1}^{C}y(i,c,h,w)\right)^2}\bigl(p(i,k,h,w)-g(i,k,h,w)\bigr)\qquad(10)$$
$$b=\left(\sum_{c=1}^{C}y(i,c,h,w)\right)-y(i,k,h,w)\qquad(11)$$
C denotes the total number of channels of the output feature map of the final convolutional layer, and c denotes the channel index of that feature map;
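Formulas (10) and (11) can be transcribed directly. The sketch below follows the symbols exactly as printed (in particular, summing the raw outputs y, not their exponentials, in the denominator); `classification_gradient`, `ys`, `p_val`, and `g_val` are assumed names, with `ys` listing y(i,c,h,w) over all C channels at one position.

```python
import math

def classification_gradient(ys, k, p_val, g_val):
    """Derivative of the classification loss w.r.t. y(i,k,h,w), per
    equations (10)-(11), for predicted probability p_val and label g_val."""
    b = sum(ys) - ys[k]                                            # equation (11)
    return (b * math.exp(ys[k]) / sum(ys) ** 2) * (p_val - g_val)  # equation (10)
```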
According to the definition of ||D(j(i,k,h) - g'(i,k,h))||², the derivative of the regression loss is computed as follows:
● j(i,k,h) has no element and g'(i,k,h) has no element:

$$\frac{\partial L_R}{\partial y(i,k,h,w)}=0\qquad(12)$$
● j(i,k,h) has no element and g'(i,k,h) has elements:

$$\frac{\partial L_R}{\partial y(i,k,h,w)}=\begin{cases}-w/W & |w/W|<|1-w/W|\\1-w/W & |w/W|>|1-w/W|\end{cases},\quad w\in g'(i,k,h)\qquad(13)$$
● j(i,k,h) has elements and g'(i,k,h) has no element:

$$\frac{\partial L_R}{\partial y(i,k,h,w)}=\begin{cases}w/W & |w/W|<|1-w/W|\\w/W-1 & |w/W|>|1-w/W|\end{cases},\quad w\in j(i,k,h)\qquad(14)$$
● j(i,k,h) has elements and g'(i,k,h) has elements:

$$\frac{\partial L_R}{\partial y(i,k,h,w)}=\frac{w}{W}-\frac{w'}{W},\quad |w-w'|<|w-w''|\;\;\forall w''\in g'(i,k,h),\;\; w'\in g'(i,k,h),\;\; w\in j(i,k,h)\qquad(15)$$
In formulas (12) to (15), whenever the predicted position vector j(i,k,h) has elements, w denotes an element of j(i,k,h); if only the label position vector g'(i,k,h) has elements, w denotes an element of g'(i,k,h). W denotes the width of the feature map output by the final convolutional layer of the network. In formula (15), w'' is an arbitrary element of the label position vector g'(i,k,h), and w' is the element of g'(i,k,h) whose absolute difference from w is smaller than that of every other element of g'(i,k,h): all elements w'' of g'(i,k,h) are traversed, and the element with the minimum absolute difference from w is taken as w'. For column coordinates that appear in neither j(i,k,h) nor g'(i,k,h), the derivative of the regression loss for the corresponding points on the output feature map of the final convolutional layer is set to 0;
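Under the same assumptions as the loss sketch, the four derivative cases (12)-(15) can be written as one dispatch function; `regression_gradient` and its argument names are hypothetical.

```python
def regression_gradient(j_row, g_row, w, W):
    """Derivative of the regression loss w.r.t. the network output at
    column w of one row, following equations (12)-(15)."""
    if w in j_row and g_row:             # eq (15): match w to its nearest label w'
        w_prime = min(g_row, key=lambda g: abs(w - g))
        return w / W - w_prime / W
    if w in j_row:                       # eq (14): prediction with no label
        return w / W if abs(w / W) < abs(1 - w / W) else w / W - 1
    if w in g_row:                       # eq (13): label with no prediction
        return -w / W if abs(w / W) < abs(1 - w / W) else 1 - w / W
    return 0.0                           # eq (12): column in neither vector
```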
The computation of the detection loss serves as the forward propagation of the detection loss layer, and the computation of the derivative of the lane line detection loss serves as the error back-propagation of the detection loss layer. The proportionality coefficient of the classification loss, the proportionality coefficient of the regression loss, and the prediction probability threshold are taken as the layer parameters of the detection loss layer. With these layer parameters set, the full convolutional lane line detection network is trained on the detection dataset using the back-propagation algorithm to obtain the lane line detection network model, and the resulting model is used to detect lane lines.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711420524.6A CN108009524B (en) | 2017-12-25 | 2017-12-25 | Lane line detection method based on full convolution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108009524A true CN108009524A (en) | 2018-05-08 |
CN108009524B CN108009524B (en) | 2021-07-09 |
Family
ID=62061049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711420524.6A Active CN108009524B (en) | 2017-12-25 | 2017-12-25 | Lane line detection method based on full convolution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108009524B (en) |
Cited By (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108846328A (en) * | 2018-05-29 | 2018-11-20 | 上海交通大学 | Lane detection method based on geometry regularization constraint |
CN108960055A (en) * | 2018-05-30 | 2018-12-07 | 广西大学 | A kind of method for detecting lane lines based on local line's stage mode feature |
CN109300139A (en) * | 2018-09-30 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines and device |
CN109345589A (en) * | 2018-09-11 | 2019-02-15 | 百度在线网络技术(北京)有限公司 | Method for detecting position, device, equipment and medium based on automatic driving vehicle |
CN109345547A (en) * | 2018-10-19 | 2019-02-15 | 天津天地伟业投资管理有限公司 | Traffic lane line detecting method and device based on deep learning multitask network |
CN109472272A (en) * | 2018-11-05 | 2019-03-15 | 四川长虹电器股份有限公司 | A kind of lines detection method based on from coding convolutional network |
CN109635744A (en) * | 2018-12-13 | 2019-04-16 | 合肥工业大学 | A kind of method for detecting lane lines based on depth segmentation network |
CN109631794A (en) * | 2018-12-28 | 2019-04-16 | 天津大学 | Target object curvature measurement method based on convolutional neural networks |
CN109740469A (en) * | 2018-12-24 | 2019-05-10 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, computer equipment and storage medium |
CN109871778A (en) * | 2019-01-23 | 2019-06-11 | 长安大学 | Lane based on transfer learning keeps control method |
CN109886176A (en) * | 2019-02-14 | 2019-06-14 | 武汉大学 | Method for detecting lane lines under complicated Driving Scene |
CN109902758A (en) * | 2019-03-11 | 2019-06-18 | 重庆邮电大学 | The data set scaling method of lane region recognition based on deep learning |
CN109934272A (en) * | 2019-03-01 | 2019-06-25 | 大连理工大学 | A kind of image matching method based on full convolutional network |
CN109961013A (en) * | 2019-02-21 | 2019-07-02 | 杭州飞步科技有限公司 | Recognition methods, device, equipment and the computer readable storage medium of lane line |
CN110020592A (en) * | 2019-02-03 | 2019-07-16 | 平安科技(深圳)有限公司 | Object detection model training method, device, computer equipment and storage medium |
CN110163077A (en) * | 2019-03-11 | 2019-08-23 | 重庆邮电大学 | A kind of lane recognition method based on full convolutional neural networks |
CN110197151A (en) * | 2019-05-28 | 2019-09-03 | 大连理工大学 | A kind of lane detection system and method for combination double branching networks and custom function network |
CN110232368A (en) * | 2019-06-20 | 2019-09-13 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110348383A (en) * | 2019-07-11 | 2019-10-18 | 重庆市地理信息中心 | A kind of road axis and two-wire extracting method based on convolutional neural networks recurrence |
CN110363160A (en) * | 2019-07-17 | 2019-10-22 | 河南工业大学 | A kind of Multi-lane Lines recognition methods and device |
CN110378174A (en) * | 2018-08-10 | 2019-10-25 | 北京京东尚科信息技术有限公司 | Road extracting method and device |
CN110414386A (en) * | 2019-07-12 | 2019-11-05 | 武汉理工大学 | Based on the method for detecting lane lines for improving SCNN network |
CN110487562A (en) * | 2019-08-21 | 2019-11-22 | 北京航空航天大学 | One kind being used for unpiloted road-holding ability detection system and method |
WO2019228450A1 (en) * | 2018-05-31 | 2019-12-05 | 杭州海康威视数字技术股份有限公司 | Image processing method, device, and equipment, and readable medium |
CN110569384A (en) * | 2019-09-09 | 2019-12-13 | 深圳市乐福衡器有限公司 | AI scanning method |
CN110879943A (en) * | 2018-09-05 | 2020-03-13 | 北京嘀嘀无限科技发展有限公司 | Image data processing method and system |
CN110889318A (en) * | 2018-09-05 | 2020-03-17 | 斯特拉德视觉公司 | Lane detection method and apparatus using CNN |
CN110920631A (en) * | 2019-11-27 | 2020-03-27 | 北京三快在线科技有限公司 | Method and device for controlling vehicle, electronic equipment and readable storage medium |
CN111008600A (en) * | 2019-12-06 | 2020-04-14 | 中国科学技术大学 | Lane line detection method |
CN111046723A (en) * | 2019-10-17 | 2020-04-21 | 安徽清新互联信息科技有限公司 | Deep learning-based lane line detection method |
CN111126327A (en) * | 2019-12-30 | 2020-05-08 | 中国科学院自动化研究所 | Lane line detection method and system, vehicle-mounted system and vehicle |
CN111209780A (en) * | 2018-11-21 | 2020-05-29 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic device and readable storage medium |
CN111259796A (en) * | 2020-01-16 | 2020-06-09 | 东华大学 | Lane line detection method based on image geometric features |
CN111259705A (en) * | 2018-12-03 | 2020-06-09 | 初速度(苏州)科技有限公司 | Special linear lane line detection method and system |
CN111259704A (en) * | 2018-12-03 | 2020-06-09 | 初速度(苏州)科技有限公司 | Training method of dotted lane line endpoint detection model |
CN111369566A (en) * | 2018-12-25 | 2020-07-03 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for determining position of pavement blanking point and storage medium |
CN111460984A (en) * | 2020-03-30 | 2020-07-28 | 华南理工大学 | Global lane line detection method based on key point and gradient balance loss |
CN111462130A (en) * | 2019-01-22 | 2020-07-28 | 斯特拉德视觉公司 | Method and apparatus for detecting lane line included in input image using lane mask |
CN111553210A (en) * | 2020-04-16 | 2020-08-18 | 雄狮汽车科技(南京)有限公司 | Training method of lane line detection model, and lane line detection method and device |
CN111914596A (en) * | 2019-05-09 | 2020-11-10 | 北京四维图新科技股份有限公司 | Lane line detection method, device, system and storage medium |
CN112131914A (en) * | 2019-06-25 | 2020-12-25 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic equipment and intelligent equipment |
CN112215795A (en) * | 2020-09-02 | 2021-01-12 | 苏州超集信息科技有限公司 | Intelligent server component detection method based on deep learning |
CN112241670A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Image processing method and device |
CN112241669A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Target identification method, device, system and equipment, and storage medium |
CN112287912A (en) * | 2020-12-25 | 2021-01-29 | 浙江大华技术股份有限公司 | Deep learning-based lane line detection method and device |
CN112434591A (en) * | 2020-11-19 | 2021-03-02 | 腾讯科技(深圳)有限公司 | Lane line determination method and device |
CN112446230A (en) * | 2019-08-27 | 2021-03-05 | 中车株洲电力机车研究所有限公司 | Method and device for recognizing lane line image |
CN112560717A (en) * | 2020-12-21 | 2021-03-26 | 青岛科技大学 | Deep learning-based lane line detection method |
CN112758107A (en) * | 2021-02-07 | 2021-05-07 | 的卢技术有限公司 | Automatic lane changing method for vehicle, control device, electronic equipment and automobile |
CN112926365A (en) * | 2019-12-06 | 2021-06-08 | 广州汽车集团股份有限公司 | Lane line detection method and system |
CN112926354A (en) * | 2019-12-05 | 2021-06-08 | 北京超星未来科技有限公司 | Deep learning-based lane line detection method and device |
CN113011338A (en) * | 2021-03-19 | 2021-06-22 | 华南理工大学 | Lane line detection method and system |
CN113052135A (en) * | 2021-04-22 | 2021-06-29 | 淮阴工学院 | Lane line detection method and system based on deep neural network Lane-Ar |
CN113095164A (en) * | 2021-03-22 | 2021-07-09 | 西北工业大学 | Lane line detection and positioning method based on reinforcement learning and mark point characterization |
CN113221643A (en) * | 2021-04-06 | 2021-08-06 | 中国科学院合肥物质科学研究院 | Lane line classification method and system adopting cascade network |
CN113239865A (en) * | 2021-05-31 | 2021-08-10 | 西安电子科技大学 | Deep learning-based lane line detection method |
CN113392704A (en) * | 2021-05-12 | 2021-09-14 | 重庆大学 | Mountain road sideline position detection method |
CN113487542A (en) * | 2021-06-16 | 2021-10-08 | 成都唐源电气股份有限公司 | Method for extracting worn area of contact line conductor |
CN113822218A (en) * | 2021-09-30 | 2021-12-21 | 厦门汇利伟业科技有限公司 | Lane line detection method and computer-readable storage medium |
WO2022031228A1 (en) * | 2020-08-07 | 2022-02-10 | Grabtaxi Holdings Pte. Ltd | Method of predicting road attributes, data processing system and computer executable code |
CN114463720A (en) * | 2022-01-25 | 2022-05-10 | 杭州飞步科技有限公司 | Lane line detection method based on line segment intersection-to-parallel ratio loss function |
CN114724119A (en) * | 2022-06-09 | 2022-07-08 | 天津所托瑞安汽车科技有限公司 | Lane line extraction method, lane line detection apparatus, and storage medium |
CN115082888A (en) * | 2022-08-18 | 2022-09-20 | 北京轻舟智航智能技术有限公司 | Lane line detection method and device |
CN115393595A (en) * | 2022-10-27 | 2022-11-25 | 福思(杭州)智能科技有限公司 | Segmentation network model training method, lane line detection method and electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105608456A (en) * | 2015-12-22 | 2016-05-25 | 华中科技大学 | Multi-directional text detection method based on full convolution network |
CN105631426A (en) * | 2015-12-29 | 2016-06-01 | 中国科学院深圳先进技术研究院 | Image text detection method and device |
CN106097444A (en) * | 2016-05-30 | 2016-11-09 | 百度在线网络技术(北京)有限公司 | High-precision map generates method and apparatus |
CN106940562A (en) * | 2017-03-09 | 2017-07-11 | 华南理工大学 | A kind of mobile robot wireless clustered system and neutral net vision navigation method |
CN107229904A (en) * | 2017-04-24 | 2017-10-03 | 东北大学 | A kind of object detection and recognition method based on deep learning |
CN107424161A (en) * | 2017-04-25 | 2017-12-01 | 南京邮电大学 | A kind of indoor scene image layout method of estimation by thick extremely essence |
CN107506765A (en) * | 2017-10-13 | 2017-12-22 | 厦门大学 | A kind of method of the license plate sloped correction based on neutral net |
Non-Patent Citations (2)
Title |
---|
YINGYING ZHU等: "Traffic sign detection and recognition using fully convolutional network guided proposals", 《NEUROCOMPUTING》 * |
曾治等: "一种实时的城市道路车道线识别方法及实现", 《电子技术与软件工程》 * |
Cited By (106)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108846328B (en) * | 2018-05-29 | 2020-10-16 | 上海交通大学 | Lane detection method based on geometric regularization constraint |
CN108846328A (en) * | 2018-05-29 | 2018-11-20 | 上海交通大学 | Lane detection method based on geometry regularization constraint |
CN108960055A (en) * | 2018-05-30 | 2018-12-07 | 广西大学 | A kind of method for detecting lane lines based on local line's stage mode feature |
CN108960055B (en) * | 2018-05-30 | 2021-06-08 | 广西大学 | Lane line detection method based on local line segment mode characteristics |
WO2019228450A1 (en) * | 2018-05-31 | 2019-12-05 | 杭州海康威视数字技术股份有限公司 | Image processing method, device, and equipment, and readable medium |
CN110378174A (en) * | 2018-08-10 | 2019-10-25 | 北京京东尚科信息技术有限公司 | Road extracting method and device |
CN110889318B (en) * | 2018-09-05 | 2024-01-19 | 斯特拉德视觉公司 | Lane detection method and device using CNN |
CN110889318A (en) * | 2018-09-05 | 2020-03-17 | 斯特拉德视觉公司 | Lane detection method and apparatus using CNN |
CN110879943A (en) * | 2018-09-05 | 2020-03-13 | 北京嘀嘀无限科技发展有限公司 | Image data processing method and system |
CN110879943B (en) * | 2018-09-05 | 2022-08-19 | 北京嘀嘀无限科技发展有限公司 | Image data processing method and system |
US11170525B2 (en) | 2018-09-11 | 2021-11-09 | Baidu Online Network Technology (Beijing) Co., Ltd. | Autonomous vehicle based position detection method and apparatus, device and medium |
CN109345589A (en) * | 2018-09-11 | 2019-02-15 | 百度在线网络技术(北京)有限公司 | Method for detecting position, device, equipment and medium based on automatic driving vehicle |
CN109300139A (en) * | 2018-09-30 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines and device |
CN109345547B (en) * | 2018-10-19 | 2021-08-24 | 天津天地伟业投资管理有限公司 | Traffic lane line detection method and device based on deep learning multitask network |
CN109345547A (en) * | 2018-10-19 | 2019-02-15 | 天津天地伟业投资管理有限公司 | Traffic lane line detecting method and device based on deep learning multitask network |
CN109472272A (en) * | 2018-11-05 | 2019-03-15 | 四川长虹电器股份有限公司 | A kind of lines detection method based on from coding convolutional network |
CN111209780A (en) * | 2018-11-21 | 2020-05-29 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic device and readable storage medium |
CN111259705A (en) * | 2018-12-03 | 2020-06-09 | 初速度(苏州)科技有限公司 | Special linear lane line detection method and system |
CN111259704A (en) * | 2018-12-03 | 2020-06-09 | 初速度(苏州)科技有限公司 | Training method of dotted lane line endpoint detection model |
CN111259705B (en) * | 2018-12-03 | 2022-06-10 | 魔门塔(苏州)科技有限公司 | Special linear lane line detection method and system |
CN111259704B (en) * | 2018-12-03 | 2022-06-10 | 魔门塔(苏州)科技有限公司 | Training method of dotted lane line endpoint detection model |
CN109635744A (en) * | 2018-12-13 | 2019-04-16 | 合肥工业大学 | A kind of method for detecting lane lines based on depth segmentation network |
CN109740469A (en) * | 2018-12-24 | 2019-05-10 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, computer equipment and storage medium |
CN111369566A (en) * | 2018-12-25 | 2020-07-03 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for determining position of pavement blanking point and storage medium |
CN111369566B (en) * | 2018-12-25 | 2023-12-05 | 杭州海康威视数字技术股份有限公司 | Method, device, equipment and storage medium for determining position of pavement blanking point |
CN109631794A (en) * | 2018-12-28 | 2019-04-16 | 天津大学 | Target object curvature measurement method based on convolutional neural networks |
CN111462130B (en) * | 2019-01-22 | 2023-10-17 | 斯特拉德视觉公司 | Method and apparatus for detecting lane lines included in input image using lane mask |
CN111462130A (en) * | 2019-01-22 | 2020-07-28 | 斯特拉德视觉公司 | Method and apparatus for detecting lane line included in input image using lane mask |
CN109871778A (en) * | 2019-01-23 | 2019-06-11 | 长安大学 | Lane based on transfer learning keeps control method |
CN109871778B (en) * | 2019-01-23 | 2022-11-15 | 长安大学 | Lane keeping control method based on transfer learning |
CN110020592A (en) * | 2019-02-03 | 2019-07-16 | 平安科技(深圳)有限公司 | Object detection model training method, device, computer equipment and storage medium |
CN110020592B (en) * | 2019-02-03 | 2024-04-09 | 平安科技(深圳)有限公司 | Object detection model training method, device, computer equipment and storage medium |
CN109886176A (en) * | 2019-02-14 | 2019-06-14 | 武汉大学 | Method for detecting lane lines under complicated Driving Scene |
CN109886176B (en) * | 2019-02-14 | 2023-02-24 | 武汉大学 | Lane line detection method in complex driving scene |
CN109961013A (en) * | 2019-02-21 | 2019-07-02 | 杭州飞步科技有限公司 | Recognition methods, device, equipment and the computer readable storage medium of lane line |
CN109934272B (en) * | 2019-03-01 | 2022-03-29 | 大连理工大学 | Image matching method based on full convolution network |
CN109934272A (en) * | 2019-03-01 | 2019-06-25 | 大连理工大学 | A kind of image matching method based on full convolutional network |
CN109902758B (en) * | 2019-03-11 | 2022-05-31 | 重庆邮电大学 | Deep learning-based lane area identification data set calibration method |
CN110163077A (en) * | 2019-03-11 | 2019-08-23 | 重庆邮电大学 | A kind of lane recognition method based on full convolutional neural networks |
CN109902758A (en) * | 2019-03-11 | 2019-06-18 | 重庆邮电大学 | The data set scaling method of lane region recognition based on deep learning |
CN111914596A (en) * | 2019-05-09 | 2020-11-10 | 北京四维图新科技股份有限公司 | Lane line detection method, device, system and storage medium |
CN111914596B (en) * | 2019-05-09 | 2024-04-09 | 北京四维图新科技股份有限公司 | Lane line detection method, device, system and storage medium |
CN110197151A (en) * | 2019-05-28 | 2019-09-03 | 大连理工大学 | Lane line detection system and method combining dual-branch networks and a custom function network |
CN110232368B (en) * | 2019-06-20 | 2021-08-24 | 百度在线网络技术(北京)有限公司 | Lane line detection method, lane line detection device, electronic device, and storage medium |
CN110232368A (en) * | 2019-06-20 | 2019-09-13 | 百度在线网络技术(北京)有限公司 | Lane line detection method, device, electronic device, and storage medium |
CN112131914B (en) * | 2019-06-25 | 2022-10-21 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic equipment and intelligent equipment |
CN112131914A (en) * | 2019-06-25 | 2020-12-25 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic equipment and intelligent equipment |
CN110348383B (en) * | 2019-07-11 | 2020-07-31 | 重庆市地理信息和遥感应用中心(重庆市测绘产品质量检验测试中心) | Road center line and double line extraction method based on convolutional neural network regression |
CN110348383A (en) * | 2019-07-11 | 2019-10-18 | 重庆市地理信息中心 | Road center line and double line extraction method based on convolutional neural network regression |
CN110414386A (en) * | 2019-07-12 | 2019-11-05 | 武汉理工大学 | Lane line detection method based on improved SCNN network |
CN110414386B (en) * | 2019-07-12 | 2022-01-21 | 武汉理工大学 | Lane line detection method based on improved SCNN network |
CN110363160A (en) * | 2019-07-17 | 2019-10-22 | 河南工业大学 | Multi-lane line recognition method and device |
CN110363160B (en) * | 2019-07-17 | 2022-09-23 | 河南工业大学 | Multi-lane line identification method and device |
CN112241669A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Target identification method, device, system and equipment, and storage medium |
CN112241670A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Image processing method and device |
CN112241670B (en) * | 2019-07-18 | 2024-03-01 | 杭州海康威视数字技术股份有限公司 | Image processing method and device |
CN110487562A (en) * | 2019-08-21 | 2019-11-22 | 北京航空航天大学 | Road keeping capability detection system and method for unmanned driving |
CN112446230A (en) * | 2019-08-27 | 2021-03-05 | 中车株洲电力机车研究所有限公司 | Method and device for recognizing lane line image |
CN112446230B (en) * | 2019-08-27 | 2024-04-09 | 中车株洲电力机车研究所有限公司 | Lane line image recognition method and device |
CN110569384B (en) * | 2019-09-09 | 2021-02-26 | 深圳市乐福衡器有限公司 | AI scanning method |
CN110569384A (en) * | 2019-09-09 | 2019-12-13 | 深圳市乐福衡器有限公司 | AI scanning method |
CN111046723B (en) * | 2019-10-17 | 2023-06-02 | 安徽清新互联信息科技有限公司 | Lane line detection method based on deep learning |
CN111046723A (en) * | 2019-10-17 | 2020-04-21 | 安徽清新互联信息科技有限公司 | Deep learning-based lane line detection method |
CN110920631A (en) * | 2019-11-27 | 2020-03-27 | 北京三快在线科技有限公司 | Method and device for controlling vehicle, electronic equipment and readable storage medium |
CN110920631B (en) * | 2019-11-27 | 2021-02-12 | 北京三快在线科技有限公司 | Method and device for controlling vehicle, electronic equipment and readable storage medium |
CN112926354A (en) * | 2019-12-05 | 2021-06-08 | 北京超星未来科技有限公司 | Deep learning-based lane line detection method and device |
CN112926365A (en) * | 2019-12-06 | 2021-06-08 | 广州汽车集团股份有限公司 | Lane line detection method and system |
CN111008600B (en) * | 2019-12-06 | 2023-04-07 | 中国科学技术大学 | Lane line detection method |
CN111008600A (en) * | 2019-12-06 | 2020-04-14 | 中国科学技术大学 | Lane line detection method |
CN111126327A (en) * | 2019-12-30 | 2020-05-08 | 中国科学院自动化研究所 | Lane line detection method and system, vehicle-mounted system and vehicle |
CN111126327B (en) * | 2019-12-30 | 2023-09-15 | 中国科学院自动化研究所 | Lane line detection method and system, vehicle-mounted system and vehicle |
CN111259796A (en) * | 2020-01-16 | 2020-06-09 | 东华大学 | Lane line detection method based on image geometric features |
CN111460984A (en) * | 2020-03-30 | 2020-07-28 | 华南理工大学 | Global lane line detection method based on key point and gradient balance loss |
CN111460984B (en) * | 2020-03-30 | 2023-05-23 | 华南理工大学 | Global lane line detection method based on key points and gradient equalization loss |
CN111553210B (en) * | 2020-04-16 | 2024-04-09 | 雄狮汽车科技(南京)有限公司 | Training method of lane line detection model, lane line detection method and device |
CN111553210A (en) * | 2020-04-16 | 2020-08-18 | 雄狮汽车科技(南京)有限公司 | Training method of lane line detection model, and lane line detection method and device |
WO2022031228A1 (en) * | 2020-08-07 | 2022-02-10 | Grabtaxi Holdings Pte. Ltd | Method of predicting road attributes, data processing system and computer executable code |
US11828620B2 (en) | 2020-08-07 | 2023-11-28 | Grabtaxi Holdings Pte. Ltd. | Method of predicting road attributes, data processing system and computer executable code |
CN112215795B (en) * | 2020-09-02 | 2024-04-09 | 苏州超集信息科技有限公司 | Intelligent detection method for server component based on deep learning |
CN112215795A (en) * | 2020-09-02 | 2021-01-12 | 苏州超集信息科技有限公司 | Intelligent server component detection method based on deep learning |
CN112434591A (en) * | 2020-11-19 | 2021-03-02 | 腾讯科技(深圳)有限公司 | Lane line determination method and device |
CN112560717B (en) * | 2020-12-21 | 2023-04-21 | 青岛科技大学 | Lane line detection method based on deep learning |
CN112560717A (en) * | 2020-12-21 | 2021-03-26 | 青岛科技大学 | Deep learning-based lane line detection method |
CN112287912B (en) * | 2020-12-25 | 2021-03-30 | 浙江大华技术股份有限公司 | Deep learning-based lane line detection method and device |
WO2022134996A1 (en) * | 2020-12-25 | 2022-06-30 | Zhejiang Dahua Technology Co., Ltd. | Lane line detection method based on deep learning, and apparatus |
CN112287912A (en) * | 2020-12-25 | 2021-01-29 | 浙江大华技术股份有限公司 | Deep learning-based lane line detection method and device |
CN112758107A (en) * | 2021-02-07 | 2021-05-07 | 的卢技术有限公司 | Automatic lane changing method for vehicle, control device, electronic equipment and automobile |
CN113011338A (en) * | 2021-03-19 | 2021-06-22 | 华南理工大学 | Lane line detection method and system |
CN113011338B (en) * | 2021-03-19 | 2023-08-22 | 华南理工大学 | Lane line detection method and system |
CN113095164A (en) * | 2021-03-22 | 2021-07-09 | 西北工业大学 | Lane line detection and positioning method based on reinforcement learning and mark point characterization |
CN113221643B (en) * | 2021-04-06 | 2023-04-11 | 中国科学院合肥物质科学研究院 | Lane line classification method and system adopting cascade network |
CN113221643A (en) * | 2021-04-06 | 2021-08-06 | 中国科学院合肥物质科学研究院 | Lane line classification method and system adopting cascade network |
CN113052135B (en) * | 2021-04-22 | 2023-03-24 | 淮阴工学院 | Lane line detection method and system based on deep neural network Lane-Ar |
CN113052135A (en) * | 2021-04-22 | 2021-06-29 | 淮阴工学院 | Lane line detection method and system based on deep neural network Lane-Ar |
CN113392704A (en) * | 2021-05-12 | 2021-09-14 | 重庆大学 | Mountain road sideline position detection method |
CN113239865A (en) * | 2021-05-31 | 2021-08-10 | 西安电子科技大学 | Deep learning-based lane line detection method |
CN113487542A (en) * | 2021-06-16 | 2021-10-08 | 成都唐源电气股份有限公司 | Method for extracting worn area of contact line conductor |
CN113487542B (en) * | 2021-06-16 | 2023-08-04 | 成都唐源电气股份有限公司 | Extraction method of contact net wire abrasion area |
CN113822218A (en) * | 2021-09-30 | 2021-12-21 | 厦门汇利伟业科技有限公司 | Lane line detection method and computer-readable storage medium |
CN114463720A (en) * | 2022-01-25 | 2022-05-10 | 杭州飞步科技有限公司 | Lane line detection method based on line segment intersection-to-parallel ratio loss function |
CN114463720B (en) * | 2022-01-25 | 2022-10-21 | 杭州飞步科技有限公司 | Lane line detection method based on line segment intersection ratio loss function |
CN114724119A (en) * | 2022-06-09 | 2022-07-08 | 天津所托瑞安汽车科技有限公司 | Lane line extraction method, lane line detection apparatus, and storage medium |
CN114724119B (en) * | 2022-06-09 | 2022-09-06 | 天津所托瑞安汽车科技有限公司 | Lane line extraction method, lane line detection device, and storage medium |
CN115082888B (en) * | 2022-08-18 | 2022-10-25 | 北京轻舟智航智能技术有限公司 | Lane line detection method and device |
CN115082888A (en) * | 2022-08-18 | 2022-09-20 | 北京轻舟智航智能技术有限公司 | Lane line detection method and device |
CN115393595A (en) * | 2022-10-27 | 2022-11-25 | 福思(杭州)智能科技有限公司 | Segmentation network model training method, lane line detection method and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN108009524B (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108009524A (en) | Lane line detection method based on a fully convolutional network | |
CN111598030B (en) | Method and system for detecting and segmenting vehicle in aerial image | |
CN110348376B (en) | Pedestrian real-time detection method based on neural network | |
CN104978580B (en) | Insulator recognition method for UAV inspection of power transmission lines | |
CN104392212B (en) | Vision-based road information detection and front vehicle recognition method | |
CN112633176B (en) | Rail transit obstacle detection method based on deep learning | |
CN108171752A (en) | Sea ship video detection and tracking method based on deep learning | |
CN107871126A (en) | Vehicle type recognition method and system based on deep neural network | |
CN109145769A (en) | Target detection network design method fusing image segmentation features | |
CN108171112A (en) | Vehicle identification and tracking based on convolutional neural networks | |
CN106372577A (en) | Deep learning-based automatic traffic sign identification and marking method | |
CN105354568A (en) | Convolutional neural network based vehicle logo identification method | |
CN108280397A (en) | Human body image hair detection method based on depth convolutional neural networks | |
CN110163077A (en) | Lane recognition method based on fully convolutional neural networks | |
He et al. | Rail transit obstacle detection based on improved CNN | |
CN105022990A (en) | Rapid water surface target detection method based on unmanned vessel application | |
CN110226170A (en) | Traffic sign recognition method in rain and snow weather | |
CN104240256A (en) | Image saliency detection method based on hierarchical sparse modeling | |
CN107767400A (en) | Moving target detection method for remote sensing image sequences based on hierarchical saliency analysis | |
CN111460894B (en) | Intelligent car logo detection method based on convolutional neural network | |
CN106127812A (en) | Video surveillance-based passenger flow statistics method for non-gated areas of passenger stations | |
CN107038442A (en) | License plate detection and global recognition method based on deep learning | |
He et al. | Detection of foreign matter on high-speed train underbody based on deep learning | |
CN109948643A (en) | Vehicle type classification method based on deep network ensemble model | |
CN106971193A (en) | Object detection method based on structural Haar features and AdaBoost | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||