CN109345547A - Traffic lane line detecting method and device based on deep learning multitask network - Google Patents
- Publication number
- CN109345547A (application CN201811222879.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- lane line
- block
- coordinate
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
Abstract
The present invention discloses a traffic-scene lane line detection method and device based on a deep learning multitask network. Unlike traditional lane detection algorithms based on straight-line detection, the method introduces a multitask convolutional neural network (CNN) to extract lane line feature information, making full use of the detail in every layer of the image. First, image brightness and edge information are collected and assessed; the crop size is adjusted according to the assessment result and the image is divided into several picture blocks. The blocks are then normalized and fed into the deep learning network, which outputs the lane line type and coordinates for each block. Finally, the spatial correlation between blocks is used to fit the lane lines, achieving accurate and fast recognition of lane line information across different scenes and brightness levels. The invention is suitable for checkpoint camera and electronic police applications in the intelligent transportation field; it makes full use of the deep learning network while guaranteeing real-time image analysis, effectively improving the adaptability and accuracy of lane line detection.
Description
Technical field
The invention belongs to the field of intelligent video surveillance, and in particular relates to a traffic-scene lane line detection method and device based on a deep learning multitask network.
Background technique
At present, the main method used in industry for lane line recognition is Hough line detection. This approach first converts the color image to a grayscale image, losing the color information, and then applies binarization, which inevitably discards a large amount of edge detail and further reduces detection accuracy. It also adapts poorly to different scenes: lane lines cannot be reliably identified under conditions such as reflective wet roads, low brightness at night, shadow coverage, and occlusion.
Summary of the invention
The object of the present invention is to provide a traffic lane line detection method and device based on a deep learning multitask network. Built on multitask deep learning network training, it adapts well to different brightness, color temperature, and weather conditions and to various complex traffic scenes, while meeting the real-time requirements of intelligent transportation; it is accurate, fast, and highly adaptable.
In order to achieve the above objectives, the technical scheme of the present invention is realized as follows:
A traffic lane line detection method based on a deep learning multitask network, comprising:
S1, collecting brightness and edge information: extracting the average image brightness and extracting the image edge strength with a two-dimensional convolution operator;
S2, adaptive image cropping: calculating an edge strength threshold per unit area from the ratio of the average image brightness to the edge strength, and adaptively cropping the image into several picture blocks, in preparation for image normalization;
S3, image normalization: scaling the cropped image blocks to a fixed size according to the edge strength threshold, in preparation for feeding them into the deep learning network;
S4, deep learning: feeding the uniformly sized image blocks into a compound convolutional neural network, which generates lateral lane line, longitudinal lane line, and non-lane-line classification results together with lane line coordinates;
S5, category analysis and coordinate restoration: extracting the coordinates of the lane line blocks according to the classification information output by the deep learning network model, and calculating the corresponding coordinate values in the original image from each block's position and size in the original image;
S6, lane line fitting: using the spatial correlation between blocks, fitting the coordinates of all blocks belonging to the same lane line according to each block's classification information and coordinate values, and outputting the final lane line coordinates.
Further, step S1 specifically includes:
S11, based on the spatial correlation of pixels in real scene images, down-sampling the image at a rate of 1/(3*3), i.e. taking the center point of every 3x3 neighborhood, so that an original image of size W*H yields a thumbnail of size (W/3)*(H/3), with zero padding where the border does not fill a full 3x3 block, and computing the average brightness value;
S12, extracting edge information by two-dimensional convolution; the convolution operator is an optimized variant of the Scharr operator, split into horizontal and vertical directions, with zero padding at the borders; the horizontal and vertical gradients are obtained separately and merged to give the average edge information strength of the image.
Further, step S2 specifically includes: computing the segmented image size S with a piecewise function, where Smin=112, Smax=224, and each cropped picture block is a square of side S.
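As a rough illustration, the crop-size rule can be sketched as follows. The patent's exact piecewise function is not reproduced in this text, so the mapping below is an assumption based on the described behavior: scenes with a low brightness/edge ratio R (noisy or detail-rich, R below 0.75) get the smallest blocks, simple scenes (R above 3) the largest, with linear interpolation in between.

```python
# Hypothetical sketch of the adaptive crop-size rule; the interpolation
# form between Smin and Smax is an assumption, not the patent's formula.
S_MIN, S_MAX = 112, 224

def block_size(r: float) -> int:
    """Map the brightness/edge ratio R to a square crop size S."""
    if r < 0.75:          # noisy or detail-rich scene -> smallest blocks
        return S_MIN
    if r > 3.0:           # simple scene -> largest blocks
        return S_MAX
    # linear interpolation between the two bounds (assumed form)
    t = (r - 0.75) / (3.0 - 0.75)
    return int(round(S_MIN + t * (S_MAX - S_MIN)))
```

The two endpoints match the stated thresholds; any monotone mapping between them would serve the same purpose.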
Further, the deep learning network in step S4 evolves from the ResNet and Darknet networks: a residual connection is added after each CRP layer (two convolution layers plus one pooling layer), and, following the MTCNN framework, a multitask training function is added. The network outputs three classification results (lateral lane line, longitudinal lane line, non-lane line) with three confidence levels, together with the start and end point coordinates of the line segment, for a total of 10 output parameters.
Further, step S6 specifically comprises: obtaining the position of each picture block in the original image and the coordinates of the line segment within each block in the original image, clustering over the whole image to find blocks with similar slope and spatial position, fitting the block line segment coordinates within each cluster to a line by least squares, and outputting the final lane line coordinates, completing automatic lane line recognition.
Another aspect of the present invention provides a traffic lane line detection device based on a deep learning multitask network, comprising:
a brightness and edge information collection module: extracts the average image brightness and extracts the image edge strength with a two-dimensional convolution operator;
an adaptive image cropping module: calculates an edge strength threshold per unit area from the ratio of the average image brightness to the edge strength and adaptively crops the image into several picture blocks, in preparation for image normalization;
an image normalization module: scales the cropped image blocks to a fixed size according to the edge strength threshold, in preparation for feeding them into the deep learning model;
a compound convolutional neural network module: takes the uniformly sized image blocks as input and generates lateral lane line, longitudinal lane line, and non-lane-line classification results together with lane line coordinates;
a category analysis and coordinate restoration module: extracts the coordinates of the lane line blocks according to the classification information output by the deep learning network model and calculates the corresponding coordinate values in the original image from each block's position and size;
a lane line fitting module: uses the spatial correlation between blocks to fit the coordinates of all blocks belonging to the same lane line according to each block's classification information and coordinate values, and outputs the final lane line coordinates.
Further, the brightness and edge information collection module includes:
an average brightness unit: based on the spatial correlation of pixels in real scene images, down-samples the image at a rate of 1/(3*3), i.e. takes the center point of every 3x3 neighborhood, so that an original image of size W*H yields a thumbnail of size (W/3)*(H/3), with zero padding where the border does not fill a full 3x3 block, and computes the average brightness value;
an average edge information strength unit: extracts edge information by two-dimensional convolution; the convolution operator is an optimized variant of the Scharr operator, split into horizontal and vertical directions, with zero padding at the borders; the horizontal and vertical gradients are obtained separately and merged to give the average edge information strength of the image.
Further, the adaptive image cropping module includes a piecewise function unit, which computes the segmented image size S with a piecewise function, where Smin=112, Smax=224, and each cropped picture block is a square of side S.
Further, the compound convolutional neural network module includes a deep learning network unit, which evolves from the ResNet and Darknet networks, with a residual connection added after each CRP layer (two convolution layers plus one pooling layer) and, following the MTCNN framework, a multitask training function: it outputs three classification results (lateral lane line, longitudinal lane line, non-lane line) with three confidence levels, together with the start and end point coordinates of the line segment, for a total of 10 output parameters.
Further, the lane line fitting module includes a cluster fitting unit, which obtains the position of each picture block in the original image and the coordinates of the line segment within each block in the original image, clusters over the whole image to find blocks with similar slope and spatial position, fits the block line segment coordinates within each cluster to a line by least squares, and outputs the final lane line coordinates, completing automatic lane line recognition.
Compared with the prior art, the present invention has the following beneficial effects: based on multitask deep learning network training, it has good adaptability under different brightness, color temperature, and weather conditions and in various complex traffic scenes, while meeting the real-time requirements of intelligent transportation; it is accurate, fast, and adaptable. Compared with traditional lane line detection methods, it is clearly improved in both adaptability and accuracy.
Brief description of the drawings
Fig. 1 is a schematic diagram of the universal CRP layer structure in the deep learning network of the embodiment of the present invention;
Fig. 2 is a schematic diagram of the training network structure of the embodiment of the present invention;
Fig. 3 is a schematic diagram of the overall structure of the embodiment of the present invention.
Specific embodiment
It should be noted that, in the absence of conflict, the embodiments of the present invention and the features within them may be combined with each other.
The new traffic-scene lane line detection method of the invention based on a deep learning multitask network is implemented as follows:
When extracting brightness and edge information, the image is first down-sampled based on the spatial correlation of pixels in real scene images, at a sampling rate of 1/(3*3): the center point of every 3x3 neighborhood is taken, so an original image of size W*H yields a thumbnail of size (W/3)*(H/3), with zero padding where the border does not fill a full 3x3 block, and the average brightness value B_avg is computed. Two-dimensional convolution is then applied to extract edge information; the convolution operator is an optimized variant of the Scharr operator that strengthens the correlation of adjacent pixels and is split into horizontal and vertical directions. With zero padding at the borders, the horizontal and vertical gradients are obtained separately and merged into the average edge information strength L_avg. The relative coefficient of image brightness and edge information is R = B_avg / L_avg. Experimental data show that the brightness-edge relative coefficient of a traffic scene lies between 0.75 and 3: a value below 0.75 indicates that the image is noisy or rich in detail, while a value above 3 indicates a relatively simple scene. Noisier and more detailed scenes are harder to recognize, so the picture blocks fed into the deep learning model must be smaller. The segmented image size S is computed with a piecewise function, where Smin=112 and Smax=224 and each cropped block is a square of side S; the blocks are then normalized, uniformly scaled to 224*224, and fed into the deep learning model.
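The brightness and edge statistics above can be sketched in plain Python. The standard Scharr kernels are used here because the patent's optimized coefficients are not given, and the reading R = B_avg / L_avg (brightness over edge strength, so detail-rich scenes score low) is an interpretation of the text:

```python
# Standard Scharr kernels (the patent uses an optimised variant whose
# coefficients are not given; these are stand-ins).
SCHARR_X = [[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]]
SCHARR_Y = [[-3, -10, -3], [0, 0, 0], [3, 10, 3]]

def scene_stats(img):
    """Return (b_avg, l_avg, r) for a grayscale image given as a list of
    rows: average brightness, average merged gradient magnitude, and
    their ratio R = B_avg / L_avg."""
    h, w = len(img), len(img[0])
    b_avg = sum(sum(row) for row in img) / (h * w)
    grad_sum = 0.0
    for y in range(h):
        for x in range(w):
            gx = gy = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    # zero padding outside the image, as described
                    v = img[yy][xx] if 0 <= yy < h and 0 <= xx < w else 0
                    gx += SCHARR_X[dy + 1][dx + 1] * v
                    gy += SCHARR_Y[dy + 1][dx + 1] * v
            grad_sum += (gx * gx + gy * gy) ** 0.5  # merge the two gradients
    l_avg = grad_sum / (h * w)
    return b_avg, l_avg, (b_avg / l_avg if l_avg else float("inf"))
```

A production version would run the Scharr pass on the 1/(3*3) thumbnail rather than the full image, exactly as the down-sampling step intends.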
The deep learning network selected by the present invention evolves from the ResNet and Darknet networks. A residual connection is added after each CRP layer (two convolution layers plus one pooling layer), which deepens the network and fully extracts image features while effectively avoiding gradient dispersion. Following the MTCNN framework, a multitask training function is added: the network outputs three classification results (lateral lane line, longitudinal lane line, non-lane line) with three confidence levels, plus the start and end point coordinates of the line segment, for a total of 10 output parameters. The network structure is shown in Figs. 1 and 2: Fig. 1 shows the universal structure of a CRP layer in the network, comprising two 3*3 convolution layers, one ReLU layer, and one max-pooling layer; Fig. 2 shows the training network structure.
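A minimal sketch of consuming the 10-parameter output head follows. The exact ordering of the values (3 class logits, then 3 confidences, then the 4 endpoint coordinates) is an assumption, since the patent only states the totals:

```python
import math

def parse_head(raw):
    """Split a hypothetical 10-value network output into
    (class_id, class_probs, confidences, (x1, y1, x2, y2)).
    Layout is an assumption: 3 class logits, 3 confidences, 4 coords."""
    assert len(raw) == 10
    logits, confs, coords = raw[:3], raw[3:6], raw[6:]
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]  # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    # assumed class order: 0=lateral, 1=longitudinal, 2=non-lane
    cls = probs.index(max(probs))
    return cls, probs, confs, tuple(coords)
```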
The classification and regression results output by the deep learning model are fed into the category analysis and coordinate restoration module, which obtains the position of each picture block in the original image and the coordinates of the line segment within each block in the original image. Clustering is then performed over the whole image to find blocks with similar slope and spatial position; within each cluster, the block line segment coordinates are fitted to a line by least squares, and the final lane line coordinates are output, completing the automatic lane line recognition procedure.
In summary, the new traffic-scene lane line detection method of the invention based on a deep learning multitask network is composed of a brightness and edge information collection module, an adaptive image cropping module, an image normalization module, a deep learning network feature extraction module, a category analysis and coordinate restoration module, and a lane line fitting module. Based on multitask deep learning network training, it has good adaptability under different brightness, color temperature, and weather conditions and in various complex traffic scenes, while meeting the real-time requirements of intelligent transportation; it is accurate, fast, and adaptable. The structure of the traffic-scene lane line detection system based on the deep learning multitask network is shown in Fig. 3.
The present invention can be implemented on mainstream industry embedded platforms with a deep learning module. The training samples use 224*224 pictures of lateral and longitudinal lane lines as positive samples and other pictures cropped from traffic scenes as negative samples; 20% of the positive samples are additionally selected as samples for coordinate regression. The training sample ratio is Hor Lane : Ver Lane : Neg Sample : Hor Landmark : Ver Landmark = 5:5:15:1:1. In Figs. 1-2, after the common feature extraction layers CRP1 to CRP3, the classification and regression tasks extract features separately from the shared feature map: the classification feature extraction path is CRP4-1 and CRP5-1, and the regression feature extraction path is CRP4-2 and CRP5-2. The classification training loss is evaluated with a sparse cross-entropy function, where k is the class index and p_k is the classification confidence of that class after softmax, with 0 ≤ p_k ≤ 1; the sum of the cross entropies of the three classes is the sparse cross-entropy loss of the classification. The coordinate regression loss uses a Euclidean sum-of-squares function, where x_k, y_k and X_k, Y_k denote the predicted regression coordinates and the actual values calibrated by the label, respectively. The final loss function is the weighted sum of the classification and regression losses, with a proportionality coefficient ξ applied to the two loss values; ξ is set to 0.5 during training.
Tests show that, compared with traditional lane line detection methods, the traffic-scene lane line detection method of the invention based on a deep learning multitask network is clearly improved in both adaptability and accuracy. It performs well under different brightness, color temperature, and weather conditions and in various complex traffic scenes, while meeting the real-time requirements of intelligent transportation; it is accurate, fast, and adaptable, and meets the requirements of current front-end traffic equipment.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention in any form. Although the invention has been described above by way of preferred embodiments, any person skilled in the art may, without departing from the scope of the technical solution of the invention, make minor changes or modifications using the technical content disclosed above to produce equivalent embodiments; any simple modification, equivalent change, or refinement of the above embodiments made according to the technical essence of the invention, without departing from the content of the technical solution, falls within the scope of the technical solution of the present invention.
Claims (10)
1. A traffic lane line detection method based on a deep learning multitask network, characterized by comprising:
S1, collecting brightness and edge information: extracting the average image brightness and extracting the image edge strength with a two-dimensional convolution operator;
S2, adaptive image cropping: calculating an edge strength threshold per unit area from the ratio of the average image brightness to the edge strength, and adaptively cropping the image into several picture blocks, in preparation for image normalization;
S3, image normalization: scaling the cropped image blocks to a fixed size according to the edge strength threshold, in preparation for feeding them into the deep learning network;
S4, deep learning: feeding the uniformly sized image blocks into a compound convolutional neural network, which generates lateral lane line, longitudinal lane line, and non-lane-line classification results together with lane line coordinates;
S5, category analysis and coordinate restoration: extracting the coordinates of the lane line blocks according to the classification information output by the deep learning network model, and calculating the corresponding coordinate values in the original image from each block's position and size in the original image;
S6, lane line fitting: using the spatial correlation between blocks, fitting the coordinates of all blocks belonging to the same lane line according to each block's classification information and coordinate values, and outputting the final lane line coordinates.
2. The method according to claim 1, characterized in that step S1 specifically includes:
S11, based on the spatial correlation of pixels in real scene images, down-sampling the image at a rate of 1/(3*3), i.e. taking the center point of every 3x3 neighborhood, so that an original image of size W*H yields a thumbnail of size (W/3)*(H/3), with zero padding where the border does not fill a full 3x3 block, and computing the average brightness value;
S12, extracting edge information by two-dimensional convolution; the convolution operator is an optimized variant of the Scharr operator, split into horizontal and vertical directions, with zero padding at the borders; the horizontal and vertical gradients are obtained separately and merged to give the average edge information strength of the image.
3. The method according to claim 1, characterized in that step S2 specifically includes: computing the segmented image size S with a piecewise function, where Smin=112, Smax=224, and each cropped picture block is a square of side S.
4. The method according to claim 1, characterized in that the deep learning network in step S4 evolves from the ResNet and Darknet networks, with a residual connection added after each CRP layer (two convolution layers plus one pooling layer) and, following the MTCNN framework, a multitask training function: the network outputs three classification results (lateral lane line, longitudinal lane line, non-lane line) with three confidence levels, together with the start and end point coordinates of the line segment, for a total of 10 output parameters.
5. The method according to claim 1, characterized in that step S6 specifically comprises: obtaining the position of each picture block in the original image and the coordinates of the line segment within each block in the original image, clustering over the whole image to find blocks with similar slope and spatial position, fitting the block line segment coordinates within each cluster to a line by least squares, and outputting the final lane line coordinates, completing automatic lane line recognition.
6. A traffic lane line detection device based on a deep learning multitask network, characterized by comprising:
a brightness and edge information collection module, which extracts the average image brightness and extracts the image edge strength with a two-dimensional convolution operator;
an adaptive image cropping module, which calculates an edge strength threshold per unit area from the ratio of the average image brightness to the edge strength and adaptively crops the image into several picture blocks, in preparation for image normalization;
an image normalization module, which scales the cropped image blocks to a fixed size according to the edge strength threshold, in preparation for feeding them into the deep learning model;
a compound convolutional neural network module, which takes the uniformly sized image blocks as input and generates lateral lane line, longitudinal lane line, and non-lane-line classification results together with lane line coordinates;
a category analysis and coordinate restoration module, which extracts the coordinates of the lane line blocks according to the classification information output by the deep learning network model and calculates the corresponding coordinate values in the original image from each block's position and size;
a lane line fitting module, which uses the spatial correlation between blocks to fit the coordinates of all blocks belonging to the same lane line according to each block's classification information and coordinate values, and outputs the final lane line coordinates.
7. The device according to claim 6, characterized in that the brightness and edge information collection module includes:
an average brightness unit, which, based on the spatial correlation of pixels in real scene images, down-samples the image at a rate of 1/(3*3), i.e. takes the center point of every 3x3 neighborhood, so that an original image of size W*H yields a thumbnail of size (W/3)*(H/3), with zero padding where the border does not fill a full 3x3 block, and computes the average brightness value;
an average edge information strength unit, which extracts edge information by two-dimensional convolution; the convolution operator is an optimized variant of the Scharr operator, split into horizontal and vertical directions, with zero padding at the borders; the horizontal and vertical gradients are obtained separately and merged to give the average edge information strength of the image.
8. The device according to claim 6, characterized in that the adaptive image cropping module includes a piecewise function unit, which computes the segmented image size S with a piecewise function, where Smin=112, Smax=224, and each cropped picture block is a square of side S.
9. The device according to claim 6, characterized in that the compound convolutional neural network module includes a deep learning network unit, which evolves from the ResNet and Darknet networks, with a residual connection added after each CRP layer (two convolution layers plus one pooling layer) and, following the MTCNN framework, a multitask training function: it outputs three classification results (lateral lane line, longitudinal lane line, non-lane line) with three confidence levels, together with the start and end point coordinates of the line segment, for a total of 10 output parameters.
10. The device according to claim 6, characterized in that the lane line fitting module includes a cluster fitting unit, which obtains the position of each picture block in the original image and the coordinates of the line segment within each block in the original image, clusters over the whole image to find blocks with similar slope and spatial position, fits the block line segment coordinates within each cluster to a line by least squares, and outputs the final lane line coordinates, completing automatic lane line recognition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811222879.9A CN109345547B (en) | 2018-10-19 | 2018-10-19 | Traffic lane line detection method and device based on deep learning multitask network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109345547A true CN109345547A (en) | 2019-02-15 |
CN109345547B CN109345547B (en) | 2021-08-24 |
Family
ID=65311326
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811222879.9A Active CN109345547B (en) | 2018-10-19 | 2018-10-19 | Traffic lane line detection method and device based on deep learning multitask network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109345547B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102073848A (en) * | 2010-12-31 | 2011-05-25 | 深圳市永达电子股份有限公司 | Intelligent optimization-based road recognition system and method |
CN102567713A (en) * | 2010-11-30 | 2012-07-11 | 富士重工业株式会社 | Image processing apparatus |
CN106203398A (en) * | 2016-07-26 | 2016-12-07 | 东软集团股份有限公司 | A kind of detect the method for lane boundary, device and equipment |
CN108009524A (en) * | 2017-12-25 | 2018-05-08 | 西北工业大学 | A kind of method for detecting lane lines based on full convolutional network |
US20180204073A1 (en) * | 2017-01-16 | 2018-07-19 | Denso Corporation | Lane detection apparatus |
US20180225527A1 (en) * | 2015-08-03 | 2018-08-09 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, apparatus, storage medium and device for modeling lane line identification, and method, apparatus, storage medium and device for identifying lane line |
- 2018-10-19 CN CN201811222879.9A patent/CN109345547B/en active Active
Non-Patent Citations (2)
Title |
---|
Thuniki Yashwanth Reddy et al.: "A novel variable block-size image compression based on edge detection", 2017 Conference on Information and Communication Technology (CICT'17) *
Jia Yun et al.: "Research on an improved image threshold segmentation algorithm", Optical Technique *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109961105A (en) * | 2019-04-08 | 2019-07-02 | 上海市测绘院 | A kind of Classification of High Resolution Satellite Images method based on multitask deep learning |
TWI694019B (en) * | 2019-06-05 | 2020-05-21 | 國立中正大學 | Lane line detection and tracking method |
CN110363182A (en) * | 2019-07-24 | 2019-10-22 | 北京信息科技大学 | Method for detecting lane lines based on deep learning |
CN110363182B (en) * | 2019-07-24 | 2021-06-18 | 北京信息科技大学 | Deep learning-based lane line detection method |
CN111400040A (en) * | 2020-03-12 | 2020-07-10 | 重庆大学 | Industrial Internet system based on deep learning and edge calculation and working method |
CN112966639B (en) * | 2021-03-22 | 2024-04-26 | 新疆爱华盈通信息技术有限公司 | Vehicle detection method, device, electronic equipment and storage medium |
CN112966639A (en) * | 2021-03-22 | 2021-06-15 | 新疆爱华盈通信息技术有限公司 | Vehicle detection method and device, electronic equipment and storage medium |
CN113516010A (en) * | 2021-04-08 | 2021-10-19 | 柯利达信息技术有限公司 | Intelligent network identification and processing system for foreign matters on highway |
CN113313071A (en) * | 2021-06-28 | 2021-08-27 | 浙江同善人工智能技术有限公司 | Road area identification method and system |
CN113822218A (en) * | 2021-09-30 | 2021-12-21 | 厦门汇利伟业科技有限公司 | Lane line detection method and computer-readable storage medium |
CN114022863A (en) * | 2021-10-28 | 2022-02-08 | 广东工业大学 | Deep learning-based lane line detection method, system, computer and storage medium |
CN115376082A (en) * | 2022-08-02 | 2022-11-22 | 北京理工大学 | Lane line detection method integrating traditional feature extraction and deep neural network |
CN116543365A (en) * | 2023-07-06 | 2023-08-04 | 广汽埃安新能源汽车股份有限公司 | Lane line identification method and device, electronic equipment and storage medium |
CN116543365B (en) * | 2023-07-06 | 2023-10-10 | 广汽埃安新能源汽车股份有限公司 | Lane line identification method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109345547B (en) | 2021-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109345547A (en) | Traffic lane line detecting method and device based on deep learning multitask network | |
CN106651872B (en) | Pavement crack identification method and system based on Prewitt operator | |
CN112101175A (en) | Expressway vehicle detection and multi-attribute feature extraction method based on local images | |
CN109657632B (en) | Lane line detection and identification method | |
CN111046880A (en) | Infrared target image segmentation method and system, electronic device and storage medium | |
CN104134068B (en) | Monitoring vehicle feature representation and classification method based on sparse coding | |
CN104766071B (en) | A kind of traffic lights fast algorithm of detecting applied to pilotless automobile | |
CN106709412B (en) | Traffic sign detection method and device | |
CN109631848A (en) | Electric line foreign matter intruding detection system and detection method | |
CN109241902A (en) | A kind of landslide detection method based on multi-scale feature fusion | |
CN112365467B (en) | Foggy image visibility estimation method based on single image depth estimation | |
CN105069816B (en) | A kind of method and system of inlet and outlet people flow rate statistical | |
CN110175556B (en) | Remote sensing image cloud detection method based on Sobel operator | |
CN105184308B (en) | Remote sensing image building detection classification method based on global optimization decision | |
CN111815528A (en) | Bad weather image classification enhancement method based on convolution model and feature fusion | |
Su et al. | A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification | |
CN112465699A (en) | Remote sensing image splicing method based on cloud detection | |
Alami et al. | Local fog detection based on saturation and RGB-correlation | |
CN111027564A (en) | Low-illumination imaging license plate recognition method and device based on deep learning integration | |
CN112863194B (en) | Image processing method, device, terminal and medium | |
CN112115778B (en) | Intelligent lane line identification method under ring simulation condition | |
CN112052811A (en) | Pasture grassland desertification detection method based on artificial intelligence and aerial image | |
CN109800693B (en) | Night vehicle detection method based on color channel mixing characteristics | |
CN115294486B (en) | Method for identifying and judging illegal garbage based on unmanned aerial vehicle and artificial intelligence | |
CN106815580A (en) | A kind of license plate locating method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
TR01 | Transfer of patent right |
Effective date of registration: 20210901
Address after: No. 8, Haitai Huake 2nd Road, Huayuan Industrial Zone, Binhai New Area, Tianjin, 300450
Patentee after: TIANDY TECHNOLOGIES Co.,Ltd.
Address before: 300384 A222, Building 4, No. 8, Haitai Huake 2nd Road, Huayuan Industrial Zone (outside the ring), High-tech Zone, Binhai New Area, Tianjin
Patentee before: TIANJIN TIANDI WEIYE INVESTMENT MANAGEMENT Co.,Ltd.
|