CN113780189A - Lane line detection method based on U-Net improvement - Google Patents
- Publication number
- CN113780189A (application CN202111075470.0A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- net
- lane
- line detection
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
Abstract
The invention discloses a lane line detection method based on an improved U-Net, which comprises the following steps: establishing an improved U-Net lane line detection model, where the improvement of U-Net comprises reducing the number of output channels of each layer in the U-Net model and modifying the original U-Net architecture; and acquiring a target road image set containing lane lines and transmitting it to the trained lane line detection model for image processing to obtain lane line detection result images. The lane line detection model constructed by the invention is lightweight, accurate, fast, and robust; it can detect lane lines that are severely occluded, unmarked, or incomplete, and obtains good detection results in complex and challenging scenes. While maintaining high accuracy, the model occupies relatively little memory, which greatly improves detection efficiency.
Description
Technical Field
The invention relates to the field of intelligent automobile sensing, in particular to a lane line detection method based on U-Net improvement.
Background
With the development of computer vision and the explosive growth of its successful applications, Advanced Driver Assistance Systems (ADAS) have been studied extensively by many companies and research institutions, such as the autonomous vehicle programs of Tesla, Baidu, and Google. Lane detection is a key basic component of automatic driving and plays an important role in the safe operation of autonomous vehicles. In fact, lane detection not only allows the vehicle to locate its own position but also provides important information for control decisions and path planning, thereby helping the driver drive more safely in some situations. Lane detection has therefore been a continuing concern of the research community.
Many studies on lane detection have been reported; the methods fall roughly into conventional methods and deep-learning-based methods. Conventional methods are well suited to detecting lane markings that can be characterized by static, hand-crafted features. However, lane detection is a different matter when the autonomous vehicle operates in a dynamic driving environment. Although lane lines themselves are regular, their shape and color vary with the scene: shapes include straight, curved, dashed, and blurred lane lines, while colors may be white, yellow, or altered by lighting. Combined with the effects of weather, lighting conditions, and the presence of pedestrians and vehicles, these complex and varied elements make lane detection more uncertain. Lane detection still faces many challenging and complex driving scenarios, such as unclear edge information and lane lines, severe occlusion, unmarked lane lines, and more than 4 lane lines. Building a more rational and robust lane detection model that covers these problems therefore remains a challenging task, especially improving detection accuracy while keeping the number of parameters as low as possible and maintaining high speed.
Disclosure of Invention
In order to solve the above problems, the present invention aims to provide an improved lane line detection method based on U-Net.
To achieve the above purpose, the invention adopts the following technical scheme:
a lane line detection method based on U-Net improvement comprises the following steps:
s1: establishing a lane line detection U-Net improved model; the improvement of U-Net comprises:
reducing the number of channels of the convolution layer in the U-Net model, and modifying the original U-Net architecture;
s2: and acquiring a target road image set with a lane line, and transmitting the target road image set to the trained lane line detection model for graphic processing to obtain a lane line detection result image.
Preferably, the improvement of U-Net in step S1 further includes: replacing at least one convolutional layer with a residual block, to alleviate the gradient vanishing or degradation problem and improve the accuracy of the proposed model.
Preferably, the number of residual blocks is 1, and the residual block replaces the second standard convolutional layer of the last encoder stage.
Preferably, the residual block is a non-bottleneck residual block.
Preferably, the final output channel count of the lane line detection model is set to 5, to solve the lane line instance segmentation problem.
The invention has the beneficial effects that:
the invention combines the advantages of the residual block and the U-Net to construct a lane detection model for lane detection, the detection model has high precision, rapidness and good robustness, can detect the existence of serious sheltering and unmarked lanes and incomplete lane lines, and obtains better detection results in some complex and challenging scenes; the invention can identify different lane lines and solve the example segmentation problem of multiple lane lines; the lane detection model established by the invention has the advantages of keeping high precision, occupying relatively less memory, reducing the calculation amount, saving the resources and greatly improving the detection efficiency.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic structural diagram of a U-Net model;
FIG. 3 is a graph of a model lane example segmentation effect replacing different residual modules;
FIG. 4 is a schematic diagram of different residual module structures;
FIG. 5 is a comparison graph of the detection effect of the model of the present application and other models on the TuSimple lane line dataset;
FIG. 6 is a comparison graph of the detection effect of the model of the present application and other models on the unsupervised LLAMAS lane line dataset.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings. In the description of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
Example 1
As shown in fig. 1, the invention provides a lane line detection method based on U-Net improvement, comprising the following steps:
s1: establishing a lane line detection U-Net improved model; the improvement of U-Net comprises:
reducing the number of output channels of each layer in the U-Net model, and modifying the original U-Net architecture;
s2: and acquiring a target road image set with a lane line, and transmitting the target road image set to the trained lane line detection model for graphic processing to obtain a lane line detection result image.
The lane detection model is improved on the basis of U-Net. The original U-Net structure, shown in FIG. 2, comprises a feature extraction part and an up-sampling part. The feature extraction part produces one scale after each pooling layer, giving 5 scales including the original image. At each up-sampling step, the up-sampling part fuses the channels of the corresponding scale from the feature extraction part, combining the lower-level features of the contracting path with the up-sampled features to obtain more accurate context information and a better segmentation result. However, as the pooling layers deepen, the number of output channels gradually increases, the parameters multiply, and operating efficiency drops.

To address this problem, we propose a lightweight lane detection model with an encoder and decoder structure; each step of the encoder and decoder contains a specified set of operations, namely sampling, standard convolution, batch normalization, and a ReLU activation function. The encoder is mainly responsible for reducing the input image to a specified size, encoding the information contained in the image, and extracting feature maps; the main task of the decoder is to restore the encoded information to the same size as the original input. The original U-Net structure is improved by reducing the number of output channels of each layer, which reduces the parameters of the lane detection model and yields a lightweight model. This enhances the portability of the lane line detection model, giving it broader applicability, so that it can be deployed on mobile development platforms with limited computing resources; on the premise of guaranteed accuracy, computation is faster and less memory is occupied.

The established lane detection model is trained to obtain accurate model parameters; after multiple iterations, the trained model is obtained and used for actual lane line detection. The trained lane line detection model is installed with a vehicle-mounted camera: the camera acquires a target road image set containing lane lines, and the image set is transmitted to the trained model for image processing to obtain lane line detection result images.
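For illustration only (the patent text contains no code), the following is a minimal PyTorch sketch of such a reduced-channel encoder-decoder. The exact per-layer channel counts are listed in Table 2 of Example 5, which is not reproduced in this text, so the `channels` tuple below is an assumption chosen to match the two values the description does state: the first encoder convolution (conv1_1) outputs 16 feature maps instead of 64, and the first decoder convolution (conv6_1) maps 192 feature maps to 64.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class LightUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=5,
                 channels=(16, 32, 48, 64, 128)):  # assumed; classic U-Net uses (64, 128, 256, 512, 1024)
        super().__init__()
        self.encoders = nn.ModuleList(
            conv_block(c_in, c_out)
            for c_in, c_out in zip((in_ch,) + channels[:-1], channels))
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        rev = channels[::-1]                        # deepest stage first
        self.decoders = nn.ModuleList(
            conv_block(deep + skip, skip)           # concat then conv, e.g. 128 + 64 = 192 -> 64
            for deep, skip in zip(rev[:-1], rev[1:]))
        self.head = nn.Conv2d(channels[0], num_classes, kernel_size=1)

    def forward(self, x):
        skips = []
        for enc in self.encoders[:-1]:              # contracting path
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.encoders[-1](x)                    # bottleneck (last encoder stage)
        for dec, skip in zip(self.decoders, reversed(skips)):
            x = dec(torch.cat([skip, self.up(x)], dim=1))
        return self.head(x)                         # 5-channel instance output

model = LightUNet()
out = model(torch.randn(1, 3, 128, 256))            # -> torch.Size([1, 5, 128, 256])
```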
Example 2
This embodiment is developed on the basis of the above embodiment. Specifically, the improvement of U-Net in step S1 further includes replacing at least one convolutional layer with a residual block, to alleviate the gradient vanishing or degradation problem and improve the accuracy of the proposed model. In the lane line detection model of the invention, the U-Net architecture is changed by reducing the number of channels of the convolutional layers; the changed architecture may suffer from degradation, vanishing gradients, or exploding gradients and lose the ability to extract effective feature maps, which would affect model performance. This problem is therefore solved by replacing a convolutional layer in the model with a residual block, which is a good mitigation: the skip connection in the residual block helps recover spatial information, supplementing information that may be lost during the training phase, alleviates the gradient vanishing or degradation caused by the reduced channel count, and improves the accuracy of the lane detection model. As shown in the table below, different convolutional layers were replaced with different numbers and types of residual blocks to test overall performance; replacing different positions with different numbers and types of residual blocks yields different test results. As a preferred embodiment, a non-bottleneck residual block replaces the second standard convolutional layer of the last encoder stage.
The experimental results of replacing different convolutional layers with different residual blocks to test overall performance are shown in Table 1:
Table 1: experimental results of replacing different convolutional layers with different residual blocks
In the table: d1 denotes applying residual blocks with different convolutional layers to the first down-sampling stage in our proposed network; d12 denotes applying residual blocks with different convolutional layers to the first and second downsampling stages simultaneously, similarly otherwise; LRNet1 represents a non-bottleneck module; LRNet2 represents a bottleneck module; LRNet3 represents a non-bottleneck-1D module; LRNet4 denotes the Res2Net module.
As can be seen from Table 1, overall performance differs with the residual module used. When a non-bottleneck module is embedded in the last down-sampling stage of the encoder part (D4 in Table 1), the lane line detection model achieves the best overall performance with a smaller parameter count. Therefore, as a preferred implementation, a residual block replaces the second standard convolutional layer of the last encoder stage, so that the model achieves the highest computational efficiency while improving accuracy.
In some preferred embodiments, FIG. 4 shows the different residual module structures, where: a is the non-bottleneck module; b the bottleneck module; c the Res2Net module; d the non-bottleneck-1D module. For the best performance of the lane detection model, the residual block is a non-bottleneck residual block. The second standard convolutional layer of the last encoder stage was replaced with each residual block module in turn, with results shown in FIG. 3, where: a is the ground truth of the lane lines; b uses the non-bottleneck module; c the bottleneck module; d the non-bottleneck-1D module; e the Res2Net module. In the resulting effect diagrams, the other residual block types produce detected lane lines with ghosting, discontinuities, and offsets, whereas with the non-bottleneck residual block the detected lane contours are clearer, match the real lane lines better, and blurred or occluded lane lines are detected distinctly.
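For illustration, a minimal sketch of the non-bottleneck residual block of FIG. 4(a), assuming it follows the classic two-layer ResNet basic-block form (two 3x3 convolutions with batch normalization and a skip connection); the figure itself is not reproduced here. Such a module could replace the second convolution of the last encoder stage in the sketch above.

```python
import torch.nn as nn

class NonBottleneckBlock(nn.Module):
    """Two-layer (non-bottleneck) residual block: two 3x3 convolutions with
    batch normalization, plus a skip connection (identity, or a 1x1
    projection when the channel count changes)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The skip path carries information past the convolutions, which is
        # what mitigates the gradient vanishing/degradation described above.
        return self.relu(self.body(x) + self.skip(x))
```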
Example 3
This embodiment is developed on the basis of the above embodiments. Specifically, the final output channel count of the lane line detection model is set to 5 to solve the lane line instance segmentation problem. As shown in FIG. 2, the original U-Net has 2 output channels; in actual lane line detection it cannot accurately detect 5 or more lane lines, or lane lines that are unclearly marked or occluded. Such detection results map to discontinuous lane lines that cannot be accurately matched to the actual lanes, a serious safety hazard for real driving. Therefore, the final output channel count of the lane line detection model in this application is set to 5, outputting multiple instance channels without changing the network structure. According to the segmentation values of the different lane lines, the model can extract all pixel-level lane instances of each lane line and output the different lane line classes, including the current lane lines and the adjacent left and right lane lines, locating them accurately. This realizes detection and recognition of different lane lines and avoids the influence of unmarked and severely occluded lane lines on autonomous driving.
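For illustration, a small sketch of how per-lane instance masks can be read out of the 5-channel output. The description fixes the channel count at 5 but does not spell out the channel ordering, so the background/lane assignment below is an assumption.

```python
import torch

def extract_lane_instances(logits):
    """Split the 5-channel model output into per-lane binary masks.

    Assumption: channel 0 is background and channels 1-4 are individual
    lane instances (e.g. the two ego lane lines and the two adjacent ones);
    the patent fixes the channel count at 5 but not the ordering.
    """
    labels = logits.argmax(dim=1)                   # (N, H, W), instance ids 0..4
    masks = {lane: labels == lane for lane in range(1, 5)}
    return labels, masks

# usage: labels, masks = extract_lane_instances(model(images))
```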
In this embodiment, the detection model of the invention is compared with state-of-the-art lane line detection algorithms on the TuSimple lane line dataset. In this comparison the original lane lines include severe occlusion, at least 5 lane lines, and unmarked actual lane lines. The experimental comparison is shown in FIG. 5, where: a is the input road picture; b the ground-truth lane lines; c the PINET (32x16) detection model; d the PINET (64x32) detection model; e the Res18-Qin detection model; f the Res34-Qin detection model; g the SCNN detection model; h the SegNet detection model; i the ENet-SAD detection model; j the SegNet_ConvLSTM detection model; k the LaneNet detection model; l the U-Net detection model; m the U-Net_ConvLSTM detection model; and n the lane line detection model of the invention. With 5 or more original lane lines and markings that are unclear or severely occluded, the lane lines were detected with the different models under the same conditions. The resulting images show that, for unclearly marked and severely occluded lane lines, the lane lines detected by the other models are blurred, discontinuous, or overlapping to varying degrees; in practical application such models cannot position the vehicle accurately and in time, and safety hazards easily arise. The lane line detection model of this application detects the actual lane line conditions more accurately than the other models; as can be seen from the figure, it also detects unmarked and severely occluded lane lines, solving the lane line instance segmentation problem and avoiding the safety hazards caused in practice by lane lines that are not detected clearly.
Example 4
This embodiment is developed on the basis of the above embodiments. Specifically, the detection model of the invention is compared with state-of-the-art lane detection algorithms on the unsupervised LLAMAS lane line dataset. Unlike the TuSimple dataset, the unsupervised LLAMAS dataset is a very challenging lane detection dataset with the following characteristics:
the unsupervised LLAMAS dataset is labeled automatically by creating high-definition maps for autonomous driving (including lidar-based lane markers), and contains varied weather and lighting conditions;
the number of positive pixels in each image is very small, about 2% of the total pixel count, which clearly poses a challenge to any lane detection model trained and tested on the dataset (the short sketch after this list shows how this ratio is computed);
some lane line positions are marked incorrectly, and the labeled lane lines are distributed sporadically: distant regions carry relatively few labels, possibly only a few pixels, while there may be more labeled pixels near the marked lanes;
the lane line types are varied, including continuous, discontinuous, single-line, double-line, and cross-overlapping lane lines; the marked lane lines are discontinuous, except for the lane lines at the boundary, which are marked as long, narrow strips.
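As referenced in the list above, a small sketch of the positive-pixel ratio computation. The roughly 2% figure comes from the dataset description; the 1276x717 image size is an assumption based on the published LLAMAS resolution, and the random mask merely stands in for a real label image.

```python
import numpy as np

def positive_pixel_ratio(label_mask):
    """Fraction of lane (non-background) pixels in a label image.
    label_mask: (H, W) integer array, 0 = background, >0 = lane instance."""
    return float((label_mask > 0).sum()) / label_mask.size

# A synthetic stand-in for a real label with roughly 2% positive pixels.
mask = (np.random.rand(717, 1276) < 0.02).astype(np.int64)
print(f"positive pixels: {positive_pixel_ratio(mask):.2%}")
```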
The comparison results are shown in FIG. 6, where: a is the input road picture; b the ground-truth lane lines; c the PINET (32x16) detection model; d the PINET (64x32) detection model; e the Res18-Qin detection model; f the Res34-Qin detection model; g the SCNN detection model; h the LaneNet detection model; i the ENet detection model; j the ENet-SAD detection model; k the U-Net detection model; l the lane line detection model of the invention. For the ego lane in the image, LaneNet, SCNN, ENet, and our proposed model can detect most of the pixels on its lane lines. The pixel-level segmentation models, including PINET (32x16), PINET (64x32), Res18-Qin, Res34-Qin, ENet-SAD, and U-Net, detect only a small fraction of lane lines. Careful observation shows that the lane lines predicted by SCNN and LaneNet are wider and shorter than the ground truth, while the predictions of our proposed model match the ground-truth lanes. Lane lines at a distance and at the boundaries of the driving image are labeled with few pixels, which greatly challenges the prediction models: Res18-Qin, Res34-Qin, and U-Net cannot predict these lane lines accurately, and PINET (64x32), PINET (32x16), SCNN, LaneNet, and ENet-SAD predict only a few boundary points and a few distant points. The network we propose, however, predicts most boundary points and distant points, effectively avoiding the safety problems caused by unclear lane lines.
Example 5
This embodiment is developed on the basis of the above embodiments and discloses a specific parameter setting of the lane line detection model; the reduced channel counts of each layer are shown in Table 2:
table 2: the lane detection model parameter setting table
Note: the value in () in the table is the original U-Net output channel number.
As shown in Table 2, the input picture size is 128×256, the convolution kernel size is 3×3, and the stride is 1. The lane line detection model of this application reduces the number of output channels of each convolutional layer to reduce parameters and improve operating efficiency. At the encoder stage, the input image of the first layer (conv1_1 in Table 2) has 3 channels and the corresponding output is 16 feature maps instead of 64. With the output channels likewise reduced at the decoder stage, the input of the sixteenth layer (conv6_1 in Table 2) is 192 feature maps instead of 1024, and the corresponding output is 64 feature maps instead of 512. In this embodiment, the second standard convolutional layer of the last encoder stage, i.e. the fourteenth layer (conv5_2 in Table 2), is replaced with a residual block, preferably a non-bottleneck residual block, to alleviate the gradient vanishing or degradation problem and improve the accuracy of the proposed model. In addition, by setting the number of final output channels, the proposed model solves the lane instance segmentation problem, locates lane lines accurately, and realizes detection and recognition of different lane lines. As shown in the last row of the table, with 5 final output channels the proposed model outputs multiple instance channels without changing the network structure, realizing instance segmentation and accurate localization of lane lines and avoiding the influence of unmarked and severely occluded lane lines on autonomous driving.
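To make the effect of the channel reduction concrete, the snippet below reuses the LightUNet sketch from Example 1 to compare rough parameter counts between the classic U-Net channel widths and the assumed reduced widths. Only conv1_1 (3 -> 16 instead of 3 -> 64) and conv6_1 (192 -> 64 instead of 1024 -> 512) are quoted from Table 2; the intermediate channel counts remain an assumption.

```python
def count_params(model):
    return sum(p.numel() for p in model.parameters())

# Classic U-Net widths vs. the assumed reduced widths. Note the sketch
# up-samples without halving channels, so its "original" decoder sees
# 1024 + 512 = 1536 input channels where true U-Net sees 1024.
original = LightUNet(channels=(64, 128, 256, 512, 1024))
reduced = LightUNet(channels=(16, 32, 48, 64, 128))
print(f"original: {count_params(original) / 1e6:.1f} M parameters")
print(f"reduced:  {count_params(reduced) / 1e6:.3f} M parameters")
```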
Example 6
This embodiment is developed on the basis of the above embodiments. As a preferred implementation, it discloses that the input image size has a certain influence on the accuracy and computational efficiency of the lane line detection model: for the lane line detection model in this application, a smaller input size achieves better performance, higher accuracy, and faster computation, further improving the lightweight effect of the model while preserving accuracy. The detection results showing the influence of different input picture sizes on the overall performance of the model are given in Table 3:
table 3: performance comparison of different input sizes
As can be seen from Table 3, the lane line detection model differs significantly in operating efficiency and accuracy between input sizes of 128×256 and 352×640; with the smaller input picture size, overall performance is better and computation faster. A smaller input size is therefore more suitable for the model to achieve better performance and a lightweight result.
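For illustration, a rough way to reproduce such a comparison with the LightUNet sketch from Example 1. The actual figures of Table 3 are not reproduced in this text, so the printed latencies are simply whatever the local machine measures.

```python
import time
import torch

def mean_latency(model, size, n_runs=30):
    """Average CPU forward-pass time for a 1x3xHxW input, in seconds."""
    model.eval()
    x = torch.randn(1, 3, *size)
    with torch.no_grad():
        model(x)                                   # warm-up
        t0 = time.perf_counter()
        for _ in range(n_runs):
            model(x)
    return (time.perf_counter() - t0) / n_runs

model = LightUNet()
for size in [(128, 256), (352, 640)]:              # the two sizes compared in Table 3
    print(size, f"{mean_latency(model, size) * 1000:.1f} ms")
```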
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the above embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.
Claims (5)
1. A lane line detection method based on U-Net improvement, characterized by comprising the following steps:
s1: establishing a lane line detection U-Net improved model; the improvement of U-Net comprises:
reducing the number of output channels of each layer in the U-Net model, and modifying the original U-Net architecture;
s2: and acquiring a target road image set with a lane line, and transmitting the target road image set to the trained lane line detection model for graphic processing to obtain a lane line detection result image.
2. The lane line detection method based on the U-Net improvement according to claim 1, wherein the improvement of U-Net in step S1 further includes: replacing at least one convolutional layer with a residual block, to alleviate the gradient vanishing or degradation problem and improve the accuracy of the proposed model.
3. The lane line detection method based on the U-Net improvement according to claim 2, wherein the number of residual blocks is 1, and the residual block replaces the second standard convolutional layer of the last encoder stage.
4. The lane line detection method based on the U-Net improvement according to claim 2, wherein: the residual block is a non-bottleneck residual block.
5. The lane line detection method based on the U-Net improvement according to claim 1, wherein the final output channel count of the lane line detection model is set to 5, to solve the lane line instance segmentation problem.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111075470.0A CN113780189A (en) | 2021-09-14 | 2021-09-14 | Lane line detection method based on U-Net improvement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111075470.0A CN113780189A (en) | 2021-09-14 | 2021-09-14 | Lane line detection method based on U-Net improvement |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113780189A true CN113780189A (en) | 2021-12-10 |
Family
ID=78843779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111075470.0A Pending CN113780189A (en) | 2021-09-14 | 2021-09-14 | Lane line detection method based on U-Net improvement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113780189A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109840471A (en) * | 2018-12-14 | 2019-06-04 | 天津大学 | A kind of connecting way dividing method based on improvement Unet network model |
CN112149535A (en) * | 2020-09-11 | 2020-12-29 | 华侨大学 | Lane line detection method and device combining SegNet and U-Net |
CN112183258A (en) * | 2020-09-16 | 2021-01-05 | 太原理工大学 | Remote sensing image road segmentation method based on context information and attention mechanism |
CN113158810A (en) * | 2021-03-24 | 2021-07-23 | 浙江工业大学 | ENet improvement-based light-weight real-time lane line segmentation method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114821510A (en) * | 2022-05-26 | 2022-07-29 | 重庆长安汽车股份有限公司 | Lane line detection method and device based on improved U-Net network |
CN114821510B (en) * | 2022-05-26 | 2024-06-14 | 重庆长安汽车股份有限公司 | Lane line detection method and device based on improved U-Net network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20211210 |