CN110472508B - Lane line distance measurement method based on deep learning and binocular vision - Google Patents
Lane line distance measurement method based on deep learning and binocular vision
- Publication number
- CN110472508B CN110472508B CN201910636651.2A CN201910636651A CN110472508B CN 110472508 B CN110472508 B CN 110472508B CN 201910636651 A CN201910636651 A CN 201910636651A CN 110472508 B CN110472508 B CN 110472508B
- Authority
- CN
- China
- Prior art keywords
- lane line
- lane
- network
- binocular vision
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention relates to a lane line ranging method based on deep learning and binocular vision, the system comprising: an image acquisition module, a lane line detection module based on a convolutional neural network, and a ranging module based on binocular vision. The detection module accurately identifies the lane lines on both sides of the vehicle: using a VGG network framework together with a fully convolutional network (FCN), the three fully connected layers at the end of the VGG network are replaced with deconvolution layers; these upsample the feature maps of the convolutional layers so that a prediction is produced for every pixel, recasting the lane line prediction problem at the image pixel level, and the convolutional neural network outputs a binary map and an instance map of the predicted lane lines. The binocular-vision-based ranging module applies the SGBM binocular stereo matching algorithm at the detected lane line positions and performs depth calculation on the points of the lane lines on both sides nearest the camera, realizing real-time ranging.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a lane line detection and ranging method based on deep learning.
Background
Deep learning is a branch of machine learning that builds neural networks emulating the analytical learning of the human brain, interpreting data through brain-inspired mechanisms.
Many vehicles today offer driver-assistance functions such as lane keeping and lane departure warning. These functions keep the vehicle positioned within its lane, which is critical for trajectory planning and decision making during potential lane departures or in automated driving. Traditional lane detection methods rely on hand-crafted feature extraction and heuristics, usually followed by post-processing techniques; this makes them computationally expensive and hard to extend to varying road scenes. Lane line ranging, in turn, depends on the lane detection result. Lidar-based ranging is expensive, and monocular ranging is subject to many constraints, whereas binocular vision measures a short-range target such as a lane line with high accuracy at low cost. Taken together, developing a lane line detection and ranging method based on deep learning and binocular vision is well worthwhile and holds great potential and value for the intelligent driving field.
Disclosure of Invention
The invention aims to provide a lane line ranging method with high accuracy and low cost. The technical solution is as follows:
A lane line ranging method based on deep learning and binocular vision comprises an image acquisition module, a lane line detection module based on a convolutional neural network and a ranging module based on binocular vision, wherein:
the image acquisition module is a binocular camera used for acquiring real-time images while the vehicle is driving;
the lane line detection module based on the convolutional neural network accurately identifies the lane lines on both sides of the vehicle; using a VGG network framework together with a fully convolutional network (FCN), the three fully connected layers at the end of the VGG network are replaced with deconvolution layers, which upsample the feature maps of the convolutional layers so that a prediction is produced for every pixel, recasting the lane line prediction problem at the image pixel level; the convolutional neural network outputs a binary map and an instance map of the predicted lane lines, which are displayed on the image after post-processing and clustering;
the ranging module based on binocular vision applies the SGBM binocular stereo matching algorithm at the detected lane line positions and performs depth calculation on the points of the lane lines on both sides nearest the camera, realizing real-time ranging;
the depth and distance of every point in the view are thereby available, so the distance to the point of the lane line nearest the camera follows directly from the lane line detection result;
finally, from actual parameters such as the vehicle width and the camera mounting position, a distance of practical value is obtained through physical calculation.
The lane line detection and ranging method based on deep learning and binocular vision overcomes the poor robustness and poor real-time performance of traditional lane detection methods. With the addition of the binocular-vision-based ranging module, lane line detection and ranging are realized in real time and at low cost. Through the design of a high-precision neural network and continuous training on large-scale samples covering many scenes, lane line detection accuracy exceeds 90%; and since binocular vision measures short distances with high precision, high-accuracy lane line detection and ranging are achieved, which is of great help to the current intelligent driving field.
Drawings
FIG. 1 Structure of the vgg16+FCN convolutional neural network
FIG. 2 Lane line detection and ranging results based on deep learning and binocular vision
Detailed Description
The invention comprises: an image acquisition module, a lane line detection module based on a convolutional neural network, and a ranging module based on binocular vision. The image acquisition module is a binocular camera that acquires real-time images while the vehicle is driving; either the left-eye or the right-eye image is chosen for display. The detection module accurately identifies the lane lines on both sides of the vehicle: using a VGG network framework together with a fully convolutional network (FCN), the three fully connected layers at the end of the VGG network are replaced with deconvolution layers, which upsample the feature maps of the convolutional layers so that a prediction can be produced for every pixel; lane line prediction is thereby recast as a pixel-level problem, and the network outputs a binary map and an instance map of the predicted lane lines, which are displayed on the image after post-processing, clustering and related steps. The ranging module uses the classical SGBM binocular stereo matching algorithm at the detected lane line positions, extracts the lane lines on both sides, and performs depth calculation on the point nearest the camera, realizing real-time ranging.
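The VGG-to-FCN conversion described above can be sketched in a few lines of PyTorch. This is an illustrative model, not the patent's exact network: the encoder is a truncated VGG-style stack and the channel counts are placeholders, but it shows the key idea of replacing the fully connected layers with (de)convolutions so that the output is a per-pixel prediction.

```python
import torch
import torch.nn as nn

class LaneFCN(nn.Module):
    """Sketch of the described idea: a VGG-style encoder whose final
    fully connected layers are replaced by deconvolutions, so the
    network emits a prediction for every pixel (FCN-style).
    Channel counts are illustrative, not the real VGG16 ones."""
    def __init__(self, num_classes=2):
        super().__init__()
        # Truncated VGG-style encoder: two conv blocks, each halving H and W.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/4 resolution
        )
        # In place of VGG's three FC layers: a 1x1 conv "fc" plus transposed
        # convolutions that upsample back to full resolution.
        self.head = nn.Sequential(
            nn.Conv2d(128, 64, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.head(self.encoder(x))  # (N, num_classes, H, W)

# One forward pass on a dummy frame: per-pixel lane/background scores.
net = LaneFCN(num_classes=2)
scores = net(torch.randn(1, 3, 64, 128))
binary_map = scores.argmax(dim=1)  # 0 = background, 1 = lane pixel
```

Taking the argmax over the class channel yields the binary lane map; a second head trained with an embedding loss would yield the instance map, which is omitted here for brevity.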
The first step: calibrate the binocular camera to obtain the intrinsic and extrinsic parameters of the vision acquisition module.
The second step: build training set samples that match actual conditions. From the publicly available TuSimple and CULane datasets, nearly 15,000 pictures are screened out, all 1280×720 images without the vehicle hood in view. The VGG+FCN network is trained on samples from multiple scenes such as daytime, night, crowded traffic and sparse traffic; adopting a large dataset yields better robustness and reduces overfitting and similar problems.
The third step: the image acquisition module acquires real-time images while the vehicle is driving, and the left and right views are saved frame by frame for the subsequent detection and ranging. During acquisition the vehicle hood should be kept out of the frame as far as possible, to ensure the accuracy of detection and ranging.
The fourth step: select either the left or the right view and run each frame through the trained lane line detection module to obtain the lane line detection results on both sides of the vehicle.
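The patent leaves the "post-processing and clustering" of the detection output unspecified; one common realization (assumed here, not taken from the patent) is to cluster the predicted lane pixels into individual lines with DBSCAN from scikit-learn:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_lanes(binary_map, eps=3.0, min_pts=5):
    """Group lane-marked pixels into separate lane lines.
    Returns one (x, y) point array per detected line; DBSCAN noise
    points (label -1) are dropped. eps/min_pts are illustrative."""
    ys, xs = np.nonzero(binary_map)
    pts = np.column_stack([xs, ys]).astype(float)
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(pts)
    return [pts[labels == k] for k in sorted(set(labels)) if k != -1]

# Toy binary map with two vertical lane stripes -> two clusters.
bm = np.zeros((40, 40), np.uint8)
bm[:, 5] = 1
bm[:, 30] = 1
lanes = split_lanes(bm)
print(len(lanes))  # → 2
```

Each cluster can then be fitted with a low-order polynomial before being drawn back onto the frame for display.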
The fifth step: using the SGBM binocular stereo matching algorithm of the binocular vision ranging module, the depth and distance of every point in the view are obtained; the distance to the point of the lane line nearest the camera then follows from the lane line detection result.
The sixth step: from actual parameters such as the vehicle width and the camera mounting position, a distance of practical value is obtained through physical calculation.
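As an illustration of this physical calculation (the vehicle parameters below are assumed, not given in the patent): subtracting half the vehicle width and the camera's lateral mounting offset from the measured camera-to-line lateral distance yields the clearance between the vehicle's side and the lane line.

```python
vehicle_width_m = 1.8    # assumed vehicle width
camera_offset_m = 0.0    # assumed lateral offset of the camera from the centre line

def side_clearance(camera_to_line_m):
    """Lateral room between the vehicle body and the lane line, given the
    camera-to-line lateral distance from the ranging module."""
    return camera_to_line_m - vehicle_width_m / 2.0 - camera_offset_m

print(round(side_clearance(2.5), 2))  # → 1.6 metres of clearance
```

This clearance, rather than the raw camera distance, is the quantity with application value for lane keeping and departure warning.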
Claims (1)
1. A lane line ranging method based on deep learning and binocular vision, comprising an image acquisition module, a lane line detection module based on a convolutional neural network and a ranging module based on binocular vision, wherein,
the image acquisition module is a binocular camera used for acquiring real-time images while the vehicle is driving;
the lane line detection module based on the convolutional neural network accurately identifies the lane lines on both sides of the vehicle; using a VGG network framework together with a fully convolutional network (FCN), the three fully connected layers at the end of the VGG network are replaced with deconvolution layers, which upsample the feature maps of the convolutional layers so that a prediction is produced for every pixel, recasting the lane line prediction problem at the image pixel level; the convolutional neural network outputs a binary map and an instance map of the predicted lane lines, which are displayed on the image after post-processing and clustering;
the ranging module based on binocular vision uses the SGBM binocular stereo matching algorithm and, from the obtained lane line positions, performs depth calculation on the points of the lane lines on both sides nearest the camera, realizing real-time ranging;
the depth and distance of every point in the view are obtained, i.e. the distance to the point of the lane line nearest the camera is obtained from the lane line detection result;
and a distance of practical value is obtained by physical calculation from the actual parameters of the vehicle width and the camera mounting position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910636651.2A CN110472508B (en) | 2019-07-15 | 2019-07-15 | Lane line distance measurement method based on deep learning and binocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110472508A CN110472508A (en) | 2019-11-19 |
CN110472508B true CN110472508B (en) | 2023-04-28 |
Family
ID=68508674
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910636651.2A Active CN110472508B (en) | 2019-07-15 | 2019-07-15 | Lane line distance measurement method based on deep learning and binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110472508B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111309032A (en) * | 2020-04-08 | 2020-06-19 | 江苏盛海智能科技有限公司 | Autonomous obstacle avoidance method and control end of unmanned vehicle |
CN112613392A (en) * | 2020-12-18 | 2021-04-06 | 北京新能源汽车技术创新中心有限公司 | Lane line detection method, device and system based on semantic segmentation and storage medium |
CN115019278B (en) * | 2022-07-13 | 2023-04-07 | 北京百度网讯科技有限公司 | Lane line fitting method and device, electronic equipment and medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018058356A1 (en) * | 2016-09-28 | 2018-04-05 | 驭势科技(北京)有限公司 | Method and system for vehicle anti-collision pre-warning based on binocular stereo vision |
CN109084724A (en) * | 2018-07-06 | 2018-12-25 | 西安理工大学 | A kind of deep learning barrier distance measuring method based on binocular vision |
CN109635744A (en) * | 2018-12-13 | 2019-04-16 | 合肥工业大学 | A kind of method for detecting lane lines based on depth segmentation network |
Non-Patent Citations (2)
Title |
---|
Zhang Xinyu; Gao Hongbo; Zhao Jianhui; Zhou Mo. A survey of autonomous driving technology based on deep learning. Journal of Tsinghua University (Science and Technology). 2018, Vol. 58, No. 4, pp. 438-444. *
Lin Fuchun; Zhang Rongfen; Liu Yuhong. Design of an intelligent driver-assistance system based on deep learning. Journal of Guizhou University (Natural Sciences). 2018, Vol. 35, No. 1, pp. 73-77. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||