CN112364800A - Automatic driving deviation processing method based on artificial intelligence - Google Patents


Info

Publication number
CN112364800A
CN112364800A (application number CN202011298591.7A)
Authority
CN
China
Prior art keywords
vehicle
driving
module
lane
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011298591.7A
Other languages
Chinese (zh)
Other versions
CN112364800B (en)
Inventor
杨忠
李世华
杨俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinling Institute of Technology
Priority to CN202011298591.7A
Publication of CN112364800A
Application granted
Publication of CN112364800B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

An automatic driving deviation processing method based on artificial intelligence comprises the following steps: step 1, obtain vehicle driving data; step 2, simulate a severe driving environment; step 3, identify lane lines and other non-road objects; step 4, calibrate vehicle yaw in real time; step 5, match the vehicle speed to a safe distance; step 6, trigger an automatic-driving interruption and handle abnormal driving. The invention improves the YOLOv3 algorithm to detect objects in driving images and proposes a yaw-correction model for automatic driving, so as to reduce as far as possible the influence of environmental noise on the data acquired by the sensors.

Description

Automatic driving deviation processing method based on artificial intelligence
Technical Field
The invention relates to the field of artificial intelligence, in particular to an automatic driving deviation processing method based on artificial intelligence.
Background
Intelligent driving refers to machines that assist a person in driving and, in special cases, replace the human driver entirely. It is a key development area of intelligent transportation systems in many countries and is still under active exploration and experimentation, with substantial implications for economic and scientific development and overall national strength. Unmanned driving, as the core of intelligent driving, is the future direction of the automobile industry and is of great significance. An unmanned vehicle carries advanced sensing devices to perceive and assess the surroundings of the moving automobile, obtaining the vehicle's state and environmental information, automatically planning a driving route, and controlling the vehicle to reach its destination.
An unmanned automobile must correctly identify roads from visual information, plan the driving route, and monitor driving safety; lane-line detection and recognition are the core of the whole system. The image information acquired by the vision system requires a series of processing steps to guarantee that a standard, fast and safe driving path is planned. Research on core technologies such as lane-line detection and recognition is therefore of great significance.
Disclosure of Invention
To solve these problems, the invention provides an automatic driving deviation processing method based on artificial intelligence. To improve the accuracy of object recognition during driving, an improved YOLOv3 algorithm is proposed; at the same time, the yaw and abnormal-handling behaviour of intelligent driving is modelled. The method comprises the following specific steps:
step 1, obtain vehicle driving data, including driving images, altitude, vehicle speed, longitude, latitude, heading and similar data;
step 2, simulating a severe driving environment, superposing white Gaussian noise on the acquired data to increase the stability and robustness of an automatic driving model, and controlling the range of a signal-to-noise ratio to be 30-40 dB;
step 3, identify lane lines and other non-road objects: the recognition targets in the driving image are divided into lane lines and other non-road objects, which are detected by the improved YOLOv3 models;
step 4, calibrating vehicle yaw in real time, detecting and tracking the lane line through the steps 1 to 3, recording the center of the image of the lane line as the origin of coordinates, establishing a driving yaw angle model, and calibrating a driving route;
step 5, match the vehicle speed and the safe distance: the safe distance of the vehicle is matched according to the vehicle speed monitored in real time and the detection results of YOLOv3;
step 6, start the automatic-driving interruption: when an abnormality occurs within the safe distance, the system starts an interruption-handling mechanism, broadcasts early-warning information to the driver, and the log module records the abnormal condition of the vehicle-mounted terminal.
Further, the process of simulating the severe driving environment in step 2 can be expressed as:
x_s = x_n + x (1)
where x is the original vehicle driving data, x_n is the Gaussian white noise data, and x_s represents the resulting data collected in the simulated noisy environment.
The signal-to-noise ratio is defined as:
SNR = 10·lg(P_s / P_n) (2)
where P_s represents the power of the signal and P_n the power of the noise.
Further, the process of identifying lane lines and other non-road objects in step 3 may be represented as:
Two YOLOv3 models are established to detect lane lines and other non-road objects respectively; each model comprises three parts: a trunk module, a feature-fusion module and a prediction module. To reduce the regression difficulty of the detection boxes, the cluster centres on the training set are obtained by K-means, the anchor boxes of the original YOLOv3 network are rearranged, and the following distance measure is used:
d=1-IOU(b,a) (3)
where b and a denote a label box and a cluster-centre box respectively, and d measures their overlap: the smaller d is, the higher the overlap between the label box and the cluster-centre box. The cluster centres are used to set all the YOLO anchor boxes: the YOLO layer corresponding to the high-resolution feature map uses the 2 smaller anchor boxes, and the YOLO layer corresponding to the low-resolution feature map uses the 3 larger anchor boxes.
A multi-receptive-field mechanism is added to the trunk module to learn richer features and improve its learning capability: CBLP modules with 1×1 and 5×5 convolution kernels are connected in parallel with the original CBLP module to obtain features at different receptive fields. The mappings of the CBLP module and of the multi-receptive-field module are expressed as:
x_CBLP = H(x_i) (4)
x_multi = H_1(x_i) + H_3(x_i) + H_5(x_i) (5)
where x_CBLP and x_multi denote the outputs of the CBLP module and of the multi-receptive-field module respectively; H_1(·), H_3(·) and H_5(·) denote the mappings with convolution kernel sizes 1×1, 3×3 and 5×5; and x_i denotes the input feature map. The YOLOv3 networks are trained on the training sample set, yielding two improved YOLOv3 networks that can detect lane lines and other non-road objects.
Further, the process of calibrating the driving route of the vehicle in step 4 can be expressed as:
step 4.1, establishing a lane line model, recording the center of the lane line image detected in the step 3 as a coordinate origin, taking a horizontal axis as an x axis and taking a vertical axis as a y axis, establishing the lane line model, and simultaneously expressing a left lane line and a right lane line in the driving image in a coordinate axis by using two straight line functions:
y_1 = k_1·x_1 + b_1,  y_2 = k_2·x_2 + b_2 (6)
where (x_1, y_1) and (x_2, y_2) are coordinates of points on the left and right lane lines, k_1 and k_2 are the slopes of the left and right lane-line models, and b_1 and b_2 are the intercepts of the left and right lane-line models, respectively.
Step 4.2, calculate the driving deviation angle. Taking the y axis of the coordinate system of step 4.1 as the vehicle's driving direction, compute the angle between the driving direction and the bisector of the angle formed by the left and right lane lines; the slope k_3 of this bisector is:
k_3 = tan((arctan k_1 + arctan k_2) / 2) (7)
Applying the inverse trigonometric function to the bisector slope gives the deviation angle θ between the bisector and the vehicle's driving direction. Two thresholds θ_1 and θ_2 are set: when the deviation angle is smaller than θ_1, the vehicle is drifting left and must be corrected to the right; when the deviation angle is larger than θ_2, the vehicle is drifting right and must be corrected to the left;
and 4.3, after the yaw angle is corrected, continuing to execute the steps 4.1 and 4.2 until the vehicle reaches the destination or an abnormal condition occurs.
Further, the process of matching the safe distance of the vehicle in step 5 may be expressed as:
While the vehicle is running, the object that the vehicle would collide with at its current speed is taken as the safe-distance reference. When the distance to the reference object exceeds 1,000 metres, the speed upper limit is set to 100 km/h; when it exceeds 600 metres, the upper limit is set to 60 km/h; the general formula is:
v≤d/10 (8)
where v is the maximum speed of the vehicle and d is the distance between the vehicle and the object.
The artificial-intelligence-based automatic driving deviation processing method of the invention has the following beneficial technical effects:
1. the invention simulates the severe environment encountered while driving, realises the automatic driving function under noisy-environment interference, and enhances the stability, reliability and robustness of that function;
2. to reduce the regression difficulty of the detection boxes, the anchor boxes of the YOLOv3 model are adjusted using the cluster centres obtained by K-means training;
3. a multi-receptive-field mechanism is added to the trunk module of the YOLOv3 model to learn richer features, improving the trunk module's learning capability;
4. the invention provides an important technical means for artificial-intelligence driving.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a model diagram of the driving deviation angle according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the invention provides an automatic driving deviation processing method based on artificial intelligence, and aims to realize artificial intelligence driving technology for replacing artificial driving with artificial intelligence. FIG. 1 is a flow chart of the present invention. The steps of the present invention will be described in detail with reference to the flow chart.
Step 1, obtain vehicle driving data, including driving images, altitude, vehicle speed, longitude, latitude, heading and similar data;
step 2, simulating a severe driving environment, superposing white Gaussian noise on the acquired data to increase the stability and robustness of an automatic driving model, and controlling the range of a signal-to-noise ratio to be 30-40 dB;
the process of simulating the harsh environment of the traveling crane in step 2 can be represented as:
xs=xn+x (1)
wherein x is the original vehicle driving data, xnIs Gaussian white noise data, xsRepresenting the resulting data collected in a simulated noisy environment.
The signal-to-noise ratio is defined as follows:
Figure BDA0002786116870000041
in the formula, PsRepresenting the power of the signal, PnRepresenting the power of the noise.
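The noise superposition of equations (1) and (2) can be sketched in Python. This is a minimal illustration, not the patent's implementation: the helper name `add_noise_at_snr` and the sine wave standing in for real driving data are assumptions.

```python
import numpy as np

def add_noise_at_snr(x, snr_db, rng=None):
    """Superimpose Gaussian white noise on signal x at a target SNR (dB),
    per x_s = x + x_n (Eq. 1) and SNR = 10*lg(P_s / P_n) (Eq. 2)."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(x ** 2)                     # P_s
    p_noise = p_signal / (10 ** (snr_db / 10.0))   # P_n solved from Eq. 2
    x_n = rng.normal(0.0, np.sqrt(p_noise), size=x.shape)
    return x + x_n                                 # x_s

# hypothetical stand-in for one channel of driving data; the patent keeps
# the SNR in the 30-40 dB range, so we pick 35 dB
x = np.sin(np.linspace(0.0, 10.0, 1000))
x_s = add_noise_at_snr(x, snr_db=35.0, rng=np.random.default_rng(0))
```

In practice each sensor channel (speed, altitude, image intensities, …) would be perturbed the same way, with the SNR drawn from the 30-40 dB window.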
Step 3, identify lane lines and other non-road objects: the recognition targets in the driving image are divided into lane lines and other non-road objects, which are detected by the improved YOLOv3 models;
the process of identifying lane lines and other non-road objects in step 3 may be represented as:
A driving-image training data set is established from the data acquired in step 1, and two YOLOv3 models are established to detect lane lines and other non-road objects respectively; each model comprises three parts: a trunk module, a feature-fusion module and a prediction module. To reduce the regression difficulty of the detection boxes, the cluster centres on the training set are obtained by K-means, the anchor boxes of the original YOLOv3 network are rearranged, and the following distance measure is used:
d=1-IOU(b,a) (3)
where b and a denote a label box of the driving-image training data and a cluster-centre box respectively, and d measures their overlap: the smaller d is, the higher the overlap between the two boxes. The cluster centres are used to set all the YOLO anchor boxes: the YOLO layer corresponding to the high-resolution feature map uses the 2 smaller anchor boxes, and the YOLO layer corresponding to the low-resolution feature map uses the 3 larger anchor boxes.
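The anchor rearrangement with the distance d = 1 − IOU(b, a) of equation (3) can be sketched as follows. The function names are our own, and the (width, height) label boxes are hypothetical examples, not data from the patent; k = 5 matches the 2-small + 3-large split described above.

```python
import numpy as np

def iou_wh(box, centers):
    """IoU between one (w, h) box and an array of (w, h) cluster centres,
    with all boxes anchored at a common origin (standard anchor clustering)."""
    inter = np.minimum(box[0], centers[:, 0]) * np.minimum(box[1], centers[:, 1])
    union = box[0] * box[1] + centers[:, 0] * centers[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """K-means over label-box sizes using d = 1 - IoU(b, a)  (Eq. 3)."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        d = np.stack([1.0 - iou_wh(b, centers) for b in boxes])   # (n, k)
        assign = d.argmin(axis=1)                                 # nearest centre
        new = np.stack([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]  # sorted by area

# hypothetical (w, h) label boxes; the 2 smallest anchors would go to the
# high-resolution YOLO layer, the 3 largest to the low-resolution layer
boxes = np.array([[10, 14], [12, 16], [33, 23], [30, 61], [62, 45],
                  [59, 119], [116, 90], [156, 198], [373, 326]], dtype=float)
anchors = kmeans_anchors(boxes, k=5)
```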
A multi-receptive-field mechanism is added to the trunk module to learn richer features and improve its learning capability: CBLP modules with 1×1 and 5×5 convolution kernels are connected in parallel with the original CBLP module to obtain features at different receptive fields. The mappings of the CBLP module and of the multi-receptive-field module are expressed as:
x_CBLP = H(x_i) (4)
x_multi = H_1(x_i) + H_3(x_i) + H_5(x_i) (5)
where x_CBLP and x_multi denote the outputs of the CBLP module and of the multi-receptive-field module respectively; H_1(·), H_3(·) and H_5(·) denote the mappings with convolution kernel sizes 1×1, 3×3 and 5×5; and x_i denotes the input feature map. The YOLOv3 networks are trained on the training sample set, yielding two improved YOLOv3 networks that can detect lane lines and other non-road objects.
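The parallel-branch sum of equation (5) can be illustrated with a single-channel sketch. The naive sliding-window filter below stands in for the full CBLP pipeline, whose internals (batch normalisation, activation, pooling) the text does not spell out; kernels and input are random placeholders.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive single-channel 'same' cross-correlation (what deep-learning
    frameworks call convolution) -- a stand-in for one CBLP branch."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def multi_receptive_field(x, k1, k3, k5):
    """x_multi = H1(x) + H3(x) + H5(x)  (Eq. 5): three parallel branches
    with 1x1, 3x3 and 5x5 kernels, summed element-wise."""
    return conv2d_same(x, k1) + conv2d_same(x, k3) + conv2d_same(x, k5)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))          # placeholder feature map x_i
y = multi_receptive_field(x,
                          rng.standard_normal((1, 1)),
                          rng.standard_normal((3, 3)),
                          rng.standard_normal((5, 5)))
```

Because all three branches use 'same' padding, their outputs share the input's spatial size and can be summed directly, which is what makes the parallel connection in equation (5) well defined.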
Step 4, calibrating vehicle yaw in real time, detecting and tracking the lane line through the steps 1 to 3, recording the center of the image of the lane line as the origin of coordinates, establishing a driving yaw angle model, and calibrating a driving route;
the process of calibrating the driving route of the vehicle in step 4 can be expressed as:
Step 4.1, establish the lane-line model. Record the centre of the lane-line image detected by the YOLOv3 network in step 3 as the origin of coordinates, with the horizontal axis as the x axis and the vertical axis as the y axis. As shown in fig. 2, L1 is the left lane line, L2 is the right lane line, and L3 is the bisector of the angle between the left and right lane lines. In this lane-line coordinate system the left and right lane lines of the driving image are expressed simultaneously by two linear functions:
y_1 = k_1·x_1 + b_1,  y_2 = k_2·x_2 + b_2 (6)
where (x_1, y_1) and (x_2, y_2) are coordinates of points on the left and right lane lines, k_1 and k_2 are the slopes of the left and right lane-line models, and b_1 and b_2 are the intercepts of the left and right lane-line models, respectively.
Step 4.2, calculate the driving deviation angle. Taking the y axis of the coordinate system of step 4.1 as the vehicle's driving direction, compute the angle between the driving direction and the bisector of the angle formed by the left and right lane lines; the slope k_3 of this bisector is:
k_3 = tan((arctan k_1 + arctan k_2) / 2) (7)
Applying the inverse trigonometric function to the bisector slope gives the deviation angle θ between the bisector and the vehicle's driving direction. Two thresholds θ_1 and θ_2 are set: when the deviation angle is smaller than θ_1, the vehicle is drifting left and must be corrected to the right; when the deviation angle is larger than θ_2, the vehicle is drifting right and must be corrected to the left;
and 4.3, after the yaw angle is corrected, continuing to execute the steps 4.1 and 4.2 until the vehicle reaches the destination or an abnormal condition occurs.
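Steps 4.1 and 4.2 can be sketched as follows. Equation (7) appears only as an image in the source, so the bisector is reconstructed here as the mean of the two lane-line direction angles; the threshold values and the sign convention (negative θ means drifting left) are illustrative assumptions, not the patent's.

```python
import math

def bisector_slope_and_deviation(k1, k2):
    """Slope k3 of the bisector of the left/right lane lines (slopes k1, k2)
    and the deviation angle theta (degrees) between the bisector and the
    driving direction, taken as the image y axis."""
    a1 = math.atan(k1) % math.pi        # direction angle of left lane line, in [0, pi)
    a2 = math.atan(k2) % math.pi        # direction angle of right lane line
    phi = 0.5 * (a1 + a2)               # direction angle of the bisector L3
    k3 = math.tan(phi) if abs(phi - math.pi / 2) > 1e-9 else math.inf
    theta = math.degrees(phi - math.pi / 2)  # 0 when bisector aligns with the y axis
    return k3, theta

def correction(theta, theta1=-3.0, theta2=3.0):
    """Hypothetical thresholds theta1/theta2 (deg); the patent leaves them open."""
    if theta < theta1:
        return "steer right"   # vehicle drifting left
    if theta > theta2:
        return "steer left"    # vehicle drifting right
    return "hold course"
```

For symmetric lanes (e.g. k1 = 2, k2 = −2) the bisector is vertical, so θ = 0 and no correction is issued; steps 4.1 and 4.2 would be re-run each frame until the destination is reached, as step 4.3 describes.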
Step 5, match the vehicle speed and the safe distance: the safe distance of the vehicle is matched according to the vehicle speed monitored in real time and the detection results of YOLOv3;
the process of matching the safe distance of the vehicle in step 5 may be expressed as:
While the vehicle is running, the object that the vehicle would collide with at its current speed is taken as the safe-distance reference. When the distance to the reference object exceeds 1,000 metres, the speed upper limit is set to 100 km/h; when it exceeds 600 metres, the upper limit is set to 60 km/h; the general formula is:
v≤d/10 (8)
where v is the maximum speed of the vehicle and d is the distance between the vehicle and the object.
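Equation (8) maps the measured distance to a speed ceiling. A minimal sketch, with a 100 km/h absolute cap added as an assumption consistent with the worked examples:

```python
def speed_limit_kmh(distance_m, cap_kmh=100.0):
    """Maximum speed (km/h) matched to the distance d (metres) to the nearest
    object on a collision course, per v <= d / 10  (Eq. 8).
    The 100 km/h absolute cap is an assumption, not stated as such in the text."""
    return min(distance_m / 10.0, cap_kmh)
```

With this rule, 1,000 m of clearance allows 100 km/h and 600 m allows 60 km/h, matching the distance/speed pairs given above.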
Step 6, start the automatic-driving interruption: when an abnormality occurs within the safe distance, the system starts an interruption-handling mechanism, broadcasts early-warning information to the driver, and the log module records the abnormal condition of the vehicle-mounted terminal.
The above description is only a preferred embodiment of the invention and does not limit it in any way; any modification or equivalent variation made according to the technical spirit of the invention falls within the scope of the invention as claimed.

Claims (5)

1. An automatic driving deviation processing method based on artificial intelligence comprises the following specific steps:
step 1, obtain vehicle driving data, including driving images, altitude, vehicle speed, longitude, latitude, heading and similar data;
step 2, simulating a severe driving environment, superposing white Gaussian noise on the acquired data to increase the stability and robustness of an automatic driving model, and controlling the range of a signal-to-noise ratio to be 30-40 dB;
step 3, identify lane lines and other non-road objects: the recognition targets in the driving image are divided into lane lines and other non-road objects, which are detected by the improved YOLOv3 models;
step 4, calibrating vehicle yaw in real time, detecting and tracking the lane line through the steps 1 to 3, recording the center of the image of the lane line as the origin of coordinates, establishing a driving yaw angle model, and calibrating a driving route;
step 5, match the vehicle speed and the safe distance: the safe distance of the vehicle is matched according to the vehicle speed monitored in real time and the detection results of YOLOv3;
step 6, start the automatic-driving interruption: when an abnormality occurs within the safe distance, the system starts an interruption-handling mechanism, broadcasts early-warning information to the driver, and the log module records the abnormal condition of the vehicle-mounted terminal.
2. The automated driving deviation processing method based on artificial intelligence of claim 1, wherein: the process of simulating the harsh environment of a vehicle in step 2 can be expressed as:
x_s = x_n + x (1)
where x is the original vehicle driving data, x_n is the Gaussian white noise data, and x_s represents the acquired data in the simulated noise environment;
the signal-to-noise ratio is defined as:
SNR = 10·lg(P_s / P_n) (2)
where P_s represents the power of the signal and P_n the power of the noise.
3. The automated driving deviation processing method based on artificial intelligence of claim 1, wherein: the process of identifying lane lines and other non-road objects in step 3 may be represented as:
two YOLOv3 models are established to detect lane lines and other non-road objects respectively; each model comprises three parts: a trunk module, a feature-fusion module and a prediction module; to reduce the regression difficulty of the detection boxes, the cluster centres on the training set are obtained by K-means, the anchor boxes of the original YOLOv3 network are rearranged, and the following distance measure is used:
d=1-IOU(b,a) (3)
where b and a denote a label box and a cluster-centre box respectively, and d measures their overlap: the smaller d is, the higher the overlap between the label box and the cluster-centre box; the cluster centres are used to set all the YOLO anchor boxes: the YOLO layer corresponding to the high-resolution feature map uses the 2 smaller anchor boxes, and the YOLO layer corresponding to the low-resolution feature map uses the 3 larger anchor boxes;
the method comprises the following steps of adding a multi-receptive field mechanism into a trunk module to learn rich features, improving the learning capability of the trunk module, respectively using CBLP modules with convolution kernels of 1 × 1 and 5 × 5 to be connected with an original CBLP module in parallel to obtain different receptive field features, and expressing the mapping relation between the CBLP module and the multi-receptive field module as follows:
x_CBLP = H(x_i) (4)
x_multi = H_1(x_i) + H_3(x_i) + H_5(x_i) (5)
where x_CBLP and x_multi denote the outputs of the CBLP module and of the multi-receptive-field module respectively; H_1(·), H_3(·) and H_5(·) denote the mappings with convolution kernel sizes 1×1, 3×3 and 5×5; and x_i denotes the input feature map; the YOLOv3 networks are trained on the training sample set, yielding two improved YOLOv3 networks that can detect lane lines and other non-road objects.
4. The automated driving deviation processing method based on artificial intelligence of claim 1, wherein: the process of calibrating the driving route of the vehicle in step 4 can be expressed as:
step 4.1, establishing a lane line model, recording the center of the lane line image detected in the step 3 as a coordinate origin, taking a horizontal axis as an x axis and taking a vertical axis as a y axis, establishing the lane line model, and simultaneously expressing a left lane line and a right lane line in the driving image in a coordinate axis by using two straight line functions:
y_1 = k_1·x_1 + b_1,  y_2 = k_2·x_2 + b_2 (6)
where k_1 and k_2 are the slopes of the left and right lane-line models, and b_1 and b_2 are the intercepts of the left and right lane-line models, respectively;
step 4.2, calculate the driving deviation angle: taking the y axis of the coordinate system of step 4.1 as the vehicle's driving direction, compute the angle between the driving direction and the bisector of the angle formed by the left and right lane lines; the slope k_3 of this bisector is:
k_3 = tan((arctan k_1 + arctan k_2) / 2) (7)
applying the inverse trigonometric function to the bisector slope gives the deviation angle θ between the bisector and the vehicle's driving direction; two thresholds θ_1 and θ_2 are set: when the deviation angle is smaller than θ_1, the vehicle is drifting left and must be corrected to the right; when the deviation angle is larger than θ_2, the vehicle is drifting right and must be corrected to the left;
and 4.3, after the yaw angle is corrected, continuing to execute the steps 4.1 and 4.2 until the vehicle reaches the destination or an abnormal condition occurs.
5. The automated driving deviation processing method based on artificial intelligence of claim 1, wherein: the process of matching the safe distance of the vehicle in step 5 can be expressed as:
while the vehicle is running, the object that the vehicle would collide with at its current speed is taken as the safe-distance reference; when the distance to the reference object exceeds 1,000 metres, the speed upper limit is set to 100 km/h; when it exceeds 600 metres, the upper limit is set to 60 km/h; the general formula is:
v≤d/10 (8)
where v is the maximum speed of the vehicle and d is the distance between the vehicle and the object.
CN202011298591.7A 2020-11-19 2020-11-19 Automatic driving deviation processing method based on artificial intelligence Active CN112364800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011298591.7A CN112364800B (en) 2020-11-19 2020-11-19 Automatic driving deviation processing method based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN112364800A (en) 2021-02-12
CN112364800B (en) 2023-07-14

Family

ID=74532976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011298591.7A Active CN112364800B (en) 2020-11-19 2020-11-19 Automatic driving deviation processing method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN112364800B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112118A (en) * 2014-06-26 2014-10-22 大连民族学院 Lane departure early-warning system-based lane line detection method
US20180089515A1 (en) * 2016-09-12 2018-03-29 Kennesaw State University Research And Service Foundation, Inc. Identification and classification of traffic conflicts using live video images
US20180261095A1 (en) * 2017-03-08 2018-09-13 GM Global Technology Operations LLC Method and apparatus of networked scene rendering and augmentation in vehicular environments in autonomous driving systems
CN108537197A (en) * 2018-04-18 2018-09-14 吉林大学 A kind of lane detection prior-warning device and method for early warning based on deep learning
CN110443208A (en) * 2019-08-08 2019-11-12 南京工业大学 A kind of vehicle target detection method, system and equipment based on YOLOv2
CN110796168A (en) * 2019-09-26 2020-02-14 江苏大学 Improved YOLOv 3-based vehicle detection method
CN111259706A (en) * 2018-12-03 2020-06-09 初速度(苏州)科技有限公司 Lane line pressing judgment method and system for vehicle
CN111582083A (en) * 2020-04-25 2020-08-25 华南理工大学 Lane line detection method based on vanishing point estimation and semantic segmentation
CN111898491A (en) * 2020-07-15 2020-11-06 上海高德威智能交通系统有限公司 Method and device for identifying reverse driving of vehicle and electronic equipment


Non-Patent Citations (4)

Title
Deepika Sirohi et al.: "Convolutional neural networks for 5G-enabled Intelligent Transportation System: A systematic review", Computer Communications, pages 459-498 *
Zhishuai Zhang: "Robust deep learning frameworks for recognizing and localizing objects accurately and reliably", Johns Hopkins Sheridan Libraries, pages 1-174 *
张剑锋: "Research on lane line detection and lane departure warning system based on deep learning", China Master's Theses Full-text Database, Engineering Science & Technology II, no. 2020, pages 035-383 *
李文琳: "Research on an intelligent parking navigation system based on vehicular ad-hoc networks", China Master's Theses Full-text Database, Engineering Science & Technology II, no. 2016, pages 034-645 *

Also Published As

Publication number Publication date
CN112364800B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN108537197B (en) Lane line detection early warning device and method based on deep learning
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
Li et al. Springrobot: A prototype autonomous vehicle and its algorithms for lane detection
CN106935035B (en) Parking offense vehicle real-time detection method based on SSD neural network
CN111310583B Vehicle abnormal behavior identification method based on improved long short-term memory (LSTM) network
WO2022141910A1 (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN106845547A Camera-based intelligent automobile positioning and road marking recognition system and method
CN105512623A (en) Foggy-day driving visual enhancement and visibility early warning system and method based on multiple sensors
WO2020253010A1 (en) Method and apparatus for positioning parking entrance in parking positioning, and vehicle-mounted terminal
CN113885062A (en) Data acquisition and fusion equipment, method and system based on V2X
CN112419773A (en) Vehicle-road cooperative unmanned control system based on cloud control platform
CN111524365A (en) Method for classifying vehicle types by using multiple geomagnetic sensors
CN111649740A (en) Method and system for high-precision positioning of vehicle based on IMU
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
Kamble et al. Lane departure warning system for advanced drivers assistance
Chen et al. Vehicle detection based on multifeature extraction and recognition adopting RBF neural network on ADAS system
CN112009491B (en) Deep learning automatic driving method and system based on traffic element visual enhancement
Ijaz et al. Automatic steering angle and direction prediction for autonomous driving using deep learning
CN113532499B (en) Sensor security detection method and device for unmanned system and storage medium
CN112419345A (en) Patrol car high-precision tracking method based on echo state network
CN112654998A (en) Lane line detection method and device
CN112364800B (en) Automatic driving deviation processing method based on artificial intelligence
CN114842660B (en) Unmanned lane track prediction method and device and electronic equipment
CN116486359A (en) All-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant