CN107972662A - Vehicle forward collision early warning method based on deep learning - Google Patents
- Publication number: CN107972662A (application CN201710975371.5A)
- Authority: CN (China)
- Prior art keywords: vehicle, distance, camera, image, deep learning
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B60W30/08 — Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/16 — Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
- G06F18/24 — Classification techniques
- G06N3/045 — Combinations of networks
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V20/584 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; of vehicle lights or traffic lights
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30252 — Vehicle exterior; Vicinity of vehicle
Abstract
The invention discloses a vehicle forward collision early warning method based on deep learning, comprising: acquiring image information in front of the vehicle in real time through a vehicle-mounted camera and pre-processing the images; extracting vehicle features in the image with a multi-scale deep convolutional neural network to identify and locate vehicle targets; calculating the distance between the current vehicle and the vehicle ahead from the vehicle position in the image, based on geometric projection and camera parameters; calculating the relative speed from the change of the real-time distance between the host vehicle and the vehicle ahead; and calculating the time to collision from the relative speed and the inter-vehicle distance, judging the danger level of the vehicle according to the result, and invoking the corresponding collision warning strategy. The method obtains information about the vehicle ahead in real time, accurately estimates the distance and time to collision with the vehicle ahead, issues appropriate warnings, ensures safe driving, and reduces traffic accidents.
Description
Technical Field
The invention relates to the technical field of pattern recognition and the technical field of automobile active safety, in particular to a vehicle forward collision early warning method based on deep learning.
Background
With the gradual improvement of people's living standard, more and more people buy cars as a means of transport, and the number of cars increases day by day. Traffic safety has become an increasingly serious problem and is increasingly valued. Frequent traffic accidents are caused by driver carelessness and by insufficient safety performance of automobiles.
The driving assistance system is an active safety system that monitors road information and the driver's state in real time, judges whether the vehicle may collide with a vehicle or pedestrian ahead, and in an emergency can even take over the driving of the vehicle and brake actively, thus avoiding accidents. What is needed now is a system that prevents accidents, not merely one that reduces injuries after an accident has happened; a vehicle forward collision early warning system therefore has very important research significance and development prospects.
At present, technologies such as radar, laser, ultrasonic and the like are commonly used in an active safety system, but the equipment required by the technologies is expensive, so that the technologies are not beneficial to popularization and use on all vehicles. More than 80% of environmental information required by driving the automobile is obtained through vision, and compared with other sensors, the vision sensor can better simulate the behavior of a driver and has incomparable advantages.
The existing vision-based vehicle forward early warning technologies include license-plate ranging, vehicle-bottom-shadow ranging and the like, which identify vehicles mainly by manually extracting vehicle features such as color, texture, contour and geometric features, and then measure distance. Manually extracting vehicle features is time-consuming and labor-intensive, cannot fully utilize vehicle information, cannot achieve high detection accuracy when the vehicle is occluded or the image is blurred by bad weather, and is poor in robustness.
Deep learning realizes complex function approximation and input data representation by learning a deep nonlinear network structure, and shows the learning capability of the essential characteristics of a powerful data set. The convolutional neural network, a typical deep learning method, is a multi-layer perceptron specifically designed for two-dimensional image processing. The convolutional neural network does not need to artificially participate in the selection process of the features, and can automatically learn the target features in a large number of data sets. The weight sharing and local connection mechanism of the method ensures that the method has the advantages over the traditional technology: the method has certain invariance to geometric transformation, deformation and illumination, and has good fault-tolerant capability, parallel processing capability and self-learning capability. The advantages enable the convolutional neural network to have great advantages when the problems of complex environmental information and uncertain inference rules are processed, and the problems of scale change, rotation deformation and the like of the vehicle can be tolerated.
R-CNN (Regions with Convolutional Neural Network Features) obtains proposal regions of different sizes through a region-of-interest extraction step, and then scales them to a fixed size before feeding them into a convolutional neural network; however, the method is computationally heavy and cannot meet real-time requirements. Faster R-CNN realizes end-to-end detection with a region proposal network; however, it generates region proposals for multi-scale targets by sliding a set of convolution kernels over a single fixed feature map, which creates a contradiction between the variable target size and the fixed receptive field of that feature map, so it cannot adapt to the diversity of target sizes in a real environment. Therefore, a convolutional neural network adapted to multi-scale targets is needed to solve the end-to-end automatic vehicle detection problem in a vehicle forward collision early warning system, to ensure that the system obtains vehicle information accurately and stably, and to realize real-time, accurate collision warning.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a vehicle forward collision early warning method based on deep learning, which improves the accuracy of vehicle detection, ensures the reliability of actual distance measurement, effectively realizes real-time collision early warning and reduces the occurrence of traffic accidents.
The purpose of the invention is realized by the following technical scheme: a vehicle forward collision early warning method based on deep learning comprises the following steps:
s1, acquiring image information in front of the vehicle in real time through a vehicle-mounted camera; in the image acquisition process, a CCD camera is mounted at the front of the vehicle interior, and the road image in front of the vehicle is acquired at a fixed frequency f;
s2, extracting vehicle features in the image by using a multi-scale deep convolutional neural network, realizing the identification and positioning of a vehicle target, and marking the vehicle ahead with a rectangular frame;
s3, calculating the distance between the current vehicle and the front vehicle based on the geometric relation projection and the camera parameters according to the vehicle position in the image;
s4, calculating the relative speed according to the change of the real-time distance between the main vehicle and the front vehicle;
and S5, calculating the collision distance according to the relative speed and the distance between the two vehicles, and judging the danger level of the current vehicle state according to the calculation result.
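The five steps above can be sketched as a per-frame processing loop. The following is a minimal illustration only, not the patent's implementation: the detector is stubbed out, and the camera constants (focal length F, mounting height H, optical-axis row Y0), box format, and thresholds are hypothetical values for demonstration.

```python
from collections import deque

# Hypothetical camera parameters (not from the patent): effective focal
# length in pixels, mounting height in metres, image row of the optical axis.
F, H, Y0 = 700.0, 1.3, 240.0

def detect_lead_vehicle(frame):
    """S2 stub: a real system would run the multi-scale CNN here and
    return the rectangular box (x, y, w, h) of the vehicle ahead."""
    return frame["box"]

def distance_from_box(box):
    """S3: ground-plane projection; the box's bottom edge is the road-contact row."""
    x, y, w, h = box
    return F * H / (y + h - Y0)

def process_frame(frame, history, fps=30):
    """One iteration of the S1-S5 loop; `history` holds up to 1 s of distances."""
    d = distance_from_box(detect_lead_vehicle(frame))                 # S1-S3
    v = (history[0] - d) / (len(history) / fps) if history else 0.0   # S4
    history.append(d)
    ttc = d / v if v > 0 else float("inf")                            # S5
    level = "safe" if ttc > 3 else ("caution" if ttc >= 1.5 else "emergency")
    return d, v, ttc, level
```

In use, `history` would be a `deque(maxlen=fps)` so the speed estimate never spans more than one second of frames.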
Preferably, in step S2, the overall framework of the multi-scale deep convolutional neural network is: a VGG model serves as the main line, to which several region proposal sub-networks for extracting regions of interest and a detection sub-network for classification and position refinement are added.
Furthermore, the VGG model comprises 13 convolutional layers and 3 fully connected layers, 16 layers in total; the region proposal sub-networks branch off from convolutional layer 4-3, convolutional layer 5-3, convolutional layer 6 and max-pooling layer 6 of the VGG network respectively; each region proposal sub-network simultaneously predicts whether a candidate is an object and regresses the target boundary, outputting n proposal boxes of interest in total; the detection sub-network first reduces the redundancy of the n extracted proposal boxes with non-maximum suppression at a threshold of 0.7, leaving about 2000 proposal regions per image; each proposal region is downsampled to a uniform size by an ROI pooling layer, and finally, after a fully connected layer, softmax classification and position refinement are performed respectively.
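The non-maximum suppression step described above (IoU threshold 0.7) can be illustrated with a plain-Python sketch; the corner-based box format and the scores here are illustrative, not the patent's data layout.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.7):
    """Greedy NMS: keep the highest-scoring box, drop any later box whose
    overlap with an already-kept box exceeds thresh."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```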
Preferably, in step S2, the off-line training process is as follows:
1. model weights are initialized by a VGG network pre-trained on ImageNet; training only the region suggestion subnetwork, and iterating 10000 times at the learning rate of 0.00005 to generate a model;
2. the generated model is used to initialize the second stage, which iterates at an initial learning rate of 0.00005, dividing the learning rate by 10 every 10000 iterations, for 25000 iterations in total;
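The second-stage schedule above (initial rate 0.00005, divided by 10 every 10000 iterations, 25000 iterations in total) is a plain step decay; a one-line sketch with the constants taken from the text:

```python
def step_decay_lr(iteration, base_lr=5e-5, step=10000, gamma=0.1):
    """Learning rate at a given iteration: multiplied by gamma (here 0.1,
    i.e. divided by 10) once every `step` iterations."""
    return base_lr * gamma ** (iteration // step)
```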
the learning process of the two stages realizes stable multi-task training; a multi-task loss is employed, where the loss function is:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)
where i is the index of a proposed region within a mini-batch, N_cls is the normalization coefficient of the classification layer, N_reg is the normalization coefficient of the regression layer, λ is the balancing weight, p_i is the predicted probability of the vehicle target, and p_i* is the ground-truth label, i.e. p_i* = 1 if the candidate region is positive and p_i* = 0 if the candidate region is negative; t_i is a vector representing the 4 parameterized coordinates of the predicted bounding box, and t_i* is the coordinate vector of the ground-truth bounding box corresponding to a positive candidate region;
L_cls is the log loss of classification: L_cls(p_i, p_i*) = −log[p_i* p_i + (1 − p_i*)(1 − p_i)];
L_reg is the regression loss: L_reg(t_i, t_i*) = R(t_i − t_i*), where R is the smooth L1 error: R(x) = 0.5x² if |x| < 1, and |x| − 0.5 otherwise;
t_i and t_i* are calculated as follows:
t_x = (x − x_a)/w_a, t_y = (y − y_a)/h_a, t_w = log(w/w_a), t_h = log(h/h_a),
t_x* = (x* − x_a)/w_a, t_y* = (y* − y_a)/h_a, t_w* = log(w*/w_a), t_h* = log(h*/h_a),
where x, y, w, h denote the center coordinates, width and height of the predicted bounding box; x_a, y_a, w_a, h_a those of the candidate region; and x*, y*, w*, h* those of the ground-truth bounding box.
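The box parameterization and smooth L1 error above translate directly to code. This sketch follows the formulas in the text (box = predicted box, anchor = candidate region), with boxes given as (center x, center y, width, height):

```python
import math

def encode(box, anchor):
    """Parameterize a box (cx, cy, w, h) relative to an anchor: the t vector
    (t_x, t_y, t_w, t_h) from the text."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))

def smooth_l1(x):
    """R(x): 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def l_reg(t, t_star):
    """Regression loss: smooth L1 summed over the 4 parameterized coordinates."""
    return sum(smooth_l1(a - b) for a, b in zip(t, t_star))
```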
Preferably, in step S3, a geometric ranging method based on fixed camera parameters is adopted; the specific steps include:
S31: fixing a monocular camera with effective focal length f at the front of the vehicle, and measuring its height h above the ground;
S32: a geometric ranging algorithm based on camera projection and preset parameters calculates the inter-vehicle distance from the geometry formed by the road surface on which the vehicle travels and the vehicle body, combined with the preset CCD camera parameters; when the car travels on a horizontal road surface, the projection model is an ideal geometric model; assuming the camera is parallel to the ground, the distance between the current vehicle and the vehicle ahead is obtained by geometric relationship and proportional derivation as:
d = f·h / (y − y_0)
where f denotes the effective focal length of the camera, h the mounting height of the camera, (x_0, y_0) the intersection of the optical axis with the image plane, and (x, y) the image-plane coordinates of a point p on the road surface.
Preferably, in step S4, the speed at the i-th frame is estimated as the average speed over the 1 second preceding that frame, with the average computed by the successive-difference method:
v_i = (2/n) Σ_{k=1}^{n/2} (d_{i−n+k} − d_{i−n/2+k}) / t
where n is the video frame rate, d_j is the distance between the detected vehicle ahead and the host vehicle at frame j, and t is the time interval between the (i − n/2 + k)-th frame and the (i − n + k)-th frame, which is obviously t = 0.5 s. This method avoids the influence of a single frame's ranging error on the TTC, keeps the TTC value from fluctuating violently, and increases the accuracy of the early warning system.
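The successive-difference averaging can be sketched as follows. It assumes one distance sample per frame over the past second; each sample in the first half of the window is paired with the sample half a window later, so every pair spans t = 0.5 s (the frame rate of 6 used in the example is hypothetical, chosen only for brevity):

```python
def relative_speed(distances, fps):
    """Average closing speed (m/s) from the last `fps` distance samples (1 s
    of video) using the successive-difference method: pair sample k with
    sample k + fps//2, each pair spanning t = 0.5 s, then average the pairs."""
    n = fps
    if len(distances) < n or n % 2 != 0:
        raise ValueError("need a full 1 s window and an even frame rate")
    window = distances[-n:]
    half, t = n // 2, 0.5
    diffs = [window[k] - window[k + half] for k in range(half)]  # closing > 0
    return sum(diffs) / half / t
```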
Preferably, in step S5, the time to collision TTC is calculated as:
TTC = d / v
where d denotes the distance between the two vehicles, obtained in step S3, and v denotes the relative speed, obtained in step S4.
Preferably, in step S6, the current relative driving state of the vehicle is determined from the time to collision TTC and the corresponding warning is given; the specific warning scheme is: TTC > 3 s is the safe state; when 1.5 s ≤ TTC ≤ 3 s, a cautionary warning is issued; when TTC < 1.5 s, an emergency warning is issued and an active braking measure is taken at the same time.
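The TTC computation and the three-level warning policy above combine into a small classifier; the threshold values are taken from the text, while the function name and the non-closing guard are illustrative additions:

```python
def warning_level(distance, rel_speed):
    """Map inter-vehicle distance (m) and closing speed (m/s) to a warning
    level: safe (TTC > 3 s), caution (1.5 s <= TTC <= 3 s), or emergency
    (TTC < 1.5 s, where active braking would also be triggered)."""
    if rel_speed <= 0:           # not closing: no collision is predicted
        return "safe"
    ttc = distance / rel_speed   # time to collision in seconds
    if ttc > 3.0:
        return "safe"
    if ttc >= 1.5:
        return "caution"
    return "emergency"
```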
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. in the invention, vehicle collision early warning can be performed with only a camera (such as a vehicle-mounted CCD camera); compared with radar, laser, ultrasonic and similar technologies, this saves hardware cost.
2. The invention adopts a multi-scale deep convolutional neural network and realizes a vehicle forward collision early warning framework built on it, which remains accurate and robust under diversity in vehicle target size, form, illumination change and background; compared with traditional methods, it improves the stability of the algorithm and the user experience.
Drawings
Fig. 1 is a schematic flow chart of a vehicle forward collision warning method based on deep learning according to an embodiment.
FIG. 2 is a block diagram of an embodiment of a multi-scale deep convolutional neural network.
FIG. 3 is a schematic diagram of a region suggestion subnetwork of the multi-scale deep convolutional neural network of the embodiment.
FIG. 4 is a diagram of a ranging geometry model according to an embodiment.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example 1
Fig. 1 is a schematic flow chart of a vehicle forward collision warning method based on deep learning. In this embodiment, the vehicle forward collision early warning method based on deep learning includes:
s1, acquiring image information in front of the vehicle in real time through a vehicle-mounted camera; in the image acquisition process, a CCD camera mounted at the front of the vehicle interior is adopted, and the road image in front of the vehicle is acquired at a fixed frequency f.
S2, extracting the vehicle features in the image by using a multi-scale deep convolutional neural network, realizing the identification and positioning of a vehicle target, and marking the vehicle ahead with a rectangular frame.
And S3, calculating the distance between the current vehicle and the front vehicle based on the geometrical relation projection and the camera parameters according to the vehicle position in the image.
And S4, calculating the relative speed according to the change of the real-time distance between the main vehicle and the front vehicle.
And S5, calculating the time to collision (TTC) according to the relative speed and the distance between the two vehicles, and judging the danger level of the current vehicle state according to the calculation result.
In step S2, the overall framework of the multi-scale deep convolutional neural network is shown in fig. 2: a VGG model serves as the main line, to which several region proposal sub-networks for extracting regions of interest and a detection sub-network for classification and position refinement are added.
The VGG model comprises 13 convolutional layers and 3 fully connected layers, 16 layers in total;
the region proposal sub-networks are shown in FIG. 3 and branch off from convolutional layer 4-3, convolutional layer 5-3, convolutional layer 6 and max-pooling layer 6 of the VGG network respectively. Each region proposal sub-network simultaneously predicts whether a candidate is an object and regresses the target boundary, outputting n proposal boxes of interest in total.
The detection sub-network first reduces the redundancy of the n extracted proposal boxes with non-maximum suppression at a threshold of 0.7, leaving about 2000 proposal regions per image. Each proposal region is downsampled to a uniform size by an ROI pooling layer, and finally, after a fully connected layer, softmax classification and position refinement are performed respectively.
In step S2, the off-line training process is as follows:
1. model weights are initialized by the VGG network pre-trained on ImageNet. Only the region suggestion subnetwork was trained, and the model was generated with a learning rate of 0.00005 for 10000 iterations.
2. The generated model is used to initialize the second stage, which iterates at an initial learning rate of 0.00005, dividing the learning rate by 10 every 10000 iterations, for a total of 25000 iterations.
The two-stage learning process achieves stable multi-task training. A multi-task loss is employed, where the loss function is:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)
where i is the index of a proposed region within a mini-batch, N_cls is the normalization coefficient of the classification layer, N_reg is the normalization coefficient of the regression layer, λ is the balancing weight, p_i is the predicted probability of the vehicle target, and p_i* is the ground-truth label, i.e. p_i* = 1 if the candidate region is positive and p_i* = 0 if the candidate region is negative; t_i is a vector representing the 4 parameterized coordinates of the predicted bounding box, and t_i* is the coordinate vector of the ground-truth bounding box corresponding to a positive candidate region.
L_cls is the log loss of classification: L_cls(p_i, p_i*) = −log[p_i* p_i + (1 − p_i*)(1 − p_i)].
L_reg is the regression loss: L_reg(t_i, t_i*) = R(t_i − t_i*), where R is the smooth L1 error: R(x) = 0.5x² if |x| < 1, and |x| − 0.5 otherwise.
t_i and t_i* are calculated as follows:
t_x = (x − x_a)/w_a, t_y = (y − y_a)/h_a, t_w = log(w/w_a), t_h = log(h/h_a),
t_x* = (x* − x_a)/w_a, t_y* = (y* − y_a)/h_a, t_w* = log(w*/w_a), t_h* = log(h*/h_a),
where x, y, w, h denote the center coordinates, width and height of the predicted bounding box; x_a, y_a, w_a, h_a those of the candidate region; and x*, y*, w*, h* those of the ground-truth bounding box.
In the step S3, a geometric distance measurement method based on the fixed parameters of the camera is adopted. The method comprises the following specific steps:
s31: and fixing the monocular camera with the effective focal length f in front of the vehicle, and measuring the height h of the monocular camera from the ground.
S32: the geometric distance measurement algorithm based on camera projection and parameter setting calculates the distance between vehicles by combining the geometric coordinates formed by the running road surface of the automobile and the automobile body with the parameters set in advance by the CCD camera. When the automobile runs on a horizontal road surface, the projection model is an ideal geometric model, and the camera is assumed to be parallel to the ground, and a schematic diagram of the vehicle distance estimation under the condition is shown in fig. 4. The distance between the current vehicle and the front vehicle can be obtained through geometrical relation and proportion derivation:where f denotes the effective focal length of the camera and h denotes the mounting height of the cameraDegree (x) 0 ,y 0 ) The intersection point of the optical axis and the image plane is shown, and (x, y) the coordinate of a point p on the road surface on the image plane is shown.
In step S4, the speed at the i-th frame is estimated as the average speed over the 1 second preceding that frame, with the average computed by the successive-difference method:
v_i = (2/n) Σ_{k=1}^{n/2} (d_{i−n+k} − d_{i−n/2+k}) / t
where n is the video frame rate, d_j is the distance between the detected vehicle ahead and the host vehicle at frame j, and t is the time interval between the (i − n/2 + k)-th frame and the (i − n + k)-th frame, which is obviously t = 0.5 s. This method avoids the influence of a single frame's ranging error on the TTC, keeps the TTC value from fluctuating violently, and increases the accuracy of the early warning system.
In step S5, the time to collision TTC is calculated as:
TTC = d / v
where d denotes the distance between the two vehicles, obtained in step S3, and v denotes the relative speed, obtained in step S4.
In step S6, the current relative driving state of the vehicle is determined from the time to collision TTC and the corresponding warning is given. Relevant research shows that when a warning of collision with the vehicle ahead is given 2.5 seconds in advance, the vehicle can basically brake to a stop, taking human reaction time and braking distance into account, so that an accident is prevented. In view of this, the warning time is further relaxed and set at 3 seconds. The specific warning scheme is: TTC > 3 s is the safe state; when 1.5 s ≤ TTC ≤ 3 s, a cautionary warning is issued; when TTC < 1.5 s, an emergency warning is issued and an active braking measure is taken at the same time.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
Claims (8)
1. A vehicle forward collision early warning method based on deep learning is characterized by comprising the following steps:
s1, acquiring image information in front of the vehicle in real time through a vehicle-mounted camera; in the image acquisition process, a CCD camera is mounted at the front of the vehicle interior, and the road image in front of the vehicle is acquired at a fixed frequency f;
s2, extracting vehicle features in the image by using a multi-scale deep convolutional neural network, realizing the identification and positioning of a vehicle target, and marking the vehicle ahead with a rectangular frame;
s3, calculating the distance between the current vehicle and the front vehicle based on the geometrical relation projection and the camera parameters according to the vehicle position in the image;
s4, calculating the relative speed according to the change of the real-time distance between the main vehicle and the front vehicle;
and S5, calculating the time to collision TTC according to the relative speed and the distance between the two vehicles, and judging the danger level of the current vehicle state according to the calculation result.
2. The deep learning-based vehicle forward collision warning method according to claim 1, wherein in step S2, the overall framework of the multi-scale deep convolutional neural network is: a VGG model serves as the main line, to which several region proposal sub-networks for extracting regions of interest and a detection sub-network for classification and position refinement are added.
3. The deep learning-based vehicle forward collision warning method as claimed in claim 2, wherein the VGG model comprises 13 convolutional layers and 3 fully connected layers, 16 layers in total; the region proposal sub-networks branch off from convolutional layer 4-3, convolutional layer 5-3, convolutional layer 6 and max-pooling layer 6 of the VGG network respectively; each region proposal sub-network simultaneously predicts whether a candidate is an object and regresses the target boundary, outputting n proposal boxes of interest in total; the detection sub-network first reduces the redundancy of the n extracted proposal boxes with non-maximum suppression at a threshold of 0.7, leaving about 2000 proposal regions per image; each proposal region is downsampled to a uniform size by an ROI pooling layer, and finally, after a fully connected layer, softmax classification and position refinement are performed respectively.
4. The deep learning-based vehicle forward collision warning method according to claim 2, wherein in step S2, the off-line training process is as follows:
(1) The model weights are initialized from a VGG network pre-trained on ImageNet; only the region proposal sub-networks are trained, iterating 10000 times at a learning rate of 0.00005 to generate a model;
(2) The generated model initializes the second stage, which iterates from an initial learning rate of 0.00005, reduces the learning rate by a factor of 10 after 10000 iterations, and runs 25000 iterations in total;
the two-stage learning process realizes stable multi-task training; a multi-task loss is employed, where the loss function is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\frac{1}{N_{reg}}\sum_i p_i^*\, L_{reg}(t_i, t_i^*)$$

where $i$ is the index of a proposal region in a mini-batch; $N_{cls}$ is the normalization coefficient of the classification layer and $N_{reg}$ that of the regression layer; $\lambda$ is the balance weight; $p_i$ is the predicted probability that the region is a vehicle target; $p_i^*$ is the ground-truth label, i.e. $p_i^* = 1$ if the candidate region is positive and $p_i^* = 0$ if it is negative; $t_i$ is a vector of the 4 parameterized coordinates of the predicted bounding box, and $t_i^*$ is the coordinate vector of the real bounding box corresponding to a positive candidate region;

$L_{cls}$ is the log loss for classification: $L_{cls}(p_i, p_i^*) = -\log\left[p_i^* p_i + (1 - p_i^*)(1 - p_i)\right]$;

$L_{reg}$ is the regression loss: $L_{reg}(t_i, t_i^*) = R(t_i - t_i^*)$, where $R$ is the smooth $L_1$ error:

$$R(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$$

$t_i$ and $t_i^*$ are computed as follows:

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a),$$
$$t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a),$$

where $x, y, w, h$ denote the center coordinates, width, and height of the predicted bounding box; $x_a, y_a, w_a, h_a$ those of the candidate region; and $x^*, y^*, w^*, h^*$ those of the real bounding box.
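The bounding-box parameterization and the smooth $L_1$ error described in claim 4 can be sketched as follows (function names are illustrative, not from the patent):

```python
import math

def encode(box, anchor):
    """Parameterize a box (cx, cy, w, h) relative to a candidate/anchor box,
    yielding (t_x, t_y, t_w, t_h)."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))

def decode(t, anchor):
    """Invert encode(): recover the box from its parameterized coordinates."""
    tx, ty, tw, th = t
    xa, ya, wa, ha = anchor
    return (tx * wa + xa, ty * ha + ya, math.exp(tw) * wa, math.exp(th) * ha)

def smooth_l1(x):
    """Smooth L1 error R: quadratic near zero, linear for |x| >= 1."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5
```

Encoding relative to the candidate region makes the regression targets scale-invariant; decoding the network's predicted offsets recovers absolute box coordinates.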
5. The deep learning-based vehicle forward collision early warning method according to claim 1, wherein in step S3 a geometric ranging method based on fixed camera parameters is adopted, the specific steps comprising:
S31: fixing a monocular camera with effective focal length $f$ at the front of the vehicle and measuring its height $h$ above the ground;
S32: computing the inter-vehicle distance with a geometric ranging algorithm based on camera projection and parameter calibration, combining the geometry formed by the road surface and the vehicle body with the preset parameters of the CCD camera; when the vehicle travels on a horizontal road surface the projection model is an ideal geometric model, and assuming the camera's optical axis is parallel to the ground, the distance between the current vehicle and the vehicle ahead follows from geometric relations and proportions:

$$d = \frac{f\,h}{y - y_0}$$

where $f$ denotes the effective focal length of the camera, $h$ the mounting height of the camera, $(x_0, y_0)$ the intersection of the optical axis with the image plane, and $(x, y)$ the image-plane coordinates of a point $p$ on the road surface.
6. The deep learning-based vehicle forward collision warning method as claimed in claim 1, wherein in step S4 the speed at the $i$th frame is estimated as the average speed over the preceding second, calculated by a frame-difference method:

$$v_i = \frac{d_{i-n/2} - d_i}{\Delta t}$$

where $n$ is the video frame rate, $d_i$ is the distance between the detected vehicle ahead and the host vehicle in the $i$th frame, and $\Delta t$ is the time interval between the $(i - n/2)$th frame and the $i$th frame; obviously $\Delta t = 0.5$ s. The method keeps a ranging error in any single frame from affecting the TTC, ensures that the TTC value does not fluctuate violently, and increases the accuracy of the early warning system.
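The half-second differencing of claim 6 can be sketched as follows, assuming an even frame rate of n frames per second so that n/2 frames span exactly 0.5 s (names are illustrative):

```python
def relative_speed(distances, frame_rate, i):
    """Closing speed at frame i from the distance change over the last
    half second: v_i = (d[i - n//2] - d[i]) / dt, with n the frame rate."""
    half = frame_rate // 2
    dt = half / frame_rate          # 0.5 s when frame_rate is even
    return (distances[i - half] - distances[i]) / dt
```

At 30 fps, a lead vehicle closing at a steady 0.1 m per frame yields a speed estimate of 3 m/s, and a one-frame ranging glitch perturbs only one of the two samples rather than the whole estimate.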
7. The deep learning-based vehicle forward collision warning method according to claim 5 or 6, wherein in step S5 the time to collision TTC is calculated as: $TTC = d/v$, where $d$ denotes the inter-vehicle distance obtained in step S3, and $v$ denotes the relative speed obtained in step S4.
8. The deep learning-based vehicle forward collision early warning method according to claim 1, wherein in step S6 the relative driving state of the current vehicle is determined from the time to collision TTC and a corresponding warning is issued; the specific warning scheme is as follows: TTC > 3 s is a safe state; for 1.5 s ≤ TTC ≤ 3 s a prompt warning is issued; and when TTC < 1.5 s an emergency warning is issued and an active braking measure is taken at the same time.
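The three-level threshold scheme of claim 8 maps directly to a small decision function (names and return values are illustrative):

```python
def warning_level(ttc):
    """Map time to collision (seconds) to the three warning states:
    TTC > 3 s safe; 1.5 s <= TTC <= 3 s prompt warning; TTC < 1.5 s
    emergency warning plus active braking."""
    if ttc > 3.0:
        return "safe"
    if ttc >= 1.5:
        return "caution"      # prompt early warning
    return "emergency"        # emergency warning + active braking
```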
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710975371.5A CN107972662B (en) | 2017-10-16 | 2017-10-16 | Vehicle forward collision early warning method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107972662A true CN107972662A (en) | 2018-05-01 |
CN107972662B CN107972662B (en) | 2019-12-10 |
Family
ID=62012502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710975371.5A Active CN107972662B (en) | 2017-10-16 | 2017-10-16 | Vehicle forward collision early warning method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107972662B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105740910A (en) * | 2016-02-02 | 2016-07-06 | 北京格灵深瞳信息技术有限公司 | Vehicle object detection method and device |
CN105844222A (en) * | 2016-03-18 | 2016-08-10 | 上海欧菲智能车联科技有限公司 | System and method for front vehicle collision early warning based on visual sense |
CN106250812A (en) * | 2016-07-15 | 2016-12-21 | 汤平 | A kind of model recognizing method based on quick R CNN deep neural network |
CN106781692A (en) * | 2016-12-01 | 2017-05-31 | 东软集团股份有限公司 | The method of vehicle collision prewarning, apparatus and system |
CN106904143A (en) * | 2015-12-23 | 2017-06-30 | 上海汽车集团股份有限公司 | The guard method of a kind of pedestrian and passenger, system and controller |
CN107169421A (en) * | 2017-04-20 | 2017-09-15 | 华南理工大学 | A kind of car steering scene objects detection method based on depth convolutional neural networks |
Cited By (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108932471A (en) * | 2018-05-23 | 2018-12-04 | 浙江科技学院 | A kind of vehicle checking method |
CN108932471B (en) * | 2018-05-23 | 2020-06-26 | 浙江科技学院 | Vehicle detection method |
CN108759667A (en) * | 2018-05-29 | 2018-11-06 | 福州大学 | Front truck distance measuring method based on monocular vision and image segmentation under vehicle-mounted camera |
CN108639000A (en) * | 2018-06-05 | 2018-10-12 | 上海擎感智能科技有限公司 | Vehicle, vehicle device equipment, car accident prior-warning device and method |
CN108783846A (en) * | 2018-06-13 | 2018-11-13 | 苏州若依玫信息技术有限公司 | A kind of intelligent knapsack control method with function of safety protection |
CN108875641A (en) * | 2018-06-21 | 2018-11-23 | 南京信息工程大学 | The long-term driving alongside recognition methods of highway and system |
CN108875641B (en) * | 2018-06-21 | 2021-10-19 | 南京信息工程大学 | Long-term parallel driving identification method and system for expressway |
CN108944940A (en) * | 2018-06-25 | 2018-12-07 | 大连大学 | Driving behavior modeling method neural network based |
CN109084724A (en) * | 2018-07-06 | 2018-12-25 | 西安理工大学 | A kind of deep learning barrier distance measuring method based on binocular vision |
CN109109859A (en) * | 2018-08-07 | 2019-01-01 | 庄远哲 | A kind of electric car and its distance detection method |
CN109334563A (en) * | 2018-08-31 | 2019-02-15 | 江苏大学 | A kind of anticollision method for early warning based on road ahead pedestrian and bicyclist |
CN109334563B (en) * | 2018-08-31 | 2021-06-22 | 江苏大学 | Anti-collision early warning method based on pedestrians and riders in front of road |
CN109284699A (en) * | 2018-09-04 | 2019-01-29 | 广东翼卡车联网服务有限公司 | A kind of deep learning method being applicable in vehicle collision |
CN109190591A (en) * | 2018-09-20 | 2019-01-11 | 辽宁工业大学 | A kind of front truck identification prior-warning device and identification method for early warning based on camera |
WO2020125138A1 (en) * | 2018-12-16 | 2020-06-25 | 华为技术有限公司 | Object collision prediction method and device |
US11842545B2 (en) | 2018-12-16 | 2023-12-12 | Huawei Technologies Co., Ltd. | Object collision prediction method and apparatus |
CN109693672B (en) * | 2018-12-28 | 2020-11-06 | 百度在线网络技术(北京)有限公司 | Method and device for controlling an unmanned vehicle |
CN109693672A (en) * | 2018-12-28 | 2019-04-30 | 百度在线网络技术(北京)有限公司 | Method and apparatus for controlling pilotless automobile |
CN109787821B (en) * | 2019-01-04 | 2020-06-19 | 华南理工大学 | Intelligent prediction method for large-scale mobile client traffic consumption |
CN109787821A (en) * | 2019-01-04 | 2019-05-21 | 华南理工大学 | A kind of Large-scale Mobile customer traffic consumption intelligent Forecasting |
CN111508124A (en) * | 2019-01-11 | 2020-08-07 | 百度在线网络技术(北京)有限公司 | Authority verification method and device |
CN111507126B (en) * | 2019-01-30 | 2023-04-25 | 杭州海康威视数字技术股份有限公司 | Alarm method and device of driving assistance system and electronic equipment |
CN111507126A (en) * | 2019-01-30 | 2020-08-07 | 杭州海康威视数字技术股份有限公司 | Alarming method and device of driving assistance system and electronic equipment |
CN109910865A (en) * | 2019-02-26 | 2019-06-21 | 辽宁工业大学 | A kind of vehicle early warning brake method based on Internet of Things |
CN109782320A (en) * | 2019-03-13 | 2019-05-21 | 中国人民解放军海军工程大学 | Transport queue localization method and system |
CN110060298B (en) * | 2019-03-21 | 2023-06-20 | 径卫视觉科技(上海)有限公司 | Image-based vehicle position and posture determining system and corresponding method |
CN109951686A (en) * | 2019-03-21 | 2019-06-28 | 山推工程机械股份有限公司 | A kind of engineer machinery operation method for safety monitoring and its monitoring system |
CN110060298A (en) * | 2019-03-21 | 2019-07-26 | 径卫视觉科技(上海)有限公司 | A kind of vehicle location and attitude and heading reference system based on image and corresponding method |
CN110297232A (en) * | 2019-05-24 | 2019-10-01 | 合刃科技(深圳)有限公司 | Monocular distance measuring method, device and electronic equipment based on computer vision |
CN110556024A (en) * | 2019-07-18 | 2019-12-10 | 华瑞新智科技(北京)有限公司 | Anti-collision auxiliary driving method and system and computer readable storage medium |
CN112668361B (en) * | 2019-10-15 | 2024-06-04 | 北京地平线机器人技术研发有限公司 | Alarm accuracy determining method, device and computer readable storage medium |
CN112668361A (en) * | 2019-10-15 | 2021-04-16 | 北京地平线机器人技术研发有限公司 | Alarm accuracy determination method, device and computer readable storage medium |
CN110796103A (en) * | 2019-11-01 | 2020-02-14 | 邵阳学院 | Target based on fast-RCNN and distance detection method thereof |
CN110979318A (en) * | 2019-11-20 | 2020-04-10 | 苏州智加科技有限公司 | Lane information acquisition method and device, automatic driving vehicle and storage medium |
CN110979318B (en) * | 2019-11-20 | 2021-06-04 | 苏州智加科技有限公司 | Lane information acquisition method and device, automatic driving vehicle and storage medium |
CN111241948B (en) * | 2020-01-02 | 2023-10-31 | 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) | Method and system for all-weather ship identification |
CN111241948A (en) * | 2020-01-02 | 2020-06-05 | 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) | Method and system for identifying ship in all weather |
CN113408320A (en) * | 2020-03-16 | 2021-09-17 | 上海博泰悦臻网络技术服务有限公司 | Method, electronic device, and computer storage medium for vehicle collision avoidance |
CN111422190B (en) * | 2020-04-03 | 2021-08-31 | 北京四维智联科技有限公司 | Forward collision early warning method and system for rear car loader |
CN113643355B (en) * | 2020-04-24 | 2024-03-29 | 广州汽车集团股份有限公司 | Target vehicle position and orientation detection method, system and storage medium |
CN113643355A (en) * | 2020-04-24 | 2021-11-12 | 广州汽车集团股份有限公司 | Method and system for detecting position and orientation of target vehicle and storage medium |
CN112242058A (en) * | 2020-05-29 | 2021-01-19 | 北京新能源汽车技术创新中心有限公司 | Target abnormity detection method and device based on traffic monitoring video and storage medium |
CN111975769A (en) * | 2020-07-16 | 2020-11-24 | 华南理工大学 | Mobile robot obstacle avoidance method based on meta-learning |
CN111950488B (en) * | 2020-08-18 | 2022-07-19 | 山西大学 | Improved Faster-RCNN remote sensing image target detection method |
CN111950488A (en) * | 2020-08-18 | 2020-11-17 | 山西大学 | Improved fast-RCNN remote sensing image target detection method |
CN111950483A (en) * | 2020-08-18 | 2020-11-17 | 北京理工大学 | Vision-based vehicle front collision prediction method |
CN112183370A (en) * | 2020-09-29 | 2021-01-05 | 爱动超越人工智能科技(北京)有限责任公司 | Fork truck anti-collision early warning system and method based on AI vision |
CN112417953A (en) * | 2020-10-12 | 2021-02-26 | 腾讯科技(深圳)有限公司 | Road condition detection and map data updating method, device, system and equipment |
CN112265546A (en) * | 2020-10-26 | 2021-01-26 | 吉林大学 | Networked automobile speed prediction method based on time-space sequence information |
CN112550272A (en) * | 2020-12-14 | 2021-03-26 | 重庆大学 | Intelligent hybrid electric vehicle hierarchical control method based on visual perception and deep reinforcement learning |
CN112908034A (en) * | 2021-01-15 | 2021-06-04 | 中山大学南方学院 | Intelligent bus safe driving behavior auxiliary supervision system and control method |
CN112896042A (en) * | 2021-03-02 | 2021-06-04 | 广州通达汽车电气股份有限公司 | Vehicle driving early warning method, device, equipment and storage medium |
CN113792598A (en) * | 2021-08-10 | 2021-12-14 | 西安电子科技大学广州研究院 | Vehicle-mounted camera-based vehicle collision prediction system and method |
CN114228614A (en) * | 2021-12-29 | 2022-03-25 | 阿波罗智联(北京)科技有限公司 | Vehicle alarm method and device, electronic equipment and storage medium |
CN114802225A (en) * | 2022-03-04 | 2022-07-29 | 湖北国际物流机场有限公司 | Airplane guide vehicle control method and system and electronic equipment |
CN114802225B (en) * | 2022-03-04 | 2024-01-12 | 湖北国际物流机场有限公司 | Control method and system for aircraft guided vehicle and electronic equipment |
CN117734683A (en) * | 2024-02-19 | 2024-03-22 | 中国科学院自动化研究所 | Underground vehicle anti-collision safety early warning decision-making method |
CN117734683B (en) * | 2024-02-19 | 2024-05-24 | 中国科学院自动化研究所 | Underground vehicle anti-collision safety early warning decision-making method |
Also Published As
Publication number | Publication date |
---|---|
CN107972662B (en) | 2019-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107972662B (en) | Vehicle forward collision early warning method based on deep learning | |
CN107609522B (en) | Information fusion vehicle detection system based on laser radar and machine vision | |
Kim et al. | Robust lane detection based on convolutional neural network and random sample consensus | |
RU2767955C1 (en) | Methods and systems for determining the presence of dynamic objects by a computer | |
CN110745140B (en) | Vehicle lane change early warning method based on continuous image constraint pose estimation | |
CN108638999B (en) | Anti-collision early warning system and method based on 360-degree look-around input | |
CN109919074B (en) | Vehicle sensing method and device based on visual sensing technology | |
CN112154455B (en) | Data processing method, equipment and movable platform | |
CN113044059A (en) | Safety system for a vehicle | |
CN111292366B (en) | Visual driving ranging algorithm based on deep learning and edge calculation | |
CN111986128A (en) | Off-center image fusion | |
CN108830131B (en) | Deep learning-based traffic target detection and ranging method | |
CN110969064A (en) | Image detection method and device based on monocular vision and storage equipment | |
CN114495064A (en) | Monocular depth estimation-based vehicle surrounding obstacle early warning method | |
CN112116809A (en) | Non-line-of-sight vehicle anti-collision method and device based on V2X technology | |
CN114119955A (en) | Method and device for detecting potential dangerous target | |
CN113870246A (en) | Obstacle detection and identification method based on deep learning | |
EP4148599A1 (en) | Systems and methods for providing and using confidence estimations for semantic labeling | |
CN116587978A (en) | Collision early warning method and system based on vehicle-mounted display screen | |
CN113895439B (en) | Automatic driving lane change behavior decision method based on probability fusion of vehicle-mounted multisource sensors | |
KR20200140979A (en) | Method, Apparatus for controlling vehicle, and system including it | |
Lai et al. | Sensor fusion of camera and MMW radar based on machine learning for vehicles | |
CN114282776A (en) | Method, device, equipment and medium for cooperatively evaluating automatic driving safety of vehicle and road | |
CN116635919A (en) | Object tracking device and object tracking method | |
CN114049542A (en) | Fusion positioning method based on multiple sensors in dynamic scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||