CN107972662B - Vehicle forward collision early warning method based on deep learning - Google Patents


Publication number: CN107972662B
Authority: CN (China)
Prior art keywords: vehicle, distance, camera, early warning, image
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: CN201710975371.5A
Other languages: Chinese (zh)
Other versions: CN107972662A
Inventors: 周智恒, 曹前
Current Assignee: South China University of Technology SCUT
Original Assignee: South China University of Technology SCUT
Application filed by South China University of Technology SCUT; priority to CN201710975371.5A. Publication of CN107972662A; application granted; publication of CN107972662B. Current legal status: Active.

Classifications

    • B60W30/08 — Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/16 — Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
    • G06F18/24 — Classification techniques
    • G06N3/045 — Combinations of networks
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V20/584 — Recognition of moving objects or obstacles (e.g. vehicles or pedestrians) or traffic objects, of vehicle lights or traffic lights
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30252 — Vehicle exterior; Vicinity of vehicle

Abstract

The invention discloses a vehicle forward collision early warning method based on deep learning, which comprises the following steps: acquiring image information in front of the vehicle in real time through a vehicle-mounted camera and preprocessing the images; extracting vehicle features from the images with a multi-scale deep convolutional neural network to identify and locate vehicle targets; calculating the distance between the current vehicle and the front vehicle from the vehicle position in the image, based on geometric projection relations and the camera parameters; calculating the relative speed from the change of the real-time distance between the host vehicle and the front vehicle; and calculating the time to collision from the relative speed and the inter-vehicle distance, judging the danger level of the vehicle from the result, and invoking the corresponding collision early warning strategy. The method can acquire information about the front vehicle in real time, accurately estimate the distance to the front vehicle and the time to collision, take appropriate early warning measures, ensure safe driving and reduce traffic accidents.

Description

Vehicle forward collision early warning method based on deep learning
Technical Field
The invention relates to the technical fields of pattern recognition and automobile active safety, and in particular to a vehicle forward collision early warning method based on deep learning.
Background
With the gradual improvement of living standards, more and more people buy cars as a means of transport, and the number of cars on the road grows daily. Traffic safety has therefore become an increasingly serious problem and receives ever more attention. Traffic accidents occur frequently, caused both by driver carelessness and by the insufficient safety performance of automobiles.
A driver assistance system is an active safety system: it monitors road information and the driver's state in real time, judges whether the distance to a vehicle or pedestrian ahead makes a collision likely, and in an emergency can even take over control of the vehicle and brake actively, so that accidents are avoided. What people need now is a system that prevents accidents, not merely one that reduces injuries after an accident has occurred; a forward collision early warning system for automobiles therefore has great research significance and development prospects.
At present, technologies such as radar, laser and ultrasound are commonly used in active safety systems, but they require expensive equipment, which hinders widespread adoption across all vehicles. More than 80% of the environmental information needed to drive a car is obtained through vision; compared with other sensors, a vision sensor can better imitate the behavior of the driver and has unmatched advantages.
Existing vision-based vehicle forward warning technologies include license-plate ranging, vehicle-shadow ranging and the like, and mainly identify vehicles by manually extracting vehicle characteristics such as color, texture, contour and geometric features before measuring distance. Manual feature extraction is time-consuming and labor-intensive, cannot fully exploit the vehicle information, cannot reach high detection accuracy when the vehicle is occluded or the image is blurred by bad weather, and has poor robustness.
Deep learning approximates complex functions and learns representations of the input data through a deep nonlinear network structure, and shows a strong ability to learn the essential features of a data set. The convolutional neural network, a typical deep learning method, is a multi-layer perceptron designed specifically for two-dimensional image processing. A convolutional neural network requires no manual participation in feature selection and can automatically learn target features from large data sets. Its weight sharing and local connection mechanisms give it advantages over traditional techniques: a degree of invariance to geometric transformation, deformation and illumination, together with good fault tolerance, parallel processing ability and self-learning ability. These advantages make convolutional neural networks well suited to problems with complex environmental information and uncertain inference rules, and tolerant of scale change, rotation and deformation of the vehicle.
R-CNN (Regions with Convolutional Neural Network features) obtains proposal regions of different sizes through a region-of-interest extraction step, then scales each proposal to a fixed size before feeding it into a convolutional neural network; however, the method is computationally expensive and cannot meet real-time requirements. Faster R-CNN achieves end-to-end detection with a region proposal network, but it generates region proposals for multi-scale targets by sliding a single set of convolution kernels over a fixed feature map, creating a contradiction between the variable target size and the fixed receptive field of that feature map, so it cannot adapt to the diversity of target sizes in a real environment. A convolutional neural network adapted to multi-scale targets is therefore needed to solve end-to-end automatic vehicle detection in a vehicle forward collision early warning system, ensure that the system obtains vehicle information accurately and stably, and realize real-time, accurate collision warning.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a vehicle forward collision early warning method based on deep learning which improves the accuracy of vehicle detection, ensures the reliability of the distance measurement, effectively realizes real-time collision warning and reduces the occurrence of traffic accidents.
The purpose of the invention is realized by the following technical scheme: a vehicle forward collision early warning method based on deep learning comprises the following steps:
S1, acquiring image information in front of the vehicle in real time through a vehicle-mounted camera; during image acquisition, a CCD camera mounted at the front of the vehicle interior captures the road image ahead of the vehicle at a fixed frequency f;
S2, extracting vehicle features in the image by using a multi-scale deep convolutional neural network, realizing the identification and positioning of the vehicle target, and marking the front vehicle with a rectangular frame;
S3, calculating the distance between the current vehicle and the front vehicle based on the geometric projection relation and the camera parameters, according to the vehicle position in the image;
S4, calculating the relative speed according to the change of the real-time distance between the host vehicle and the front vehicle;
and S5, calculating the time to collision according to the relative speed and the distance between the two vehicles, and judging the danger level of the current vehicle state according to the calculation result.
Preferably, in step S2 the overall framework of the multi-scale deep convolutional neural network is: a VGG model as the main line, to which several region proposal sub-networks for extracting regions of interest and a detection sub-network for classification and position refinement are added.
Furthermore, the VGG model comprises 13 convolutional layers and 3 fully connected layers, 16 layers in total; the region proposal sub-networks branch off from convolutional layer 4-3, convolutional layer 5-3, convolutional layer 6 and max pooling layer 6 of the VGG network respectively; each region proposal sub-network simultaneously predicts whether a region contains an object and regresses the target boundary, outputting n proposal boxes of interest in total; the detection sub-network first reduces the redundancy of the n extracted proposal boxes with non-maximum suppression at a threshold of 0.7, leaving about 2000 proposal regions per image; each proposal region is downsampled to a uniform size by an ROI pooling layer and, after the fully connected layers, softmax classification and position refinement are finally performed respectively.
Preferably, in step S2 the offline training process is as follows:
1. Model weights are initialized from a VGG network pre-trained on ImageNet; only the region proposal sub-networks are trained, iterating 10000 times at a learning rate of 0.00005 to generate a model;
2. The generated model is used to initialize the second stage, which iterates at an initial learning rate of 0.00005, reduced by a factor of 10 after 10000 iterations, for 25000 iterations in total;
The learning process of the two stages realizes stable multi-task training. A multi-task loss is employed, where the loss function is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*)$$

where $i$ is the index of a proposed region in a batch, $N_{cls}$ is the normalization coefficient of the classification layer, $N_{reg}$ is the normalization coefficient of the regression layer, $\lambda$ is the balancing weight, $p_i$ is the predicted probability of a vehicle target, and $p_i^*$ is the ground-truth label, i.e. $p_i^* = 1$ if the candidate region is positive and $p_i^* = 0$ if it is negative; $t_i$ is a vector representing the 4 parameterized coordinates of the predicted bounding box, and $t_i^*$ is the coordinate vector of the real bounding box corresponding to a positive candidate region.

$L_{cls}$ is the classification log loss:

$$L_{cls}(p_i, p_i^*) = -\left[p_i^* \log p_i + (1 - p_i^*)\log(1 - p_i)\right]$$

$L_{reg}$ is the regression loss, where $R$ is the smooth L1 error:

$$L_{reg}(t_i, t_i^*) = \sum_{j\in\{x,y,w,h\}} R\!\left(t_{i,j} - t_{i,j}^*\right),\qquad R(x) = \begin{cases}0.5\,x^2, & |x| < 1\\ |x| - 0.5, & \text{otherwise}\end{cases}$$

$t_i$ and $t_i^*$ are calculated as follows:

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a)$$

$$t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a)$$

where $x, y, w, h$ denote the center coordinates, width and height of the predicted bounding box, $x_a, y_a, w_a, h_a$ the center coordinates, width and height of the candidate region, and $x^*, y^*, w^*, h^*$ the center coordinates, width and height of the real bounding box.
Preferably, in step S3 a geometric distance measurement method based on fixed camera parameters is adopted; the specific steps comprise:
S31: fixing a monocular camera with effective focal length f at the front of the vehicle, and measuring its height h above the ground;
S32: the geometric ranging algorithm based on camera projection and parameter calibration computes the inter-vehicle distance from the geometric coordinates formed by the road surface and the vehicle body, together with the preset parameters of the CCD camera; when the automobile runs on a horizontal road surface the projection model is an ideal geometric model, and assuming the camera's optical axis is parallel to the ground, the distance between the current vehicle and the front vehicle is obtained through geometric relations and proportional derivation:

$$d = \frac{f \cdot h}{y - y_0}$$

where $f$ denotes the effective focal length of the camera, $h$ the mounting height of the camera, $(x_0, y_0)$ the intersection of the optical axis with the image plane, and $(x, y)$ the image-plane coordinates of the point $p$ where the front vehicle meets the road surface.
Preferably, in step S4 the speed at the i-th frame is estimated from the average speed over the 1 second preceding that frame, computed by the successive-difference method:

$$v_i = \frac{2}{n}\sum_{k=1}^{n/2} \frac{d_{i-n+k} - d_{i-n/2+k}}{\Delta t}$$

where $n$ is the video frame rate, $d_i$ is the distance between the detected front vehicle and the host vehicle at the i-th frame, and $\Delta t$ is the time interval between the $(i-n+k)$-th and $(i-n/2+k)$-th frames; since the paired frames are $n/2$ frames apart, evidently $\Delta t = 0.5$ s. This method prevents the ranging error of any single frame from affecting the TTC, keeps the TTC value from fluctuating violently, and increases the accuracy of the early warning system.
Preferably, in step S5 the time to collision TTC is calculated as $TTC = d / v$, where $d$ represents the inter-vehicle distance obtained in step S3 and $v$ represents the relative speed obtained in step S4.
Preferably, in step S5 the current relative driving state of the vehicle is judged from the time to collision TTC and the corresponding warning is given; the specific vehicle warning scheme is: TTC > 3 s is a safe state; when 1.5 s < TTC < 3 s, a prompt warning is issued; and when TTC < 1.5 s, an emergency warning is issued and an active braking measure is taken at the same time.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention performs vehicle anti-collision warning using only camera equipment; compared with radar, laser, ultrasonic and similar technologies, this saves hardware cost.
2. The invention adopts a multi-scale deep convolutional neural network, realizing a vehicle forward collision early warning framework that remains accurate and robust under diversity of vehicle target size, shape, illumination and background, and improves the stability of the algorithm and the user experience compared with traditional methods.
drawings
Fig. 1 is a schematic flow chart of a vehicle forward collision warning method based on deep learning according to an embodiment.
FIG. 2 is a block diagram of an embodiment of a multi-scale deep convolutional neural network.
FIG. 3 is a schematic diagram of a region suggestion subnetwork of the multi-scale deep convolutional neural network of the embodiment.
FIG. 4 is a diagram of a ranging geometry model according to an embodiment.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example 1
Fig. 1 is a schematic flow chart of a vehicle forward collision warning method based on deep learning. In this embodiment, the vehicle forward collision early warning method based on deep learning includes:
S1, acquiring image information in front of the vehicle in real time through a vehicle-mounted camera; during image acquisition, a CCD camera mounted at the front of the vehicle interior captures the road image ahead of the vehicle at a fixed frequency f.
S2, extracting the vehicle features in the image by using a multi-scale deep convolutional neural network, realizing the recognition and positioning of the vehicle target, and marking the front vehicle with a rectangular frame.
S3, calculating the distance between the current vehicle and the front vehicle based on the geometric projection relation and the camera parameters, according to the vehicle position in the image.
S4, calculating the relative speed according to the change of the real-time distance between the host vehicle and the front vehicle.
S5, calculating the time to collision (TTC) according to the relative speed and the distance between the two vehicles, and judging the danger level of the current vehicle state according to the calculation result.
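The S1–S5 loop above can be sketched in code. This is a minimal illustration only: the `detector` callable, the `camera` parameter dictionary and all interface names are assumptions for the sketch, not part of the patented method.

```python
def forward_collision_pipeline(frame, detector, camera, prev_distances, frame_rate):
    """One iteration of the S1-S5 early-warning loop (hypothetical interfaces)."""
    # S2: detect vehicles; detector returns (x1, y1, x2, y2) boxes
    boxes = detector(frame)
    if not boxes:
        return None
    # take the box whose bottom edge is lowest in the image (nearest vehicle)
    x1, y1, x2, y2 = max(boxes, key=lambda b: b[3])
    # S3: monocular distance d = f*h / (y - y0), using the road-contact row y2
    d = camera["f"] * camera["h"] / (y2 - camera["y0"])
    prev_distances.append(d)
    # S4: relative speed from the distance one second ago (positive = closing)
    if len(prev_distances) > frame_rate:
        v = (prev_distances[-frame_rate - 1] - d) / 1.0
    else:
        v = 0.0
    # S5: time to collision and danger level
    ttc = d / v if v > 0 else float("inf")
    if ttc > 3.0:
        level = "safe"
    elif ttc > 1.5:
        level = "warning"
    else:
        level = "emergency"
    return d, v, ttc, level
```

In practice the detector would be the multi-scale network of step S2 and the camera parameters would come from calibration.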
In step S2, the overall framework of the multi-scale deep convolutional neural network is shown in fig. 2: a VGG model as the main line, to which several region proposal sub-networks for extracting regions of interest and a detection sub-network for classification and position refinement are added.
The VGG model comprises 13 convolutional layers and 3 fully connected layers, 16 layers in total.
The region proposal sub-networks, shown in FIG. 3, branch off from convolutional layer 4-3, convolutional layer 5-3, convolutional layer 6 and max pooling layer 6 of the VGG network respectively. Each region proposal sub-network simultaneously predicts whether a region contains an object and regresses the target boundary; n proposal boxes of interest are output in total.
The detection sub-network first reduces the redundancy of the n extracted proposal boxes with non-maximum suppression at a threshold of 0.7, leaving about 2000 proposal regions per image. Each proposal region is downsampled to a uniform size by an ROI pooling layer and, after the fully connected layers, softmax classification and position refinement are finally performed respectively.
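The non-maximum suppression step used by the detection sub-network can be illustrated with a standard greedy IoU-based implementation; this is a generic sketch of the technique, not the patent's own code:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.7):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2) rows."""
    order = np.argsort(scores)[::-1]          # highest-scoring box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the winning box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # discard boxes overlapping the winner by more than the threshold
        order = order[1:][iou <= iou_threshold]
    return keep
```

With the 0.7 threshold named in the text, only near-duplicate proposals are suppressed, which is how the n raw proposals are thinned to roughly 2000 per image.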
In step S2, the offline training process is as follows:
1. Model weights are initialized from a VGG network pre-trained on ImageNet. Only the region proposal sub-networks are trained, iterating 10000 times at a learning rate of 0.00005 to generate a model.
2. The generated model is used to initialize the second stage, which iterates at an initial learning rate of 0.00005, reduced by a factor of 10 after 10000 iterations, for 25000 iterations in total.
The two-stage learning process achieves stable multi-task training. A multi-task loss is employed, where the loss function is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*)$$

where $i$ is the index of a proposed region in a batch, $N_{cls}$ is the normalization coefficient of the classification layer, $N_{reg}$ is the normalization coefficient of the regression layer, $\lambda$ is the balancing weight, $p_i$ is the predicted probability of a vehicle target, and $p_i^*$ is the ground-truth label, i.e. $p_i^* = 1$ if the candidate region is positive and $p_i^* = 0$ if it is negative; $t_i$ is a vector representing the 4 parameterized coordinates of the predicted bounding box, and $t_i^*$ is the coordinate vector of the real bounding box corresponding to a positive candidate region.

$L_{cls}$ is the classification log loss:

$$L_{cls}(p_i, p_i^*) = -\left[p_i^* \log p_i + (1 - p_i^*)\log(1 - p_i)\right]$$

$L_{reg}$ is the regression loss, where $R$ is the smooth L1 error:

$$L_{reg}(t_i, t_i^*) = \sum_{j\in\{x,y,w,h\}} R\!\left(t_{i,j} - t_{i,j}^*\right),\qquad R(x) = \begin{cases}0.5\,x^2, & |x| < 1\\ |x| - 0.5, & \text{otherwise}\end{cases}$$

$t_i$ and $t_i^*$ are calculated as follows:

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a)$$

$$t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a)$$

where $x, y, w, h$ denote the center coordinates, width and height of the predicted bounding box, $x_a, y_a, w_a, h_a$ the center coordinates, width and height of the candidate region, and $x^*, y^*, w^*, h^*$ the center coordinates, width and height of the real bounding box.
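The box parameterization and the smooth L1 regression error can be checked numerically with a few small helpers (the function names are hypothetical, introduced only for this illustration):

```python
import math

def encode_box(box, anchor):
    """Parameterize a box (cx, cy, w, h) relative to a candidate region,
    giving (t_x, t_y, t_w, t_h) as in the regression targets."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))

def smooth_l1(x):
    """Smooth L1 error R(x): quadratic near zero, linear beyond |x| = 1."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def reg_loss(t, t_star):
    """L_reg: sum of smooth-L1 errors over the four box coordinates."""
    return sum(smooth_l1(a - b) for a, b in zip(t, t_star))
```

Dividing the offsets by the anchor width and height, and taking logs of the size ratios, makes the targets roughly scale-invariant, which is why this parameterization suits multi-scale proposals.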
In step S3, a geometric distance measurement method based on fixed camera parameters is used. The specific steps are as follows:
S31: fixing a monocular camera with effective focal length f at the front of the vehicle, and measuring its height h above the ground.
S32: the geometric ranging algorithm based on camera projection and parameter calibration computes the inter-vehicle distance from the geometric coordinates formed by the road surface and the vehicle body, together with the preset parameters of the CCD camera. When the automobile runs on a horizontal road surface the projection model is an ideal geometric model and the camera's optical axis is assumed parallel to the ground; a schematic diagram of vehicle distance estimation in this case is shown in fig. 4. The distance between the current vehicle and the front vehicle is obtained through geometric relations and proportional derivation:

$$d = \frac{f \cdot h}{y - y_0}$$

where $f$ denotes the effective focal length of the camera, $h$ the mounting height of the camera, $(x_0, y_0)$ the intersection of the optical axis with the image plane, and $(x, y)$ the image-plane coordinates of the point $p$ where the front vehicle meets the road surface.
In step S4, the speed at the i-th frame is estimated from the average speed over the 1 second preceding that frame, computed by the successive-difference method:

$$v_i = \frac{2}{n}\sum_{k=1}^{n/2} \frac{d_{i-n+k} - d_{i-n/2+k}}{\Delta t}$$

where $n$ is the video frame rate, $d_i$ is the distance between the detected front vehicle and the host vehicle at the i-th frame, and $\Delta t$ is the time interval between the $(i-n+k)$-th and $(i-n/2+k)$-th frames; since the paired frames are $n/2$ frames apart, evidently $\Delta t = 0.5$ s. This method prevents the ranging error of any single frame from affecting the TTC, keeps the TTC value from fluctuating violently, and increases the accuracy of the early warning system.
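The successive-difference averaging can be sketched as below. The pairing shown (each of the oldest samples matched with the sample n/2 frames later, 0.5 s apart) is a plausible reading of the method, with an even frame rate n assumed:

```python
def relative_speed(distances, frame_rate):
    """Average closing speed over the last second of distance samples.
    Pairs samples 0.5 s apart and averages the per-pair speeds, so a
    single noisy ranging result cannot dominate the estimate."""
    n = frame_rate
    if len(distances) < n + 1:
        raise ValueError("need at least one second of distance history")
    recent = distances[-(n + 1):]          # d_{i-n} ... d_i
    half = n // 2
    dt = half / frame_rate                 # 0.5 s between paired frames
    pairs = [(recent[k] - recent[k + half]) / dt for k in range(half + 1)]
    return sum(pairs) / len(pairs)         # positive when the gap is closing
```

With a front vehicle closing steadily at 1 m/s, every pair yields 1 m/s and so does the average, regardless of which single frame is inspected.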
In step S5, the time to collision TTC is calculated as $TTC = d / v$, where $d$ represents the inter-vehicle distance obtained in step S3 and $v$ represents the relative speed obtained in step S4.
In step S5, the current relative driving state of the vehicle is judged from the time to collision TTC and the corresponding warning is given. Related research shows that, for a front-vehicle collision, a warning given 2.5 seconds in advance basically allows the vehicle to brake to a stop, taking the driver's reaction time and the braking distance into account, and so prevents the accident. In view of this, the warning threshold is further relaxed and set at 3 seconds. The specific vehicle warning scheme is: TTC > 3 s is a safe state; when 1.5 s < TTC < 3 s, a prompt warning is issued; and when TTC < 1.5 s, an emergency warning is issued and an active braking measure is taken at the same time.
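The TTC computation and the three danger levels can be expressed as one small decision function (a sketch; the function name and level labels are illustrative):

```python
def warning_level(distance, speed):
    """Map TTC = d / v to the three danger levels of step S5."""
    if speed <= 0:                  # not closing: no collision predicted
        return float("inf"), "safe"
    ttc = distance / speed
    if ttc > 3.0:
        return ttc, "safe"
    if ttc > 1.5:
        return ttc, "warning"       # prompt early warning
    return ttc, "emergency"         # emergency warning + active braking
```

Guarding the non-closing case (speed ≤ 0) avoids a division by zero and a spurious warning when the front vehicle is pulling away.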
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (6)

1. A vehicle forward collision early warning method based on deep learning is characterized by comprising the following steps:
S1, acquiring image information in front of the vehicle in real time through a vehicle-mounted camera; during image acquisition, a CCD camera mounted at the front of the vehicle interior captures the road image ahead of the vehicle at a fixed frequency f;
S2, extracting vehicle features in the image by using a multi-scale deep convolutional neural network, realizing the identification and positioning of the vehicle target, and marking the front vehicle with a rectangular frame;
The overall framework of the multi-scale deep convolutional neural network is: a VGG model as the main line, to which several region proposal sub-networks for extracting regions of interest and a detection sub-network for classification and position refinement are added; the VGG model comprises 13 convolutional layers and 3 fully connected layers, 16 layers in total; the region proposal sub-networks branch off from convolutional layer 4-3, convolutional layer 5-3, convolutional layer 6 and max pooling layer 6 of the VGG network respectively; each region proposal sub-network simultaneously predicts whether a region contains an object and regresses the target boundary, outputting n proposal boxes of interest in total; the detection sub-network first reduces the redundancy of the n extracted proposal boxes with non-maximum suppression at a threshold of 0.7, leaving about 2000 proposal regions per image; each proposal region is downsampled to a uniform size by an ROI pooling layer and, after the fully connected layers, softmax classification and position refinement are finally performed respectively;
S3, calculating the distance between the current vehicle and the front vehicle based on the geometric projection relation and the camera parameters, according to the vehicle position in the image;
S4, calculating the relative speed according to the change of the real-time distance between the host vehicle and the front vehicle;
and S5, calculating the time to collision TTC according to the relative speed and the distance between the two vehicles, and judging the danger level of the current vehicle state according to the calculation result.
2. The deep learning-based vehicle forward collision warning method as claimed in claim 1, wherein in step S2, the off-line training process is as follows:
(1) Model weights are initialized from a VGG network pre-trained on ImageNet; only the region proposal sub-networks are trained, iterating 10000 times at a learning rate of 0.00005 to generate a model;
(2) The generated model is used to initialize the second stage, which iterates at an initial learning rate of 0.00005, reduced by a factor of 10 after 10000 iterations, for 25000 iterations in total;
The learning process of the two stages realizes stable multi-task training; a multi-task loss is employed, where the loss function is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*)$$

where $i$ is the index of a proposed region in a batch, $N_{cls}$ is the normalization coefficient of the classification layer, $N_{reg}$ is the normalization coefficient of the regression layer, $\lambda$ is the balancing weight, $p_i$ is the predicted probability of a vehicle target, and $p_i^*$ is the ground-truth label, i.e. $p_i^* = 1$ if the candidate region is positive and $p_i^* = 0$ if it is negative; $t_i$ is a vector representing the 4 parameterized coordinates of the predicted bounding box, and $t_i^*$ is the coordinate vector of the real bounding box corresponding to a positive candidate region;

$L_{cls}$ is the classification log loss:

$$L_{cls}(p_i, p_i^*) = -\left[p_i^* \log p_i + (1 - p_i^*)\log(1 - p_i)\right]$$

$L_{reg}$ is the regression loss, where $R$ is the smooth L1 error:

$$L_{reg}(t_i, t_i^*) = \sum_{j\in\{x,y,w,h\}} R\!\left(t_{i,j} - t_{i,j}^*\right),\qquad R(x) = \begin{cases}0.5\,x^2, & |x| < 1\\ |x| - 0.5, & \text{otherwise}\end{cases}$$

$t_i$ and $t_i^*$ are calculated as follows:

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a)$$

$$t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a)$$

where $x, y, w, h$ denote the center coordinates, width and height of the predicted bounding box, $x_a, y_a, w_a, h_a$ the center coordinates, width and height of the candidate region, and $x^*, y^*, w^*, h^*$ the center coordinates, width and height of the real bounding box.
3. The vehicle forward collision warning method based on deep learning as claimed in claim 1, wherein in step S3, a geometric ranging method based on fixed camera parameters is adopted, and the specific steps include:
S31: fixing a monocular camera with an effective focal length f at the front of the vehicle, and measuring the height h of the camera above the ground;
S32: a geometric ranging algorithm based on camera projection and parameter calibration calculates the inter-vehicle distance from the geometry formed by the road surface and the vehicle body, combined with the preset parameters of the CCD camera; when the vehicle travels on a level road, the projection model is an ideal geometric model; assuming the camera's optical axis is parallel to the ground, the distance between the host vehicle and the preceding vehicle is obtained from the geometric relationship and proportions as d = f·h/(y − y0), where f denotes the effective focal length of the camera, h denotes the mounting height of the camera, (x0, y0) denotes the intersection of the optical axis with the image plane, and (x, y) denotes the image-plane coordinate of a point p on the road surface.
4. The deep learning-based vehicle forward collision warning method as claimed in claim 1, wherein in step S4, the speed at the ith frame is estimated as the average speed over the 1 s preceding that frame, computed by the successive-difference method: v̄ = (2/n)·Σ_{j=1}^{n/2} (d_{i−n+j} − d_{i−n/2+j})/t, where n is the video frame rate, d_k is the distance between the detected preceding vehicle and the host vehicle in the kth frame, and t is the time interval between two frames that are n/2 frames apart; since the averaging window is 1 s, obviously t = 0.5 s; this method avoids the influence of a ranging error in any single frame on the TTC, keeps the value of the TTC from fluctuating violently, and increases the accuracy of the early warning system.
5. The deep learning-based vehicle forward collision warning method according to claim 3 or 4, wherein in step S5, the time to collision TTC is calculated as TTC = d/v̄, where d denotes the inter-vehicle distance obtained in step S3, and v̄ denotes the relative speed obtained in step S4.
6. The deep learning-based vehicle forward collision warning method according to claim 1, wherein in step S6, the current relative driving state of the vehicle is judged according to the time to collision TTC, and a corresponding warning is given; the specific warning scheme is as follows: when TTC > 3 s, the state is safe; when 1.5 s < TTC < 3 s, a prompt early warning is issued; and when TTC < 1.5 s, an emergency early warning is issued and an active braking measure is simultaneously applied.
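The TTC thresholds of claims 5 and 6 can be combined into one sketch (the function name and the handling of the exact boundary values are illustrative choices, as the claims leave TTC = 1.5 s and TTC = 3 s unspecified):

```python
def warning_level(distance, rel_speed):
    """Map TTC = d / v to the three warning states described in the claims.

    distance  : inter-vehicle distance d in meters (step S3)
    rel_speed : closing speed in m/s (step S4); non-positive means the
                gap is opening, so no collision course exists.
    """
    if rel_speed <= 0:
        return "safe"
    ttc = distance / rel_speed
    if ttc > 3.0:
        return "safe"
    if ttc > 1.5:
        return "caution"      # prompt early warning
    return "emergency"        # emergency early warning + active braking

level = warning_level(10.0, 5.0)  # TTC = 2 s, falls in the caution band
```

Guarding against non-positive relative speed avoids a division by zero and matches the intuition that a receding lead vehicle poses no forward-collision risk.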
CN201710975371.5A 2017-10-16 2017-10-16 Vehicle forward collision early warning method based on deep learning Active CN107972662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710975371.5A CN107972662B (en) 2017-10-16 2017-10-16 Vehicle forward collision early warning method based on deep learning


Publications (2)

Publication Number Publication Date
CN107972662A CN107972662A (en) 2018-05-01
CN107972662B true CN107972662B (en) 2019-12-10

Family

ID=62012502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710975371.5A Active CN107972662B (en) 2017-10-16 2017-10-16 Vehicle forward collision early warning method based on deep learning

Country Status (1)

Country Link
CN (1) CN107972662B (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932471B (en) * 2018-05-23 2020-06-26 浙江科技学院 Vehicle detection method
CN108759667B (en) * 2018-05-29 2019-11-12 福州大学 Front truck distance measuring method under vehicle-mounted camera based on monocular vision and image segmentation
CN108639000A (en) * 2018-06-05 2018-10-12 上海擎感智能科技有限公司 Vehicle, vehicle device equipment, car accident prior-warning device and method
CN108783846A (en) * 2018-06-13 2018-11-13 苏州若依玫信息技术有限公司 A kind of intelligent knapsack control method with function of safety protection
CN108875641B (en) * 2018-06-21 2021-10-19 南京信息工程大学 Long-term parallel driving identification method and system for expressway
CN108944940B (en) * 2018-06-25 2020-05-19 大连大学 Driver behavior modeling method based on neural network
CN109084724A (en) * 2018-07-06 2018-12-25 西安理工大学 A kind of deep learning barrier distance measuring method based on binocular vision
CN109109859A (en) * 2018-08-07 2019-01-01 庄远哲 A kind of electric car and its distance detection method
CN109334563B (en) * 2018-08-31 2021-06-22 江苏大学 Anti-collision early warning method based on pedestrians and riders in front of road
CN109284699A (en) * 2018-09-04 2019-01-29 广东翼卡车联网服务有限公司 A kind of deep learning method being applicable in vehicle collision
CN109190591A (en) * 2018-09-20 2019-01-11 辽宁工业大学 A kind of front truck identification prior-warning device and identification method for early warning based on camera
CN109472251B (en) 2018-12-16 2022-04-05 华为技术有限公司 Object collision prediction method and device
CN109693672B (en) * 2018-12-28 2020-11-06 百度在线网络技术(北京)有限公司 Method and device for controlling an unmanned vehicle
CN109787821B (en) * 2019-01-04 2020-06-19 华南理工大学 Intelligent prediction method for large-scale mobile client traffic consumption
CN111508124A (en) * 2019-01-11 2020-08-07 百度在线网络技术(北京)有限公司 Authority verification method and device
CN111507126B (en) * 2019-01-30 2023-04-25 杭州海康威视数字技术股份有限公司 Alarm method and device of driving assistance system and electronic equipment
CN109910865B (en) * 2019-02-26 2021-05-28 临沂合力电子有限公司 Vehicle early warning braking method based on Internet of things
CN109782320A (en) * 2019-03-13 2019-05-21 中国人民解放军海军工程大学 Transport queue localization method and system
CN110060298B (en) * 2019-03-21 2023-06-20 径卫视觉科技(上海)有限公司 Image-based vehicle position and posture determining system and corresponding method
CN109951686A (en) * 2019-03-21 2019-06-28 山推工程机械股份有限公司 A kind of engineer machinery operation method for safety monitoring and its monitoring system
CN110297232A (en) * 2019-05-24 2019-10-01 合刃科技(深圳)有限公司 Monocular distance measuring method, device and electronic equipment based on computer vision
CN110556024B (en) * 2019-07-18 2021-02-23 华瑞新智科技(北京)有限公司 Anti-collision auxiliary driving method and system and computer readable storage medium
CN110796103A (en) * 2019-11-01 2020-02-14 邵阳学院 Target based on fast-RCNN and distance detection method thereof
CN110979318B (en) * 2019-11-20 2021-06-04 苏州智加科技有限公司 Lane information acquisition method and device, automatic driving vehicle and storage medium
CN111241948B (en) * 2020-01-02 2023-10-31 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) Method and system for all-weather ship identification
CN113408320A (en) * 2020-03-16 2021-09-17 上海博泰悦臻网络技术服务有限公司 Method, electronic device, and computer storage medium for vehicle collision avoidance
CN111422190B (en) * 2020-04-03 2021-08-31 北京四维智联科技有限公司 Forward collision early warning method and system for rear car loader
CN113643355B (en) * 2020-04-24 2024-03-29 广州汽车集团股份有限公司 Target vehicle position and orientation detection method, system and storage medium
CN112242058B (en) * 2020-05-29 2022-04-26 北京国家新能源汽车技术创新中心有限公司 Target abnormity detection method and device based on traffic monitoring video and storage medium
CN111975769A (en) * 2020-07-16 2020-11-24 华南理工大学 Mobile robot obstacle avoidance method based on meta-learning
CN111950488B (en) * 2020-08-18 2022-07-19 山西大学 Improved Faster-RCNN remote sensing image target detection method
CN111950483A (en) * 2020-08-18 2020-11-17 北京理工大学 Vision-based vehicle front collision prediction method
CN112183370A (en) * 2020-09-29 2021-01-05 爱动超越人工智能科技(北京)有限责任公司 Fork truck anti-collision early warning system and method based on AI vision
CN112417953B (en) * 2020-10-12 2022-07-19 腾讯科技(深圳)有限公司 Road condition detection and map data updating method, device, system and equipment
CN112265546B (en) * 2020-10-26 2021-11-02 吉林大学 Networked automobile speed prediction method based on time-space sequence information
CN112550272B (en) * 2020-12-14 2021-07-30 重庆大学 Intelligent hybrid electric vehicle hierarchical control method based on visual perception and deep reinforcement learning
CN112908034A (en) * 2021-01-15 2021-06-04 中山大学南方学院 Intelligent bus safe driving behavior auxiliary supervision system and control method
CN112896042A (en) * 2021-03-02 2021-06-04 广州通达汽车电气股份有限公司 Vehicle driving early warning method, device, equipment and storage medium
CN113792598B (en) * 2021-08-10 2023-04-14 西安电子科技大学广州研究院 Vehicle-mounted camera-based vehicle collision prediction system and method
CN114228614A (en) * 2021-12-29 2022-03-25 阿波罗智联(北京)科技有限公司 Vehicle alarm method and device, electronic equipment and storage medium
CN114802225B (en) * 2022-03-04 2024-01-12 湖北国际物流机场有限公司 Control method and system for aircraft guided vehicle and electronic equipment
CN117734683A (en) * 2024-02-19 2024-03-22 中国科学院自动化研究所 Underground vehicle anti-collision safety early warning decision-making method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740910A (en) * 2016-02-02 2016-07-06 北京格灵深瞳信息技术有限公司 Vehicle object detection method and device
CN105844222A (en) * 2016-03-18 2016-08-10 上海欧菲智能车联科技有限公司 System and method for front vehicle collision early warning based on visual sense
CN106250812A (en) * 2016-07-15 2016-12-21 汤平 A kind of model recognizing method based on quick R CNN deep neural network
CN106781692A (en) * 2016-12-01 2017-05-31 东软集团股份有限公司 The method of vehicle collision prewarning, apparatus and system
CN106904143A (en) * 2015-12-23 2017-06-30 上海汽车集团股份有限公司 The guard method of a kind of pedestrian and passenger, system and controller
CN107169421A (en) * 2017-04-20 2017-09-15 华南理工大学 A kind of car steering scene objects detection method based on depth convolutional neural networks


Also Published As

Publication number Publication date
CN107972662A (en) 2018-05-01

Similar Documents

Publication Publication Date Title
CN107972662B (en) Vehicle forward collision early warning method based on deep learning
CN107609522B (en) Information fusion vehicle detection system based on laser radar and machine vision
Kim et al. Robust lane detection based on convolutional neural network and random sample consensus
JP7140922B2 (en) Multi-sensor data fusion method and apparatus
CN105892471B (en) Automatic driving method and apparatus
RU2767955C1 (en) Methods and systems for determining the presence of dynamic objects by a computer
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
CN109919074B (en) Vehicle sensing method and device based on visual sensing technology
CN108638999B (en) Anti-collision early warning system and method based on 360-degree look-around input
CN113056749A (en) Future object trajectory prediction for autonomous machine applications
Labayrade et al. In-vehicle obstacles detection and characterization by stereovision
CN111292366B (en) Visual driving ranging algorithm based on deep learning and edge calculation
CN111369541A (en) Vehicle detection method for intelligent automobile under severe weather condition
CN108830131B (en) Deep learning-based traffic target detection and ranging method
CN110969064A (en) Image detection method and device based on monocular vision and storage equipment
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
Sakic et al. Camera-LIDAR object detection and distance estimation with application in collision avoidance system
CN111612818A (en) Novel binocular vision multi-target tracking method and system
EP4148599A1 (en) Systems and methods for providing and using confidence estimations for semantic labeling
CN114282776A (en) Method, device, equipment and medium for cooperatively evaluating automatic driving safety of vehicle and road
CN114049542A (en) Fusion positioning method based on multiple sensors in dynamic scene
CN116635919A (en) Object tracking device and object tracking method
Lai et al. Sensor fusion of camera and MMW radar based on machine learning for vehicles
US20230194700A1 (en) Fuzzy Labeling of Low-Level Electromagnetic Sensor Data
EP4145352A1 (en) Systems and methods for training and using machine learning models and algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant