CN112349144B - Monocular vision-based vehicle collision early warning method and system
- Publication number: CN112349144B (application CN202011243170.4A)
- Authority: CN (China)
- Prior art keywords: vehicle, target, collision, pedestrian, camera
- Prior art date: 2020-11-10
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/143—Alarm means
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/08—Learning methods
- G06T7/11—Region-based segmentation
- G06T7/20—Analysis of motion
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V2201/07—Target detection
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention provides a monocular-vision-based vehicle collision early warning method and system. Forward-view image data are acquired and target detection is performed on them; a collision risk area range is set; targets inside the collision risk area are filtered to obtain an estimate of the distance to the nearest target, and the time required for the vehicle to collide with that target is estimated by combining the speed and acceleration of the vehicle; finally, the nearest-target distance estimate and the estimated time to collision are integrated to give an auxiliary early warning of collisions that may occur while the vehicle is driving. The invention effectively reduces accidents such as vehicle collisions and rear-end collisions during driving: it automatically issues collision warning prompts according to the driving state of the vehicle and the minimum distance to vehicles/pedestrians in the forward driving area obtained by monocular vision detection, effectively reminding the driver and reducing the probability of accidents.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a vehicle collision early warning method and system based on monocular vision.
Background
Driving one's own car has become a common mode of daily travel. Driving safety has an important influence on people's normal life and has therefore received more and more attention; against this background, driving assistance systems have emerged.
A search of the prior art found the following:
Chinese patent application No. 201810686185.4, filed on June 28, 2018, discloses a driving assistance method, a driving assistance device and a storage medium, wherein the method includes: acquiring a first image in front of the automobile through a binocular camera, acquiring a second image in front of the automobile through a monocular camera, and outputting and displaying the second image on a vehicle-mounted display screen; extracting at least one item of image feature information from the first image; judging whether target image feature information among the extracted features meets a preset early warning condition; and, when it does, issuing an early warning prompt tone through the vehicle-mounted loudspeaker and highlighting the associated information on the vehicle-mounted display screen. In that invention, the first image captured by the binocular camera is analyzed, and when early warning information exists during driving, a reminder is issued so that the driver can avoid danger in time, reducing the probability of safety accidents; since the images used for analysis and for display are captured separately, the accuracy of the analysis result is further improved. Compared with the present target detection method, that technology has the following problems:
1. It uses both a binocular camera and a monocular camera, so the hardware cost is high, and its distance calculation relies on the binocular parallax method.
2. It does not set a collision risk range, so false alarms are easily triggered.
In summary, existing driving assistance technology cannot well meet people's need for driving-assistance early warning during vehicle travel. No description or report of a technique similar to the present invention has been found so far, and no similar material has been collected at home or abroad.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a vehicle collision early warning method and system based on monocular vision.
The invention is realized by the following technical scheme.
According to one aspect of the invention, a vehicle collision early warning method based on monocular vision is provided, and comprises the following steps:
acquiring front view image data, and carrying out target detection on the acquired image data;
setting a collision risk area range;
filtering the targets in the collision risk area to obtain the nearest target distance estimation, and estimating the time required by the vehicle to collide with the targets by combining the speed information and the acceleration information of the vehicle;
and integrating the nearest-target distance estimate and/or the estimate of the time required for the vehicle to collide with the target, and giving an auxiliary early warning of vehicle collision conditions that may occur while the vehicle is driving.
Preferably, the acquiring of the image data of the front view and the target detection of the acquired image data include:
acquiring an image of a road in a traveling direction in real time based on monocular vision;
on the basis of the acquired image, a pedestrian/vehicle target detection model is established by combining a deep neural network;
and detecting the pedestrian and/or the vehicle by using the established pedestrian/vehicle target detection model to obtain a pedestrian and/or vehicle target detection result.
Preferably, the method of establishing a pedestrian/vehicle object detection model comprises:
labeling the pedestrians and/or vehicles in a large number of images of actual road conditions to form a training data set;
and pre-training the deep neural network with the training data set to obtain a pedestrian/vehicle target detection model.
Preferably, the pedestrian/vehicle object detection model includes: the device comprises a feature extraction module and a detection frame regression module; wherein:
the feature extraction module adopts a residual network structure whose input is an image of width x height 480 x 288, with a down-sampling factor of 32, and which outputs feature maps of width x height 15 x 9 and 30 x 18 respectively;
and the feature maps output by the feature extraction module are taken as the input of the detection frame regression module, n anchor boxes are defined for each feature map and used as the reference of the detection frame regression module, and the detection result of the pedestrian and/or vehicle target is output.
Preferably, the method for obtaining the detection result of the pedestrian and/or vehicle target comprises the following steps:
setting:
the predicted coordinate offsets of each bounding box are denoted t_x, t_y, t_w, t_h;
the coordinates of the top-left corner of the cell (its offset within the feature map) are denoted c_x, c_y;
the size of each anchor box is denoted p_w, p_h;
the coordinates of the predicted box are denoted b_x, b_y, b_w, b_h;
the coordinates of the real (ground-truth) box are denoted g_x, g_y, g_w, g_h;
then
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
And obtaining accurate pedestrian and/or vehicle detection results.
Preferably, the method for setting the range of the collision risk area includes:
presetting four points according to how distance changes with the viewing angle, defining the trapezoidal area enclosed by the four points as the area with collision risk, and denoting it area_risk; then area_risk is:
(x_i, y_i), s.t. i ∈ [1, 2, 3, 4].
and when the detected target is in the risk area, performing risk collision prediction on the target.
Preferably, the camera resolution is 1920 × 1080, and the coordinates of the four points surrounding the collision risk area are preset as follows:
(680,940),(680,980),(1080,120),(1080,1800)。
preferably, the method of obtaining a closest target distance estimate and/or estimating the time required for a vehicle to collide with a target comprises:
-performing distance estimation on the obtained target detection results located in the collision risk area using a similar triangle rule, comprising:
placing an object of width W at a distance D from the camera; if the pixel width of the object in the image is P, the focal length F of the camera is calculated as:
F=(P*D)/W
moving the object away from or towards the camera; since the focal length F of the camera does not change, the distance D' from the object to the camera is estimated simply by measuring the object's pixel width P' in the image at that moment:
D’=(F*W)/P’
the average width of a conventional car is taken as the reference vehicle width, and 170 cm is taken as the reference height when estimating the distance to a pedestrian, so that the distance estimate of the pedestrian and/or vehicle target is obtained;
-the method of estimating the time required for a vehicle to collide with a target, comprising:
acquiring the speed and acceleration of the vehicle, denoted v_t and a_t respectively;
denoting the nearest pedestrian and/or vehicle target estimate as obj;
combining the vehicle speed v_t and acceleration a_t at the current moment, the time t required for the vehicle to collide with the target is estimated from Newton's equation of motion:
obj.dist = v_t*t + 0.5*a_t*t^2.
preferably, the method for performing auxiliary early warning on vehicle collision during vehicle driving by integrating the closest target distance estimation and the estimation of the time required for the vehicle to collide with the target includes:
and setting a threshold for the nearest target distance and a threshold for the shortest time until the vehicle would collide with the target, judging the vehicle collision condition that may occur while the vehicle is driving, and issuing an auxiliary early warning.
According to another aspect of the present invention, there is provided a vehicle collision warning system based on monocular vision, comprising:
a vehicle exterior image acquisition camera, which acquires images of the road in the traveling direction in real time;
the target detection module is used for carrying out target detection on pedestrians and/or vehicles in the image data according to the acquired road image in the advancing direction;
the driving state acquisition module is used for acquiring speed information and acceleration information of the vehicle;
the collision risk area range setting module is used for setting a collision risk area range;
the risk calculation module filters the pedestrian and/or vehicle targets in the collision risk area to obtain the nearest target distance estimation, and estimates the time required for the vehicle to collide with the target by combining the speed information and the acceleration information of the vehicle;
and the auxiliary early warning module integrates the driving state information, the nearest target distance estimation and the estimation of the time required for the vehicle to collide with the target, and performs auxiliary early warning on the vehicle collision condition possibly occurring in the vehicle driving process through a set threshold value.
Preferably, the vehicle exterior image acquisition camera is arranged at the position right above the front window of the vehicle.
Due to the adoption of the scheme, compared with the prior art, the invention has the following beneficial effects:
the vehicle collision early warning method and system based on monocular vision provided by the invention can effectively reduce accidents such as vehicle/pedestrian collision, vehicle rear-end collision and the like in the driving process of the vehicle, can automatically give collision early warning prompts according to the driving state of the vehicle and the minimum distance of the vehicle/pedestrian in the front driving area obtained based on monocular vision detection, effectively give drivers reminding and reduce the accident occurrence probability.
The vehicle collision early warning method and system based on the monocular vision provided by the invention are based on the monocular vision and utilize a similar triangle method to carry out distance estimation.
According to the monocular vision-based vehicle collision early warning method and system, the specific target detection neural network is designed, the advantages of big data are combined, the false detection/missing detection conditions are greatly reduced, the scope of collision risk is preset, the range is defined, and the false alarm condition is avoided.
It is not necessary for any product that embodies the invention to achieve all of the above-described advantages simultaneously.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flowchart illustrating a vehicle collision warning method based on monocular vision in accordance with a preferred embodiment of the present invention;
FIG. 2 is a schematic view of a driving area in front of a vehicle that can be sensed by a camera according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of the operation of setting the collision risk range according to a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a pedestrian/vehicle object detection model according to a preferred embodiment of the present invention;
FIG. 5 is a diagram illustrating the operation of the pedestrian/vehicle object detection model in detecting pedestrians and vehicles in accordance with a preferred embodiment of the present invention;
FIG. 6 is a schematic representation of the operation of pedestrian/vehicle distance estimation in accordance with a preferred embodiment of the present invention;
fig. 7 is a flowchart illustrating an early warning decision process according to a preferred embodiment of the present invention.
Detailed Description
The following examples illustrate the invention in detail: the embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation mode and a specific operation process are given. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention.
The embodiment of the invention provides a vehicle collision early warning method based on monocular vision, which is used for acquiring driving state data and front view image data in real time, acquiring the nearest target distance estimation through a set collision risk area range, and further performing auxiliary early warning on vehicle collision in the vehicle driving process.
The vehicle collision early warning method based on monocular vision provided by the embodiment comprises the following steps:
step S1, acquiring front view image data, and carrying out target detection on the acquired image data;
step S2, setting a collision risk area range;
step S3, filtering the targets in the collision risk area to obtain the distance estimation of the nearest target, and estimating the time required for the vehicle to collide with the target by combining the speed information and the acceleration information of the vehicle;
and step S4, integrating the estimation of the distance between the nearest target and/or the estimation of the time required by the vehicle to collide with the target, and performing auxiliary early warning on the vehicle collision condition which may occur in the driving process of the vehicle.
In the present embodiment, the execution order of step S1 and step S2 may be interchanged.
As a preferred embodiment, step S1 includes the following steps:
step S11, acquiring the image of the road in the advancing direction in real time based on monocular vision;
step S12, establishing a pedestrian/vehicle target detection model by combining a deep neural network on the basis of the acquired image;
and step S13, detecting the pedestrian and/or the vehicle by using the established pedestrian/vehicle target detection model to obtain a pedestrian and/or vehicle target detection result.
In step S11, an image of the road in the traveling direction is acquired in real time by a camera provided at a position right above the front window.
As a preferred embodiment, in step S12, the method for creating a pedestrian/vehicle object detection model includes:
labeling the pedestrians and/or vehicles in a large number of images of actual road conditions to form a training data set;
and pre-training the deep neural network with the training data set to obtain a pedestrian/vehicle target detection model.
As a preferred embodiment, the pedestrian/vehicle target detection model comprises a feature extraction module and a detection frame regression module; wherein:
the feature extraction module adopts a residual network structure, takes an image of width x height 480 x 288 as input, has a down-sampling factor of 32, and outputs feature maps of width x height 15 x 9 and 30 x 18 respectively;
and the feature maps output by the feature extraction module are taken as the input of the detection frame regression module, n anchor boxes are defined for each feature map and used as the reference of the detection frame regression module, and the detection result of the pedestrian and/or vehicle target is output.
As a preferred embodiment, in step S13, the method for obtaining pedestrian and/or vehicle target detection results includes:
setting:
the predicted coordinate offsets of each bounding box are denoted t_x, t_y, t_w, t_h;
the coordinates of the top-left corner of the cell (its offset within the feature map) are denoted c_x, c_y;
the size of each anchor box is denoted p_w, p_h;
the coordinates of the predicted box are denoted b_x, b_y, b_w, b_h;
the coordinates of the real (ground-truth) box are denoted g_x, g_y, g_w, g_h;
then
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
And obtaining accurate pedestrian and/or vehicle detection results.
As a preferred embodiment, in step S2, the method for setting the collision risk area range includes:
presetting four points according to how distance changes with the viewing angle, defining the trapezoidal area enclosed by the four points as the area with collision risk, and denoting it area_risk; then area_risk is:
(x_i, y_i), s.t. i ∈ [1, 2, 3, 4];
and when the detected target is in the risk area, performing risk collision prediction on the target.
As a preferred embodiment, the resolution of the camera is 1920 × 1080, and the coordinates of the four preset points for enclosing the range of the collision risk area are:
(680,940),(680,980),(1080,120),(1080,1800)。
as a preferred embodiment, in step S3, the distance estimation is performed on the obtained detection results of the objects located in the collision risk area according to the similar triangle rule, including the following steps:
placing an object of width W at a distance D from the camera; if the pixel width of the object in the image is P, the focal length F of the camera is calculated as:
F=(P*D)/W
moving the object away from or towards the camera; since the focal length F of the camera does not change, the distance D' from the object to the camera is estimated simply by measuring the object's pixel width P' in the image at that moment:
D’=(F*W)/P’
and the average width of a conventional car is taken as the reference vehicle width, with 170 cm taken as the reference height when estimating the distance to a pedestrian, so as to obtain the distance estimate of the pedestrian and/or vehicle target.
As a preferred embodiment, the method for estimating a time required for a collision of the vehicle with the target in step S3 includes:
acquiring the speed and acceleration of the vehicle, denoted v_t and a_t respectively;
denoting the nearest pedestrian and/or vehicle target estimate as obj;
combining the vehicle speed v_t and acceleration a_t at the current moment, the time t required for the vehicle to collide with the target is estimated from Newton's equation of motion:
obj.dist = v_t*t + 0.5*a_t*t^2
As a preferred embodiment, in step S4, a threshold for the nearest target distance and a threshold for the shortest time until the vehicle would collide with the target are set, the vehicle collision condition that may occur during driving is judged accordingly, and an auxiliary warning is issued.
The technical solutions provided by the above embodiments of the present invention are further described in detail below with reference to the accompanying drawings.
As shown in fig. 1, it is a flowchart of a vehicle collision warning method based on monocular vision according to the above embodiment of the present invention.
As shown in fig. 2, the vehicle-mounted camera is fixed or embedded at the middle position of the upper part of the window, so that the camera can sense the driving area in front of the vehicle.
As shown in fig. 3, is a schematic diagram of the operation of setting the collision risk range.
Since the camera is fixed on the central axis of the vehicle and points straight ahead, the vehicle itself is in the middle of the captured image frame. As the vehicle moves forward, the area directly in front of it carries collision risk, so a collision risk range is preset by selecting four points and is denoted area_risk, where area_risk comprises:
(x_i, y_i), s.t. i ∈ [1, 2, 3, 4]
in the calculation of the collision risk range, the resolution of the camera is 1920 × 1080, and the preset collision risk range is a trapezoid surrounded by the following four points:
(680,940),(680,980),(1080,120),(1080,1800)。
Collision risk prediction is carried out on a vehicle or pedestrian only when it lies within the risk area.
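As a concrete illustration, the following Python sketch checks whether a detection lies inside this trapezoidal risk area. The vertex list copies the four preset points quoted above; the ray-casting point-in-polygon routine and the choice of the box's bottom centre as the target's reference point are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch: check whether a detection lies inside the trapezoidal risk area.
# The vertex list copies the four preset points from the description; the
# point-in-polygon routine is a generic ray-casting test, not code from the patent.

AREA_RISK = [(680, 940), (680, 980), (1080, 120), (1080, 1800)]  # (x_i, y_i), i = 1..4

def point_in_polygon(x, y, polygon):
    """Ray-casting test: returns True if (x, y) lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def in_risk_area(bbox):
    """Use the bottom-centre of the detection box as the target's ground contact point (assumption)."""
    x_min, y_min, x_max, y_max = bbox
    return point_in_polygon((x_min + x_max) / 2.0, y_max, AREA_RISK)
```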
Fig. 4 is a schematic structural diagram of a pedestrian/vehicle object detection model.
A pedestrian/vehicle target detection model is established by calibrating a large amount of data and combining a deep neural network.
The pedestrian/vehicle target detection network structure is derived from YOLOv3. The network input size is 480 × 288, which better suits the 1920 × 1080 resolution of the camera sensor and avoids the image distortion caused by resizing. The feature extraction backbone adopts a residual network structure with a down-sampling factor of 32. Meanwhile, to handle small targets at a distance, multi-scale features are used to predict the target position. There is no pooling layer or fully connected layer in the overall network structure. Thus, a 480 × 288 image is input and down-sampled by a factor of 32 to obtain a 15 × 9 feature map, and a further 30 × 18 feature map is obtained by fusing multi-scale features to detect small objects. Following the YOLOv3 algorithm, 3 anchor boxes are defined for each feature map as the reference for regressing the final box.
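As a rough illustration of the tensor shapes this implies, the sketch below mirrors only the stated input size, the two strides and the three anchors per cell; the class count of 2 (pedestrian, vehicle) and the 5 box/objectness terms per anchor follow the usual YOLOv3 convention and are assumptions, since the patent does not spell out the head layout.

```python
# Expected detection-head output shapes for a 480x288 input, stride-32 and stride-16 maps,
# 3 anchors per cell and an assumed 2 classes (pedestrian, vehicle); purely illustrative.
INPUT_W, INPUT_H = 480, 288
NUM_ANCHORS, NUM_CLASSES = 3, 2
PRED_PER_ANCHOR = 5 + NUM_CLASSES          # t_x, t_y, t_w, t_h, objectness, class scores

for stride in (32, 16):
    fw, fh = INPUT_W // stride, INPUT_H // stride
    print(f"stride {stride}: {fw}x{fh} cells, "
          f"output tensor {fh}x{fw}x{NUM_ANCHORS * PRED_PER_ANCHOR}")
# stride 32: 15x9 cells, output tensor 9x15x21
# stride 16: 30x18 cells, output tensor 18x30x21
```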
The predicted coordinate offsets of each bounding box are denoted t_x, t_y, t_w, t_h;
the coordinates of the top-left corner of the responsible cell (its offset within the feature map) are denoted c_x, c_y;
the size of the anchor box is denoted p_w, p_h;
the coordinates of the predicted box are denoted b_x, b_y, b_w, b_h;
the coordinates of the real (ground-truth) box are denoted g_x, g_y, g_w, g_h;
then:
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
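A minimal NumPy sketch of this decoding step is shown below; the two equations above are reproduced directly, while the exponential width/height terms b_w = p_w·e^(t_w), b_h = p_h·e^(t_h) are the standard YOLOv3 complements and are assumed here, since the patent only lists the centre equations. Function and argument names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_box(t, cell_xy, anchor_wh, stride):
    """Decode YOLOv3-style offsets (t_x, t_y, t_w, t_h) into an image-space box.

    t         : (t_x, t_y, t_w, t_h) predicted by the regression head
    cell_xy   : (c_x, c_y) top-left corner of the responsible grid cell
    anchor_wh : (p_w, p_h) anchor box size, in grid units
    stride    : down-sampling factor of this feature map (e.g. 32 or 16)
    """
    t_x, t_y, t_w, t_h = t
    c_x, c_y = cell_xy
    p_w, p_h = anchor_wh
    b_x = sigmoid(t_x) + c_x          # b_x = sigma(t_x) + c_x
    b_y = sigmoid(t_y) + c_y          # b_y = sigma(t_y) + c_y
    b_w = p_w * np.exp(t_w)           # standard YOLOv3 width term (assumed)
    b_h = p_h * np.exp(t_h)           # standard YOLOv3 height term (assumed)
    # Scale from feature-map units back to pixels.
    return b_x * stride, b_y * stride, b_w * stride, b_h * stride
```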
the trained detection model can accurately detect pedestrians and vehicles, as shown in fig. 5.
As shown in fig. 6, a schematic diagram of the operation of the pedestrian/vehicle distance estimation is shown.
In the system provided by the embodiment of the invention, the distance between the vehicle/pedestrian ahead and the current vehicle needs to be roughly estimated; it is calculated according to the similar-triangle rule. Assuming an object of width W is placed at a distance D from the camera, and the pixel width of the object in the image is P, the focal length F of the camera can be calculated:
F=(P*D)/W
If the object then moves farther from or closer to the camera, the distance D' from the object to the camera can be estimated simply by measuring its pixel width P' in the image at that moment, because the focal length F of the camera does not change:
D’=(F*W)/P’
In the system provided by the embodiment of the invention, the average width of a common car currently on the market is taken as the reference vehicle width, and 170 cm is taken as the reference height in pedestrian distance estimation. As shown in fig. 6, f denotes the focal length of the camera, w_w the average width of the car, and w_p the corresponding pixel width of the car in the image. The distance from the car to the camera is then obtained by the similar-triangle principle:
dist = (f * w_w) / w_p
the principle of distance estimation for pedestrians is similar.
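A minimal Python sketch of this similar-triangle estimate is given below, assuming a one-time focal-length calibration; the specific calibration numbers and the 1.8 m reference car width are illustrative placeholders, and only the 170 cm pedestrian reference comes from the description.

```python
# Similar-triangle monocular distance estimation, as described above.
# Calibration constants below are illustrative placeholders, not values from the patent.

REF_CAR_WIDTH_M = 1.8       # assumed average car width used as the reference vehicle width
REF_PERSON_HEIGHT_M = 1.70  # 170 cm reference height for pedestrians (from the description)

def calibrate_focal_length(pixel_size, real_size, distance_m):
    """F = (P * D) / W, from one object of known size placed at a known distance."""
    return pixel_size * distance_m / real_size

def estimate_distance(focal_length_px, real_size, pixel_size):
    """D' = (F * W) / P'."""
    return focal_length_px * real_size / pixel_size

# Example: calibrate once, then estimate a car's distance from its detected box width.
F = calibrate_focal_length(pixel_size=240.0, real_size=REF_CAR_WIDTH_M, distance_m=8.0)
car_box_width_px = 96.0
print(estimate_distance(F, REF_CAR_WIDTH_M, car_box_width_px))  # ~20 m for these numbers
```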
Obtaining the minimum distance to a preceding vehicle or pedestrian within the collision risk area:
All pedestrian and vehicle positions obtained by target detection are checked in turn to see whether they lie within the collision risk area, and the vehicle or pedestrian closest to the ego vehicle, together with the corresponding distance estimate, is computed. A sketch of this flow is given below:
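The original flow diagram is not reproduced in this text; the following is a hedged reconstruction based on the surrounding description, reusing the helper sketches above (in_risk_area, estimate_distance and the reference constants) and assuming an illustrative detection format.

```python
def nearest_target_in_risk_area(detections, focal_length_px):
    """Return (closest_target, distance) among detections inside area_risk, or (None, inf).

    `detections` is assumed to be a list of dicts with keys 'bbox' = (x_min, y_min, x_max, y_max)
    in pixels and 'label' in {'car', 'person'}; the helpers come from the earlier sketches.
    """
    closest, min_dist = None, float("inf")
    for det in detections:
        if not in_risk_area(det["bbox"]):
            continue  # only targets inside the collision risk area are considered
        x_min, y_min, x_max, y_max = det["bbox"]
        if det["label"] == "car":
            # cars: similar-triangle estimate from the reference vehicle width and box width
            dist = estimate_distance(focal_length_px, REF_CAR_WIDTH_M, x_max - x_min)
        else:
            # pedestrians: use the 170 cm reference height and the box height
            dist = estimate_distance(focal_length_px, REF_PERSON_HEIGHT_M, y_max - y_min)
        if dist < min_dist:
            closest, min_dist = det, dist
    return closest, min_dist
```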
Making the collision early warning decision in combination with vehicle state information:
When making the collision early warning decision, the speed and acceleration of the vehicle are used, denoted v_t and a_t respectively.
Denote the calculated nearest forward target (pedestrian/vehicle) as obj. Then, combining the vehicle speed v_t and acceleration a_t at the current moment, the time t required for the vehicle to collide with the target can be estimated from Newton's equation of motion:
obj.dist = v_t*t + 0.5*a_t*t^2    (Newton's equation of motion)
When the estimated collision time t is less than 3 s, the system gives an early warning prompt. Meanwhile, to avoid excessive warnings, the current speed is also checked: when v_t is less than 15, the system does not alarm.
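Solving obj.dist = v_t·t + 0.5·a_t·t² for the smallest positive t and applying the two thresholds quoted above gives the decision rule; the sketch below assumes distance in metres and speed/acceleration in consistent units, which the patent does not specify.

```python
import math

TTC_THRESHOLD_S = 3.0        # warn when the estimated time to collision is below 3 s
MIN_SPEED_FOR_ALARM = 15.0   # no alarm below this speed (units as used by the system)

def time_to_collision(dist, v_t, a_t):
    """Solve obj.dist = v_t*t + 0.5*a_t*t^2 for the smallest positive t (inf if unreachable)."""
    if abs(a_t) < 1e-6:
        return dist / v_t if v_t > 0 else float("inf")
    disc = v_t * v_t + 2.0 * a_t * dist
    if disc < 0:
        return float("inf")  # vehicle decelerates to a stop before reaching the target
    t = (-v_t + math.sqrt(disc)) / a_t
    return t if t > 0 else float("inf")

def should_warn(dist, v_t, a_t):
    if v_t < MIN_SPEED_FOR_ALARM:
        return False  # suppress warnings at low speed to avoid over-alerting
    return time_to_collision(dist, v_t, a_t) < TTC_THRESHOLD_S
```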
The early warning decision process is shown in fig. 7.
Another embodiment of the present invention provides a vehicle collision warning system based on monocular vision, including:
the vehicle exterior image acquisition camera is used for acquiring an image of a road in a traveling direction in real time;
the target detection module is used for carrying out target detection on pedestrians and/or vehicles in the image data according to the acquired road image in the advancing direction;
the driving state acquisition module is used for acquiring speed information and acceleration information of the vehicle;
the collision risk area range setting module is used for setting a collision risk area range;
the risk calculation module is used for filtering the pedestrian and/or vehicle targets in the collision risk area to obtain the distance estimation of the nearest target, and estimating the time required by the vehicle to collide with the target by combining the speed information and the acceleration information of the vehicle;
and the auxiliary early warning module integrates the estimated value of the distance between the nearest target and/or the estimated value of the time required for the vehicle to collide with the target, and performs auxiliary early warning on the vehicle collision condition possibly occurring in the vehicle driving process through a set threshold value.
As a preferred embodiment, the vehicle exterior image acquisition camera is arranged at the position right above the front window of the vehicle.
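Tying the modules together, a per-frame wiring of the system might look like the sketch below, reusing the helpers sketched earlier; the detector and vehicle-state callables are stand-ins for the target detection and driving state acquisition modules, and all names are illustrative rather than taken from the patent.

```python
def collision_warning_step(frame, detector, vehicle_state, focal_length_px):
    """One per-frame pass of the warning pipeline described above (illustrative wiring).

    detector(frame)  -> list of {'bbox': ..., 'label': ...} detections (target detection module)
    vehicle_state()  -> (v_t, a_t) from the driving state acquisition module
    """
    detections = detector(frame)                                              # pedestrian/vehicle detection
    target, dist = nearest_target_in_risk_area(detections, focal_length_px)   # risk calculation module
    if target is None:
        return False                                                          # nothing inside area_risk
    v_t, a_t = vehicle_state()
    return should_warn(dist, v_t, a_t)                                        # auxiliary early warning module
```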
The monocular-vision-based vehicle collision early warning method and system provided by the above embodiments of the present invention can effectively reduce accidents such as vehicle/pedestrian collisions and rear-end collisions during driving; they automatically issue collision warning prompts according to the driving state of the vehicle and the minimum distance to vehicles/pedestrians in the forward driving area obtained by monocular vision detection, effectively reminding the driver and reducing the probability of accidents.
It should be noted that, the steps in the method provided by the present invention may be implemented by using corresponding modules, devices, units, and the like in the system, and those skilled in the art may implement the composition of the system by referring to the technical solution of the method, that is, the embodiment in the method may be understood as a preferred example for constructing the system, and will not be described herein again.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices provided by the present invention in purely computer readable program code means, the method steps can be fully programmed to implement the same functions by implementing the system and its various devices in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices thereof provided by the present invention can be regarded as a hardware component, and the devices included in the system and various devices thereof for realizing various functions can also be regarded as structures in the hardware component; means for performing the functions may also be regarded as structures within both software modules and hardware components for performing the methods.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.
Claims (8)
1. A vehicle collision early warning method based on monocular vision is characterized by comprising the following steps:
acquiring front view image data, and carrying out target detection on the acquired image data;
setting a collision risk area range in front of the running vehicle according to the resolution of the camera;
filtering the targets in the collision risk area to obtain the nearest target distance estimation, and estimating the time required by the vehicle to collide with the targets by combining the speed information and the acceleration information of the vehicle;
the method comprises the steps of integrating a nearest target distance estimation value and/or a time estimation value required by a vehicle to collide with a target, and performing auxiliary early warning on a vehicle collision condition possibly occurring in the vehicle driving process;
the method for setting the range of the collision risk area comprises the following steps:
presetting four points according to how distance changes with the viewing angle, defining the trapezoidal area enclosed by the four points as the area with collision risk, and denoting it area_risk; then area_risk is:
(x_i, y_i), s.t. i ∈ [1, 2, 3, 4];
when the detected target is located in the risk area, performing risk collision prediction on the target;
the method for obtaining a closest target distance estimate and/or estimating a time required for a vehicle to collide with a target includes:
- performing distance estimation on the obtained target detection results located in the collision risk area using the similar-triangle rule, comprising:
placing an object of width W at a distance D from the camera; if the pixel width of the object in the image is P, the focal length F of the camera is calculated as:
F=(P*D)/W
moving the object away from or towards the camera; since the focal length F of the camera does not change, the distance D' from the object to the camera is estimated simply by measuring the object's pixel width P' in the image at that moment:
D'=(F*W)/P'
the average width of a conventional car is taken as the reference vehicle width, and 170 cm is taken as the reference height when estimating the distance to a pedestrian, so that the distance estimate of the pedestrian and/or vehicle target is obtained;
- the method of estimating the time required for the vehicle to collide with the target comprises:
acquiring the speed and acceleration of the vehicle, denoted v_t and a_t respectively;
denoting the nearest pedestrian and/or vehicle target estimate as obj;
combining the vehicle speed v_t and acceleration a_t at the current moment, the time t required for the vehicle to collide with the target is estimated from Newton's equation of motion:
obj.dist = v_t*t + 0.5*a_t*t^2.
2. the monocular vision based vehicle collision warning method according to claim 1, wherein the acquiring of the front view image data and the target detection of the acquired image data comprise:
acquiring an image of a road in a traveling direction in real time based on monocular vision;
on the basis of the acquired image, a pedestrian/vehicle target detection model is established by combining a deep neural network;
and detecting the pedestrian and/or the vehicle by using the established pedestrian/vehicle target detection model to obtain a pedestrian and/or vehicle target detection result.
3. The monocular vision based vehicle collision warning method of claim 2, wherein the method of establishing a pedestrian/vehicle target detection model comprises:
labeling the pedestrians and/or vehicles in a large number of images of actual road conditions to form a training data set;
and training the deep neural network with the training data set to obtain a pedestrian/vehicle target detection model.
4. The monocular vision based vehicle collision warning method of claim 2, wherein the pedestrian/vehicle target detection model comprises: the device comprises a feature extraction module and a detection frame regression module; wherein:
the feature extraction module adopts a residual network structure, takes an image of width x height 480 x 288 as input, has a down-sampling factor of 32, and outputs feature maps of width x height 15 x 9 and 30 x 18 respectively;
and the feature maps output by the feature extraction module are taken as the input of the detection frame regression module, n anchor boxes are defined for each feature map and used as the reference of the detection frame regression module, and the detection result of the pedestrian and/or vehicle target is output.
5. The monocular vision based vehicle collision warning method according to claim 2, wherein the method of obtaining pedestrian and/or vehicle target detection results comprises:
setting:
the predicted coordinate offsets of each bounding box are denoted t_x, t_y, t_w, t_h;
the coordinates of the top-left corner of the cell (its offset within the feature map) are denoted c_x, c_y;
the size of each anchor box is denoted p_w, p_h;
the coordinates of the predicted box are denoted b_x, b_y, b_w, b_h;
the coordinates of the real (ground-truth) box are denoted g_x, g_y, g_w, g_h;
then
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
And obtaining accurate pedestrian and/or vehicle detection results.
6. The monocular vision based vehicle collision warning method of claim 1, wherein the camera resolution is 1920 x 1080, and coordinates of four points for enclosing a collision risk area range are preset as follows:
(680,940),(680,980),(1080,120),(1080,1800)。
7. a vehicle collision warning system based on monocular vision, comprising:
an off-board image acquisition camera, which acquires images of the road in the traveling direction in real time;
the target detection module is used for carrying out target detection on pedestrians and/or vehicles in the image data according to the acquired road image in the advancing direction;
the driving state acquisition module is used for acquiring speed information and acceleration information of the vehicle;
the collision risk area range setting module is used for setting a collision risk area range in front of the running vehicle according to the resolution of the camera; the method for setting the collision risk area range comprises:
presetting four points according to how distance changes with the viewing angle, defining the trapezoidal area enclosed by the four points as the area with collision risk, and denoting it area_risk; then area_risk is:
(x_i, y_i), s.t. i ∈ [1, 2, 3, 4];
when the detected target is located in the risk area, performing risk collision prediction on the target;
the risk calculation module filters the pedestrian and/or vehicle targets in the collision risk area to obtain the nearest target distance estimation, and estimates the time required for the vehicle to collide with the target by combining the speed information and the acceleration information of the vehicle; wherein the method of obtaining a closest target distance estimate and/or estimating the time required for a vehicle to collide with a target comprises:
- performing distance estimation on the obtained target detection results located in the collision risk area using the similar-triangle rule, comprising:
placing an object of width W at a distance D from the camera; if the pixel width of the object in the image is P, the focal length F of the camera is calculated as:
F=(P*D)/W
moving the object away from or towards the camera; since the focal length F of the camera does not change, the distance D' from the object to the camera is estimated simply by measuring the object's pixel width P' in the image at that moment:
D'=(F*W)/P'
the average width of a conventional car is taken as the reference vehicle width, and 170 cm is taken as the reference height when estimating the distance to a pedestrian, so that the distance estimate of the pedestrian and/or vehicle target is obtained;
- the method of estimating the time required for the vehicle to collide with the target comprises:
acquiring the speed and acceleration of the vehicle, denoted v_t and a_t respectively;
denoting the nearest pedestrian and/or vehicle target estimate as obj;
combining the vehicle speed v_t and acceleration a_t at the current moment, the time t required for the vehicle to collide with the target is estimated from Newton's equation of motion:
obj.dist = v_t*t + 0.5*a_t*t^2;
and the auxiliary early warning module integrates the driving state information, the nearest target distance estimation and the estimation of the time required for the vehicle to collide with the target, and performs auxiliary early warning on the vehicle collision condition in the vehicle driving process through a set threshold value.
8. The monocular vision based vehicle collision warning system of claim 7, wherein the off-board image capturing camera is disposed at a position right in the middle above a front window of the vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011243170.4A CN112349144B (en) | 2020-11-10 | 2020-11-10 | Monocular vision-based vehicle collision early warning method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011243170.4A CN112349144B (en) | 2020-11-10 | 2020-11-10 | Monocular vision-based vehicle collision early warning method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112349144A CN112349144A (en) | 2021-02-09 |
CN112349144B true CN112349144B (en) | 2022-04-19 |
Family
ID=74362331
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011243170.4A Active CN112349144B (en) | 2020-11-10 | 2020-11-10 | Monocular vision-based vehicle collision early warning method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112349144B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112507850A (en) * | 2020-12-03 | 2021-03-16 | 湖南湘江智能科技创新中心有限公司 | Reminding method for preventing vehicle collision based on computer vision |
CN112836663A (en) * | 2021-02-15 | 2021-05-25 | 苏州优它科技有限公司 | Rail transit vehicle vision laser ranging comparison detection anti-collision method |
WO2022170633A1 (en) * | 2021-02-15 | 2022-08-18 | 苏州优它科技有限公司 | Rail transit vehicle collision avoidance detection method based on vision and laser ranging |
CN113112866B (en) * | 2021-04-14 | 2022-06-03 | 深圳市旗扬特种装备技术工程有限公司 | Intelligent traffic early warning method and intelligent traffic early warning system |
CN113306566B (en) * | 2021-06-16 | 2023-12-12 | 上海大学 | Vehicle pedestrian collision early warning method and device based on sniffing technology |
CN113792598B (en) * | 2021-08-10 | 2023-04-14 | 西安电子科技大学广州研究院 | Vehicle-mounted camera-based vehicle collision prediction system and method |
CN113753041B (en) * | 2021-09-29 | 2023-06-23 | 合肥工业大学 | Mobile camera ranging early warning method and early warning device |
CN114228614B (en) * | 2021-12-29 | 2024-08-09 | 阿波罗智联(北京)科技有限公司 | Vehicle alarm method and device, electronic equipment and storage medium |
CN115966102B (en) * | 2022-12-30 | 2024-10-08 | 中国科学院长春光学精密机械与物理研究所 | Early warning braking method based on deep learning |
CN116953680B (en) * | 2023-09-15 | 2023-11-24 | 成都中轨轨道设备有限公司 | Image-based real-time ranging method and system for target object |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101391589A (en) * | 2008-10-30 | 2009-03-25 | 上海大学 | Vehicle intelligent alarming method and device |
JP5150527B2 (en) * | 2009-02-03 | 2013-02-20 | 株式会社日立製作所 | Vehicle collision avoidance support device |
US20140176714A1 (en) * | 2012-12-26 | 2014-06-26 | Automotive Research & Test Center | Collision prevention warning method and device capable of tracking moving object |
CN106156725A (en) * | 2016-06-16 | 2016-11-23 | 江苏大学 | A kind of method of work of the identification early warning system of pedestrian based on vehicle front and cyclist |
CN109334563B (en) * | 2018-08-31 | 2021-06-22 | 江苏大学 | Anti-collision early warning method based on pedestrians and riders in front of road |
- 2020-11-10: application CN202011243170.4A filed in China (CN); published as patent CN112349144B, legal status Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107862287A (en) * | 2017-11-08 | 2018-03-30 | 吉林大学 | A kind of front zonule object identification and vehicle early warning method |
CN108674413A (en) * | 2018-05-18 | 2018-10-19 | 广州小鹏汽车科技有限公司 | Traffic crash protection method and system |
CN110264783A (en) * | 2019-06-19 | 2019-09-20 | 中设设计集团股份有限公司 | Vehicle collision avoidance early warning system and method based on bus or train route collaboration |
CN110276988A (en) * | 2019-06-26 | 2019-09-24 | 重庆邮电大学 | A kind of DAS (Driver Assistant System) based on collision warning algorithm |
CN111098815A (en) * | 2019-11-11 | 2020-05-05 | 武汉市众向科技有限公司 | ADAS front vehicle collision early warning method based on monocular vision fusion millimeter waves |
CN111292366A (en) * | 2020-02-17 | 2020-06-16 | 华侨大学 | Visual driving ranging algorithm based on deep learning and edge calculation |
Non-Patent Citations (1)
Title |
---|
基于单目视觉的前向车辆检测、跟踪与测距 (Forward vehicle detection, tracking and ranging based on monocular vision); Zhao Xuan; China Master's Theses Full-text Database, Engineering Science and Technology II; 2018-07-15 (No. 7); pp. 1-104 *
Also Published As
Publication number | Publication date |
---|---|
CN112349144A (en) | 2021-02-09 |
Similar Documents
Publication | Title | Publication Date
---|---|---|
CN112349144B (en) | Monocular vision-based vehicle collision early warning method and system | |
CN106485233B (en) | Method and device for detecting travelable area and electronic equipment | |
JP5421072B2 (en) | Approaching object detection system | |
CN106611512B (en) | Method, device and system for processing starting of front vehicle | |
US6690011B2 (en) | Infrared image-processing apparatus | |
EP2928178B1 (en) | On-board control device | |
US8050459B2 (en) | System and method for detecting pedestrians | |
CN112329552A (en) | Obstacle detection method and device based on automobile | |
EP2026246A1 (en) | Method and apparatus for evaluating an image | |
EP1562147A1 (en) | Mobile body surrounding surveillance | |
JP2008027309A (en) | Collision determination system and collision determination method | |
Aytekin et al. | Increasing driving safety with a multiple vehicle detection and tracking system using ongoing vehicle shadow information | |
CN101135558A (en) | Vehicle anti-collision early warning method and apparatus based on machine vision | |
EP3364336B1 (en) | A method and apparatus for estimating a range of a moving object | |
JP2007249841A (en) | Image recognition device | |
CN111351474B (en) | Vehicle moving target detection method, device and system | |
JP2002314989A (en) | Peripheral monitor for vehicle | |
JPH1062162A (en) | Detector for obstacle | |
WO2012014972A1 (en) | Vehicle behavior analysis device and vehicle behavior analysis program | |
JP2011103058A (en) | Erroneous recognition prevention device | |
JP3916930B2 (en) | Approach warning device | |
JP4176558B2 (en) | Vehicle periphery display device | |
TWI621073B (en) | Road lane detection system and method thereof | |
CN116524454A (en) | Object tracking device, object tracking method, and storage medium | |
JP5957182B2 (en) | Road surface pattern recognition method and vehicle information recording apparatus |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |