CN112200131A - Vision-based vehicle collision detection method, intelligent terminal and storage medium - Google Patents


Info

Publication number
CN112200131A
Authority
CN
China
Prior art keywords
vehicle
vehicles
rectangular
vision
frames
Prior art date
Legal status
Pending
Application number
CN202011169031.1A
Other languages
Chinese (zh)
Inventor
黄超
徐勇
张正
王耀威
Current Assignee
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date
Filing date
Publication date
Application filed by Peng Cheng Laboratory filed Critical Peng Cheng Laboratory
Priority to CN202011169031.1A
Publication of CN112200131A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vision-based vehicle collision detection method, an intelligent terminal and a storage medium. The method comprises the following steps: detecting vehicles in a traffic monitoring video, framing each vehicle with a rectangular frame, and outputting the vehicle rectangular frames and vehicle numbers; tracking the detected vehicles, judging whether the vehicle rectangular frames of different vehicles overlap, and extracting the acceleration, track and direction change of the vehicles; when the vehicle rectangular frames overlap, obtaining an abnormality index from the extracted acceleration, track and direction change and judging whether the abnormality index is greater than a preset threshold value; and when the abnormality index is greater than the preset threshold value, determining that a vehicle collision accident has occurred. By comprehensively considering acceleration abnormality, track abnormality and direction abnormality, the invention achieves vehicle collision accident detection and can quickly and accurately detect vehicle collision accidents in a scene.

Description

Vision-based vehicle collision detection method, intelligent terminal and storage medium
Technical Field
The invention relates to the technical field of vehicle detection, in particular to a vehicle collision detection method based on vision, an intelligent terminal and a storage medium.
Background
With the increase in urban vehicles and the growing density of traffic flow, urban road supervision faces an increasingly severe situation. The traditional non-automated traffic supervision mode consumes a large amount of manpower, is limited by the resolving power and fatigue of human eyes, and yields unsatisfactory accident troubleshooting and low supervision efficiency.
In recent years, with the rapid development of computer vision technology, vision-based urban intelligent traffic systems have emerged. By monitoring characteristics such as traffic flow, speed and vehicle tracks in the road in real time, an intelligent traffic system can analyse the traffic condition in the scene, detect abnormal events occurring in the road, and take corresponding scheduling and early-warning measures. Compared with traditional manual troubleshooting, vehicle abnormal behavior detection based on deep learning not only saves manpower and material resources but also improves the accuracy of accident troubleshooting.
Among the many types of abnormal traffic events, a vehicle collision usually indicates that a traffic accident has occurred, and whether responders can react in time after the accident and take corresponding measures such as rescue, evacuation and isolation greatly influences the survival rate of accident victims and the traffic condition at the accident scene. Therefore, automatically, rapidly and accurately detecting collision accidents in traffic monitoring video is one of the indispensable core capabilities of an intelligent traffic system. In addition, a huge amount of traffic monitoring video data is generated every year, and searching this data for accidents and analysing it is very laborious. By intelligently searching for and detecting the parts of the video in which accidents occur, the workload of manually analysing the video can be greatly reduced.
In the prior art, a generative adversarial network (GAN) is used to predict the frame following four consecutive video frames. During training, the generator is trained on normal video samples so that it predicts normal video well; during testing, the generator produces predicted frames with small error on normal video, but it cannot predict abnormal video well, so its predicted frames have a large error relative to the original frames. The error between the predicted frame and the original frame is therefore used to detect abnormal behavior. The drawbacks of this method are: the algorithm is designed on the assumption that a generator trained on a limited set of normal samples can predict all normal samples well; however, it is impossible to collect all types of normal behavior in reality, and the effect on atypical normal samples is not ideal. In addition, as an end-to-end abnormal-behavior detection method based on deep learning, its complex deep neural network structure prevents the algorithm from meeting the real-time requirements of practical applications.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
The invention mainly aims to provide a vehicle collision detection method based on vision, an intelligent terminal and a storage medium, and aims to solve the problem that a vehicle collision accident in a scene cannot be detected quickly and accurately in the prior art.
In order to achieve the above object, the present invention provides a vision-based vehicle collision detection method, including the steps of:
detecting vehicles in the traffic monitoring video, framing the vehicles in the traffic monitoring video with rectangular frames, and outputting the vehicle rectangular frames and vehicle numbers;
tracking the detected vehicles, judging whether the vehicle rectangular frames of different vehicles overlap, and extracting the acceleration, track and direction change of the vehicles;
when the vehicle rectangular frames overlap, obtaining an abnormality index from the extracted acceleration, track and direction change of the vehicles, and judging whether the abnormality index is greater than a preset threshold value;
and when the abnormality index is greater than the preset threshold value, determining that a vehicle collision accident has occurred.
Optionally, the vision-based vehicle collision detection method, wherein the detecting a vehicle in the traffic monitoring video specifically includes:
and detecting vehicles in the traffic monitoring video by using a Mask R-CNN deep neural network.
Optionally, the vision-based vehicle collision detection method, wherein the tracking the detected vehicle specifically includes:
calculating to obtain a centroid coordinate of the vehicle rectangular frame according to the width and the height of the vehicle rectangular frame;
calculating the Euclidean distance between the centroid coordinate of the vehicle rectangular frame of the newly detected vehicle and the centroid coordinate of the existing vehicle rectangular frame;
updating the centroid coordinates of the existing vehicle based on the current set of centroids and the previously stored shortest euclidean distances of the centroids;
allocating a new vehicle number to the newly detected vehicle, and storing the centroid coordinate;
and logging out the vehicle information which disappears in the current frame.
Optionally, in the vision-based vehicle collision detection method, the determining whether the vehicle rectangular frames of different vehicles overlap specifically includes:
respectively obtaining a vehicle rectangular frame a of a vehicle A and a vehicle rectangular frame b of a vehicle B, wherein x and y represent the centroid coordinates of a vehicle rectangular frame, and α and β represent its width and height respectively;
judging whether (2×|x_a − x_b| < α_a + α_b) ∧ (2×|y_a − y_b| < β_a + β_b) is satisfied, thereby determining whether the vehicle rectangular frames of the vehicle A and the vehicle B overlap.
Optionally, in the vision-based vehicle collision detection method, after the determining whether the vehicle rectangular frames of the vehicle A and the vehicle B overlap, the method includes:
when (2×|x_a − x_b| < α_a + α_b) ∧ (2×|y_a − y_b| < β_a + β_b) is satisfied, the vehicle rectangular frames of the vehicle A and the vehicle B overlap;
extracting the difference between the centroids of the vehicle rectangular frames of the same vehicle in images a first preset number of frames apart as the motion direction vector μ of the vehicle, whose modulus is |μ| = √(μ_x² + μ_y²) and whose normalized motion direction vector is μ̂ = μ/|μ|;
When | μ | is greater than a threshold, storing the normalized direction vector of the vehicle in each frame; otherwise, the data is not stored;
setting mu1And mu2For the directional vectors of two vehicles with overlapping rectangular borders of the vehicles, the angle θ between the tracks of the two vehicles is
Figure BDA0002746704740000043
setting the playing frame rate of the video as FR, the time interval between two adjacent frames is Δt = 1/FR;
setting the centroid positions of the vehicle rectangular frame of the vehicle in two images I frames apart as c₁ and c₂, the speed of the vehicle is v = |c₂ − c₁|/(I × Δt);
setting the height of the image as H and the height of the vehicle rectangular detection frame as h, the normalized speed of the vehicle is v̂ = v × H/h;
setting the normalized speeds of the vehicle in two images I frames apart as v̂₁ and v̂₂, the acceleration of the vehicle is a = (v̂₂ − v̂₁)/(I × Δt).
Optionally, the vision-based vehicle collision detection method, wherein obtaining an abnormality index according to the extracted acceleration, trajectory, and direction change of the vehicle, and determining whether the abnormality index is greater than a preset threshold specifically includes:
calculating the average acceleration A_before over a second preset number of frames before the vehicle rectangular frames overlap and the maximum acceleration A_after over the second preset number of frames after the overlap, and comparing the difference between A_after and A_before with a preset condition to determine the acceleration abnormality factor α;
if the vehicle rectangular frames overlap, judging whether the included angle of the vehicle tracks satisfies θ ∈ (θ_L, θ_H), and if so, determining the track abnormality factor β from a condition predefined on the value of θ;
calculating the angle θ of each vehicle relative to its own track over a first preset number of frames, and determining the direction abnormality factor γ from a predefined condition based on the angle of each vehicle;
fitting the three separately determined abnormality factors into an abnormality function f(α, β, γ), generating a score between 0 and 1 by the abnormality function f(α, β, γ), and judging whether the score is greater than a preset threshold value.
Optionally, in the vision-based vehicle collision detection method, when the abnormality index is greater than a preset threshold value, determining that a vehicle collision accident has occurred is specifically:
if the score is greater than the preset threshold value, determining that a vehicle collision accident has occurred.
Optionally, in the vision-based vehicle collision detection method, after the judging whether the included angle of the vehicle tracks satisfies θ ∈ (θ_L, θ_H), the method further includes:
when it is not satisfied, determining β from the distance between θ and the predefined interval according to a predefined condition set.
In addition, to achieve the above object, the present invention further provides an intelligent terminal, wherein the intelligent terminal includes: a memory, a processor and a vision-based vehicle collision detection program stored on the memory and executable on the processor, the vision-based vehicle collision detection program when executed by the processor implementing the steps of the vision-based vehicle collision detection method as described above.
Further, to achieve the above object, the present invention also provides a storage medium storing a vision-based vehicle collision detection program that realizes the steps of the vision-based vehicle collision detection method as described above when executed by a processor.
The method comprises the steps of detecting vehicles in a traffic monitoring video, framing the vehicles with rectangular frames, and outputting the vehicle rectangular frames and vehicle numbers; tracking the detected vehicles, judging whether the vehicle rectangular frames of different vehicles overlap, and extracting the acceleration, track and direction change of the vehicles; when the vehicle rectangular frames overlap, obtaining an abnormality index from the extracted acceleration, track and direction change and judging whether the abnormality index is greater than a preset threshold value; and when the abnormality index is greater than the preset threshold value, determining that a vehicle collision accident has occurred. By extracting and analysing the acceleration, track and direction features of the vehicles and comprehensively considering acceleration abnormality, track abnormality and direction abnormality, the invention achieves vehicle collision accident detection and can quickly and accurately detect vehicle collision accidents in a scene.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a vision-based vehicle collision detection method of the present invention;
FIG. 2 is a flow chart of vehicle accident detection in a preferred embodiment of the vision-based vehicle collision detection method of the present invention;
fig. 3 is a schematic operating environment diagram of an intelligent terminal according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As shown in fig. 1, the method for detecting a vehicle collision based on vision according to a preferred embodiment of the present invention includes the following steps:
and step S10, detecting the vehicles in the traffic monitoring video, framing the vehicles in the traffic monitoring video through the rectangular frame, and outputting the rectangular frame of the vehicles and the vehicle numbers.
Specifically, the vehicles in the traffic monitoring video are detected using a Mask R-CNN deep neural network. Mask R-CNN is a region-based convolutional neural network that applies deep learning to object detection. R-CNN builds on the convolutional neural network (CNN), a feedforward neural network with convolutional computation and a deep structure; owing to its hierarchical structure, a CNN has feature-learning capability and can perform translation-invariant classification of input information, and is therefore also called a translation-invariant artificial neural network. Combined with algorithms such as linear regression and support vector machines (SVM), this realizes the object detection technique.
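The patent names Mask R-CNN but leaves the post-processing of raw detections implicit. As an illustration only, the detector output (boxes, class labels, confidence scores) could be filtered down to numbered vehicle boxes as sketched below; the COCO-style class ids (3 = car, 6 = bus, 8 = truck) and the score threshold are assumptions, not details fixed by the patent.

```python
# Sketch: reducing raw detector output to numbered vehicle boxes.
# VEHICLE_CLASSES and SCORE_THRESHOLD are assumed values.
VEHICLE_CLASSES = {3, 6, 8}   # car, bus, truck in COCO labelling
SCORE_THRESHOLD = 0.5

def select_vehicle_boxes(boxes, labels, scores):
    """Keep detections whose class is a vehicle and whose score passes
    the threshold; return (vehicle number, box) pairs numbered in order."""
    vehicles = []
    for box, label, score in zip(boxes, labels, scores):
        if label in VEHICLE_CLASSES and score >= SCORE_THRESHOLD:
            vehicles.append(box)
    return list(enumerate(vehicles))
```

In practice the boxes would come from the Mask R-CNN forward pass on each video frame; here the function is detector-agnostic.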
Further, after the vehicle in the traffic monitoring video is detected, the vehicle in the traffic monitoring video is framed out through the rectangular frame (i.e., the vehicle in the video image is completely framed out through the rectangular frame), and then the vehicle rectangular frame (i.e., the rectangular frame in which the vehicle is already framed) and the vehicle number are output.
Step S20, the detected vehicles are tracked, whether or not the rectangular frames of the vehicles overlap each other is determined, and the acceleration, trajectory, and change in direction of the vehicles are extracted.
Specifically, as shown in fig. 2, when the detected vehicles are tracked, the centroid coordinates (the centroid being the geometric center of the frame) of each vehicle rectangular frame are obtained from the width and height of the rectangular frame detected in step S10; the Euclidean distance (the commonly adopted distance definition, i.e. the true distance between two points in m-dimensional space or the natural length of a vector, which in two- and three-dimensional space is the actual distance between the two points) between the centroid coordinate of the vehicle rectangular frame of each newly detected vehicle and the centroid coordinates of the existing vehicle rectangular frames is calculated; the centroid coordinates of the existing vehicles are updated based on the current set of centroids and the shortest Euclidean distances to the previously stored centroids; new vehicle numbers are assigned to newly detected vehicles and their centroid coordinates are stored; and the information of vehicles that have disappeared in the current frame (i.e. the vehicle target information) is logged out.
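The tracking procedure above can be sketched as follows. The greedy nearest-centroid association and the box format (top-left x, y, width, height) are illustrative assumptions; the patent fixes only that matching uses the shortest Euclidean distance between centroids, that new vehicles get new numbers, and that vanished vehicles are logged out.

```python
import math

class CentroidTracker:
    """Minimal sketch of the centroid-based tracking described above."""

    def __init__(self):
        self.next_id = 0
        self.centroids = {}  # vehicle number -> (x, y) centroid

    @staticmethod
    def centroid(box):
        # Box is assumed to be (top-left x, top-left y, width, height).
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    def update(self, boxes):
        new_centroids = [self.centroid(b) for b in boxes]
        assigned = {}
        unmatched = list(range(len(new_centroids)))
        # Match each existing vehicle to its nearest new centroid.
        for vid, old in sorted(self.centroids.items()):
            if not unmatched:
                break
            j = min(unmatched, key=lambda k: math.dist(old, new_centroids[k]))
            assigned[vid] = new_centroids[j]
            unmatched.remove(j)
        # Register newly detected vehicles under fresh numbers.
        for j in unmatched:
            assigned[self.next_id] = new_centroids[j]
            self.next_id += 1
        # Vehicles absent from `assigned` have disappeared and are logged out.
        self.centroids = assigned
        return assigned
```

A vehicle keeps its number across frames as long as a detection stays closest to its stored centroid, which is what the extracted track and direction features rely on.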
Further, assuming that a vehicle A and a vehicle B exist, a vehicle rectangular frame a and a vehicle rectangular frame b of the vehicle A and the vehicle B are respectively obtained, where x and y represent the centroid coordinates of a vehicle rectangular frame, and α and β represent its width and height respectively; whether (2×|x_a − x_b| < α_a + α_b) ∧ (2×|y_a − y_b| < β_a + β_b) is satisfied is judged, thereby determining whether the vehicle rectangular frames of the vehicle A and the vehicle B overlap.
When (2×|x_a − x_b| < α_a + α_b) ∧ (2×|y_a − y_b| < β_a + β_b) is satisfied, the vehicle rectangular frames of the vehicle A and the vehicle B overlap; otherwise, the vehicle rectangular frames of the vehicle A and the vehicle B do not overlap.
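The overlap condition is a direct comparison on centroids and box sizes, sketched below with boxes given as (centroid x, centroid y, width α, height β):

```python
def frames_overlap(box_a, box_b):
    """Overlap test from the text: two axis-aligned rectangles overlap
    iff 2|x_a - x_b| < alpha_a + alpha_b and 2|y_a - y_b| < beta_a + beta_b,
    i.e. the centroid gap on each axis is less than the mean extent."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    return (2 * abs(xa - xb) < wa + wb) and (2 * abs(ya - yb) < ha + hb)
```

This is equivalent to the usual separating-axis test for axis-aligned boxes, written in the centroid/width/height parameterisation the patent uses.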
If the vehicle rectangular frames overlap, the difference between the centroids of the vehicle rectangular frames of the same vehicle in images a first preset number of frames apart (for example, the first preset number is 5, i.e. every 5 frames) is extracted as the motion direction vector μ of the vehicle, whose modulus is |μ| = √(μ_x² + μ_y²), with the normalized motion direction vector μ̂ = μ/|μ|.
When |μ| is greater than a threshold, the normalized direction vector of the vehicle in each frame is stored; otherwise it is not stored. Setting μ₁ and μ₂ as the direction vectors of two vehicles whose vehicle rectangular frames overlap, the included angle θ between the tracks of the two vehicles is θ = arccos((μ₁·μ₂)/(|μ₁|×|μ₂|)).
Setting the playing frame rate of the video as FR, the time interval between two adjacent frames is Δt = 1/FR.
Setting the centroid positions of the vehicle rectangular frame of the vehicle in two images I frames apart as c₁ and c₂, the speed of the vehicle is v = |c₂ − c₁|/(I × Δt).
Setting the height of the image as H and the height of the vehicle rectangular detection frame as h, the normalized speed of the vehicle is v̂ = v × H/h.
Setting the normalized speeds of the vehicle in two images I frames apart as v̂₁ and v̂₂, the acceleration of the vehicle is a = (v̂₂ − v̂₁)/(I × Δt).
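The motion features above can be sketched in a few helper functions. The frame gap I = 5 and frame rate FR = 25 are assumed example values, and the normalisation v × H/h is a reconstruction of a formula that survives only as an image in the source, so treat it as an assumption.

```python
import math

I = 5      # frame gap between sampled images (first preset number; assumed)
FR = 25.0  # playback frame rate in fps (assumed example value)

def direction(c_prev, c_now):
    """Motion direction vector mu between centroids I frames apart,
    its modulus |mu|, and the normalized direction vector mu/|mu|."""
    mu = (c_now[0] - c_prev[0], c_now[1] - c_prev[1])
    mod = math.hypot(*mu)
    unit = (mu[0] / mod, mu[1] / mod) if mod else (0.0, 0.0)
    return mu, mod, unit

def track_angle(mu1, mu2):
    """Included angle theta between two vehicle tracks:
    theta = arccos(mu1.mu2 / (|mu1||mu2|)), clamped for safety."""
    dot = mu1[0] * mu2[0] + mu1[1] * mu2[1]
    denom = math.hypot(*mu1) * math.hypot(*mu2)
    return math.acos(max(-1.0, min(1.0, dot / denom)))

def normalised_speed(c1, c2, H, h):
    """Speed over I frames (v = |c2 - c1| / (I * dt), dt = 1/FR),
    scaled by image height H over box height h to compensate for
    perspective; the exact scaling is an assumed reconstruction."""
    dt = 1.0 / FR
    v = math.dist(c1, c2) / (I * dt)
    return v * H / h

def acceleration(v1, v2):
    """Acceleration from two normalised speeds I frames apart."""
    return (v2 - v1) / (I * (1.0 / FR))
```

For example, two perpendicular direction vectors give θ = π/2, the value the track-abnormality factor would inspect against (θ_L, θ_H).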
Step S30: when the vehicle rectangular frames overlap, obtaining an abnormality index from the extracted acceleration, track and direction change of the vehicles, and judging whether the abnormality index is greater than a preset threshold value.
Specifically, the average acceleration A_before of a second preset number of frames (for example, the second preset number is 15) before the vehicle rectangular frames overlap and the maximum acceleration A_after of the second preset number of frames after the overlap are calculated (i.e. the average acceleration A_before of the vehicle rectangular detection frame over the 15 frames before the overlap and the maximum acceleration A_after over the 15 frames after the overlap), and the difference between A_after and A_before is compared with a preset condition to determine the acceleration abnormality factor α. If the vehicle rectangular frames overlap, it is judged whether the included angle of the vehicle tracks satisfies θ ∈ (θ_L, θ_H); if so, the track abnormality factor β is determined from a condition predefined on the value of θ, and if not, β is determined from the distance between θ and the predefined interval according to a predefined condition set. The angle θ of each vehicle relative to its own track over a first preset number of frames (for example, the first preset number is 5) is calculated, and the direction abnormality factor γ is determined from a predefined condition based on the angle of each vehicle. The three separately determined abnormality factors are fitted into an abnormality function f(α, β, γ), which generates a score between 0 and 1, and it is determined whether the score is greater than a preset threshold value (e.g. T in fig. 2).
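The patent states only that f(α, β, γ) maps the three abnormality factors to a score in [0, 1] that is compared against a threshold T; it does not give the functional form. A weighted average with equal (assumed) weights is one minimal realisation, sketched here purely for illustration:

```python
def anomaly_score(alpha, beta, gamma, w=(1 / 3, 1 / 3, 1 / 3)):
    """One possible abnormality function f(alpha, beta, gamma):
    a weighted average of the three factors, clipped to [0, 1].
    The equal weights are an assumption, not from the patent."""
    s = w[0] * alpha + w[1] * beta + w[2] * gamma
    return min(1.0, max(0.0, s))

def is_collision(alpha, beta, gamma, threshold=0.5):
    """Step S40 decision: report a collision accident when the
    score exceeds the preset threshold T (0.5 is an assumed value)."""
    return anomaly_score(alpha, beta, gamma) > threshold
```

Any monotone combination of the three factors into [0, 1] would fit the description; the weights and threshold would be tuned on labelled accident footage.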
Step S40: when the abnormality index is greater than the preset threshold value, determining that a vehicle collision accident has occurred.
Specifically, the abnormality function f(α, β, γ) generates a score between 0 and 1; if the score is greater than the preset threshold value (T), it is determined that a vehicle collision accident has occurred, otherwise the situation is judged normal.
The method uses Mask R-CNN to detect vehicle targets in a complex traffic scene, providing a basis for the subsequent vehicle motion feature extraction and collision accident detection. By respectively extracting and analysing the acceleration, track and direction features of the vehicles and comprehensively considering acceleration abnormality, track abnormality and direction abnormality, the method achieves vehicle collision accident detection and can quickly and accurately detect vehicle collision accidents in a scene.
Further, as shown in fig. 3, based on the above vision-based vehicle collision detection method, the present invention also provides an intelligent terminal, which includes a processor 10, a memory 20 and a display 30. Fig. 3 shows only some of the components of the smart terminal, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may be an internal storage unit of the intelligent terminal in some embodiments, such as a hard disk or a memory of the intelligent terminal. The memory 20 may also be an external storage device of the Smart terminal in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the Smart terminal. Further, the memory 20 may also include both an internal storage unit and an external storage device of the smart terminal. The memory 20 is used for storing application software installed in the intelligent terminal and various data, such as program codes of the installed intelligent terminal. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 has stored thereon a vision-based vehicle collision detection program 40, and the vision-based vehicle collision detection program 40 is executable by the processor 10 to implement the vision-based vehicle collision detection method of the present application.
The processor 10 may be, in some embodiments, a central processing unit (CPU), microprocessor or other data processing chip for executing the program codes stored in the memory 20 or processing data, for example executing the vision-based vehicle collision detection method.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like in some embodiments. The display 30 is used for displaying information at the intelligent terminal and for displaying a visual user interface. The components 10-30 of the intelligent terminal communicate with each other via a system bus.
In one embodiment, the following steps are implemented when the processor 10 executes the vision-based vehicle collision detection program 40 in the memory 20:
detecting vehicles in the traffic monitoring video, framing the vehicles in the traffic monitoring video with rectangular frames, and outputting the vehicle rectangular frames and vehicle numbers;
tracking the detected vehicles, judging whether the vehicle rectangular frames of different vehicles overlap, and extracting the acceleration, track and direction change of the vehicles;
when the vehicle rectangular frames overlap, obtaining an abnormality index from the extracted acceleration, track and direction change of the vehicles, and judging whether the abnormality index is greater than a preset threshold value;
and when the abnormality index is greater than the preset threshold value, determining that a vehicle collision accident has occurred.
The method for detecting the vehicles in the traffic monitoring video specifically comprises the following steps:
and detecting vehicles in the traffic monitoring video by using a Mask R-CNN deep neural network.
The tracking of the detected vehicles specifically comprises:
calculating the centroid coordinates of each vehicle's rectangular frame from the frame's width and height;
calculating the Euclidean distances between the centroid coordinates of the newly detected vehicles' rectangular frames and those of the existing vehicles' rectangular frames;
updating the centroid coordinates of the existing vehicles by matching the current set of centroids to the previously stored centroids at the shortest Euclidean distance;
allocating a new vehicle number to each newly detected vehicle and storing its centroid coordinates;
and de-registering the information of vehicles that have disappeared from the current frame.
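The tracking steps above can be sketched as a minimal centroid tracker in Python. This is an illustrative reconstruction only: the `(x, y, width, height)` box format, the greedy nearest-centroid matching, and the `max_missing` de-registration rule are assumptions of the sketch, not details fixed by the text.

```python
import math
from itertools import count

class CentroidTracker:
    """Minimal centroid tracker: match new detections to known vehicles
    by the shortest Euclidean distance between box centroids."""

    def __init__(self, max_missing=5):
        self.next_id = count()      # vehicle number generator
        self.centroids = {}         # vehicle id -> (cx, cy)
        self.missing = {}           # vehicle id -> consecutive missed frames
        self.max_missing = max_missing

    @staticmethod
    def centroid(box):
        x, y, w, h = box            # top-left corner plus width and height
        return (x + w / 2.0, y + h / 2.0)

    def update(self, boxes):
        new_centroids = [self.centroid(b) for b in boxes]
        unmatched = set(range(len(new_centroids)))
        # greedily match each known vehicle to its nearest new centroid
        for vid, old in list(self.centroids.items()):
            best, best_d = None, float("inf")
            for i in unmatched:
                d = math.dist(old, new_centroids[i])
                if d < best_d:
                    best, best_d = i, d
            if best is not None:
                self.centroids[vid] = new_centroids[best]
                self.missing[vid] = 0
                unmatched.discard(best)
            else:
                # vehicle not seen in this frame: count toward de-registration
                self.missing[vid] += 1
                if self.missing[vid] > self.max_missing:
                    del self.centroids[vid], self.missing[vid]
        # assign fresh vehicle numbers to unmatched detections
        for i in unmatched:
            vid = next(self.next_id)
            self.centroids[vid] = new_centroids[i]
            self.missing[vid] = 0
        return dict(self.centroids)
```

Greedy nearest-neighbour matching keeps the sketch short; a production tracker would typically solve the assignment jointly (e.g. Hungarian matching).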
The judging whether the vehicle rectangular frames between vehicles overlap specifically comprises:
obtaining the vehicle rectangular frames $a$ and $b$ of vehicle A and vehicle B respectively, where $x$ and $y$ denote the centroid coordinates of a vehicle rectangular frame, and $\alpha$ and $\beta$ denote its width and height respectively;
judging whether $(2\times|x_a-x_b|<\alpha_a+\alpha_b)\land(2\times|y_a-y_b|<\beta_a+\beta_b)$ is satisfied, thereby determining whether the vehicle rectangular frames of vehicle A and vehicle B overlap.
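The overlap condition can be transcribed directly, with each box given as (centroid x, centroid y, width, height) to match the $x$, $y$, $\alpha$, $\beta$ notation above (the function name is illustrative):

```python
def boxes_overlap(box_a, box_b):
    """True iff two axis-aligned rectangles overlap. Each box is
    (centroid x, centroid y, width, height); the rectangles intersect
    exactly when, on each axis, twice the centroid distance is smaller
    than the sum of the two extents on that axis."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    return 2 * abs(xa - xb) < wa + wb and 2 * abs(ya - yb) < ha + hb
```

This is the standard separating-axis test for axis-aligned rectangles, rewritten around centroids; boxes that merely touch along an edge do not count as overlapping because the inequalities are strict.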
The judging whether the vehicle rectangular frames of vehicle A and vehicle B overlap is followed by:
when $(2\times|x_a-x_b|<\alpha_a+\alpha_b)\land(2\times|y_a-y_b|<\beta_a+\beta_b)$ is satisfied, the vehicle rectangular frames of vehicle A and vehicle B overlap;
extracting, at intervals of a first preset number of frames, the difference between the centroids of the same vehicle's rectangular frames as the motion direction vector $\mu = (\mu_x, \mu_y)$ of the vehicle, whose modulus is
$|\mu| = \sqrt{\mu_x^2 + \mu_y^2}$,
with a normalized motion direction vector of
$\hat{\mu} = \mu / |\mu|$;
When | μ | is greater than a threshold, storing the normalized direction vector of the vehicle in each frame; otherwise, the data is not stored;
setting $\mu_1$ and $\mu_2$ as the direction vectors of the two vehicles whose rectangular frames overlap, the angle $\theta$ between the tracks of the two vehicles is
$\theta = \arccos\dfrac{\mu_1 \cdot \mu_2}{|\mu_1|\,|\mu_2|}$;
setting the playing frame rate of the video as $FR$, the time interval between two adjacent frames of video is
$t = \dfrac{1}{FR}$;
setting the centroid positions of the vehicle's rectangular frame in two images $I$ frames apart as $c_1$ and $c_2$, the speed of the vehicle is
$v = \dfrac{|c_2 - c_1|}{I/FR} = \dfrac{FR\,|c_2 - c_1|}{I}$;
setting the height of the image as $H$ and the height of the vehicle's rectangular detection frame as $h$, the normalized speed of the vehicle is
$\hat{v} = \dfrac{v}{h/H} = \dfrac{vH}{h}$;
setting the normalized speeds of the vehicle in two images $I$ frames apart as $\hat{v}_1$ and $\hat{v}_2$, the acceleration of the vehicle is
$a = \dfrac{\hat{v}_2 - \hat{v}_1}{I/FR} = \dfrac{FR\,(\hat{v}_2 - \hat{v}_1)}{I}$.
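The kinematic quantities defined above (direction vector, inter-track angle, normalized speed, acceleration) can be computed as in the following Python sketch. The function names are invented, and the $vH/h$ reading of the normalized-speed formula is an assumption, since the original equations were rendered as images.

```python
import math

def direction(c_prev, c_curr):
    """Motion direction vector mu between two centroids, its modulus |mu|,
    and the normalized direction mu/|mu| (None for a static vehicle)."""
    mu = (c_curr[0] - c_prev[0], c_curr[1] - c_prev[1])
    mod = math.hypot(mu[0], mu[1])
    unit = (mu[0] / mod, mu[1] / mod) if mod > 0 else None
    return mu, mod, unit

def track_angle(mu1, mu2):
    """Angle theta between two direction vectors:
    theta = arccos(mu1 . mu2 / (|mu1| |mu2|))."""
    dot = mu1[0] * mu2[0] + mu1[1] * mu2[1]
    cos_t = dot / (math.hypot(*mu1) * math.hypot(*mu2))
    return math.acos(max(-1.0, min(1.0, cos_t)))  # clamp against rounding

def normalized_speed(c1, c2, i_frames, fr, h_box, h_img):
    """Pixel speed over i_frames at frame rate fr, scaled by H/h to
    compensate perspective (this normalization is an assumed reading)."""
    v = math.dist(c1, c2) * fr / i_frames
    return v * h_img / h_box

def acceleration(v1, v2, i_frames, fr):
    """Acceleration from two normalized speeds taken i_frames apart."""
    return (v2 - v1) * fr / i_frames
```

Clamping the cosine before `math.acos` guards against floating-point values fractionally outside $[-1, 1]$.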
The obtaining of an abnormality index from the extracted acceleration, track and direction changes of the vehicles, and the judging whether the abnormality index is greater than a preset threshold, specifically comprise:
calculating the average acceleration $A_{before}$ over a second preset number of frames before the vehicle rectangular frames overlap and the maximum acceleration $A_{after}$ over the second preset number of frames after the overlap, then comparing the difference between $A_{after}$ and $A_{before}$ with preset conditions to determine the acceleration abnormality factor $\alpha$;
if the vehicle rectangular frames overlap, judging whether the angle between the vehicle tracks satisfies $\theta \in (\theta_L, \theta_H)$; if so, determining the track abnormality factor $\beta$ from conditions predefined on the value of $\theta$;
calculating the change in each vehicle's direction angle $\theta$ over intervals of the first preset number of frames, and determining the direction abnormality factor $\gamma$ from predefined conditions on each vehicle's angle change;
fitting the three separately determined abnormality factors into an abnormality function $f(\alpha, \beta, \gamma)$ that produces a score between 0 and 1, and judging whether the score is greater than the preset threshold.
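The text does not disclose the fitted form of the abnormality function $f(\alpha, \beta, \gamma)$; the sketch below uses a weighted average clamped to $[0, 1]$ purely as a placeholder, with invented weights and threshold.

```python
def abnormality_score(alpha, beta, gamma, weights=(0.4, 0.3, 0.3)):
    """Combine the three abnormality factors into a score in [0, 1].
    A clamped weighted average is only one plausible choice of
    f(alpha, beta, gamma); the fitted form is left unspecified."""
    s = weights[0] * alpha + weights[1] * beta + weights[2] * gamma
    return max(0.0, min(1.0, s))

def is_collision(alpha, beta, gamma, threshold=0.5):
    """Declare a collision when the score exceeds a preset threshold
    (the 0.5 default is illustrative, not taken from the text)."""
    return abnormality_score(alpha, beta, gamma) > threshold
```

In practice the weights and threshold would be fitted on labelled accident footage rather than fixed by hand.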
The determining of a vehicle collision accident when the abnormality index is greater than the preset threshold is specifically:
if the score is greater than the preset threshold, determining that a vehicle collision accident has occurred.
The judging whether the angle between the vehicle tracks satisfies $\theta \in (\theta_L, \theta_H)$ is further followed by:
when it is not satisfied, determining $\beta$ from the set of predefined conditions according to the distance of $\theta$ from the interval $(\theta_L, \theta_H)$.
The present invention also provides a storage medium having stored thereon a vision-based vehicle collision detection program that, when executed by a processor, implements the steps of the vision-based vehicle collision detection method described above.
In summary, the present invention provides a vision-based vehicle collision detection method, an intelligent terminal and a storage medium. The method comprises: detecting vehicles in the traffic monitoring video, framing each vehicle with a rectangular frame, and outputting the vehicles' rectangular frames and vehicle numbers; tracking the detected vehicles, judging whether the vehicles' rectangular frames overlap, and extracting the vehicles' acceleration, track and direction changes; when the vehicles' rectangular frames overlap, obtaining an abnormality index from the extracted acceleration, track and direction changes, and judging whether the abnormality index is greater than a preset threshold; and when the abnormality index is greater than the preset threshold, determining that a vehicle collision accident has occurred. By extracting and analyzing the acceleration, track and direction features of vehicles and jointly considering acceleration, track and direction abnormalities, the invention can detect vehicle collision accidents in a scene quickly and accurately.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware (such as a processor, a controller, etc.), and the program may be stored in a computer readable storage medium, and when executed, the program may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A vision-based vehicle collision detection method, characterized in that the vision-based vehicle collision detection method comprises:
detecting vehicles in the traffic monitoring video, framing each vehicle with a rectangular frame, and outputting the vehicles' rectangular frames and vehicle numbers;
tracking the detected vehicles, judging whether the vehicles' rectangular frames overlap, and extracting the vehicles' acceleration, track and direction changes;
when the vehicles' rectangular frames overlap, obtaining an abnormality index from the extracted acceleration, track and direction changes, and judging whether the abnormality index is greater than a preset threshold;
and when the abnormality index is greater than the preset threshold, determining that a vehicle collision accident has occurred.
2. The vision-based vehicle collision detection method according to claim 1, wherein the detecting vehicles in the traffic surveillance video is specifically:
detecting vehicles in the traffic monitoring video with a Mask R-CNN deep neural network.
3. The vision-based vehicle collision detection method according to claim 1, wherein the tracking of the detected vehicles specifically comprises:
calculating the centroid coordinates of each vehicle's rectangular frame from the frame's width and height;
calculating the Euclidean distances between the centroid coordinates of the newly detected vehicles' rectangular frames and those of the existing vehicles' rectangular frames;
updating the centroid coordinates of the existing vehicles by matching the current set of centroids to the previously stored centroids at the shortest Euclidean distance;
allocating a new vehicle number to each newly detected vehicle and storing its centroid coordinates;
and de-registering the information of vehicles that have disappeared from the current frame.
4. The vision-based vehicle collision detection method according to claim 3, wherein the determining whether the rectangular vehicle borders between the vehicles overlap specifically comprises:
obtaining the vehicle rectangular frames $a$ and $b$ of vehicle A and vehicle B respectively, where $x$ and $y$ denote the centroid coordinates of a vehicle rectangular frame, and $\alpha$ and $\beta$ denote its width and height respectively;
judging whether $(2\times|x_a-x_b|<\alpha_a+\alpha_b)\land(2\times|y_a-y_b|<\beta_a+\beta_b)$ is satisfied, thereby determining whether the vehicle rectangular frames of vehicle A and vehicle B overlap.
5. The vision-based vehicle collision detection method of claim 4, wherein the determining whether the vehicle rectangular frames of vehicle A and vehicle B overlap is followed by:
when $(2\times|x_a-x_b|<\alpha_a+\alpha_b)\land(2\times|y_a-y_b|<\beta_a+\beta_b)$ is satisfied, the vehicle rectangular frames of vehicle A and vehicle B overlap;
extracting, at intervals of a first preset number of frames, the difference between the centroids of the same vehicle's rectangular frames as the motion direction vector $\mu = (\mu_x, \mu_y)$ of the vehicle, whose modulus is
$|\mu| = \sqrt{\mu_x^2 + \mu_y^2}$,
with a normalized motion direction vector of
$\hat{\mu} = \mu / |\mu|$;
When | μ | is greater than a threshold, storing the normalized direction vector of the vehicle in each frame; otherwise, the data is not stored;
setting $\mu_1$ and $\mu_2$ as the direction vectors of the two vehicles whose rectangular frames overlap, the angle $\theta$ between the tracks of the two vehicles is
$\theta = \arccos\dfrac{\mu_1 \cdot \mu_2}{|\mu_1|\,|\mu_2|}$;
setting the playing frame rate of the video as $FR$, the time interval between two adjacent frames of video is
$t = \dfrac{1}{FR}$;
setting the centroid positions of the vehicle's rectangular frame in two images $I$ frames apart as $c_1$ and $c_2$, the speed of the vehicle is
$v = \dfrac{|c_2 - c_1|}{I/FR} = \dfrac{FR\,|c_2 - c_1|}{I}$;
setting the height of the image as $H$ and the height of the vehicle's rectangular detection frame as $h$, the normalized speed of the vehicle is
$\hat{v} = \dfrac{v}{h/H} = \dfrac{vH}{h}$;
setting the normalized speeds of the vehicle in two images $I$ frames apart as $\hat{v}_1$ and $\hat{v}_2$, the acceleration of the vehicle is
$a = \dfrac{\hat{v}_2 - \hat{v}_1}{I/FR} = \dfrac{FR\,(\hat{v}_2 - \hat{v}_1)}{I}$.
6. The vision-based vehicle collision detection method according to claim 5, wherein the step of obtaining an abnormality index according to the extracted acceleration, trajectory and direction change of the vehicle and determining whether the abnormality index is greater than a preset threshold value specifically comprises:
calculating the average acceleration $A_{before}$ over a second preset number of frames before the vehicle rectangular frames overlap and the maximum acceleration $A_{after}$ over the second preset number of frames after the overlap, then comparing the difference between $A_{after}$ and $A_{before}$ with preset conditions to determine the acceleration abnormality factor $\alpha$;
if the vehicle rectangular frames overlap, judging whether the angle between the vehicle tracks satisfies $\theta \in (\theta_L, \theta_H)$; if so, determining the track abnormality factor $\beta$ from conditions predefined on the value of $\theta$;
calculating the change in each vehicle's direction angle $\theta$ over intervals of the first preset number of frames, and determining the direction abnormality factor $\gamma$ from predefined conditions on each vehicle's angle change;
fitting the three separately determined abnormality factors into an abnormality function $f(\alpha, \beta, \gamma)$ that produces a score between 0 and 1, and judging whether the score is greater than the preset threshold.
7. The vision-based vehicle collision detection method according to claim 6, wherein the determining of a vehicle collision accident when the abnormality index is greater than the preset threshold is specifically:
if the score is greater than the preset threshold, determining that a vehicle collision accident has occurred.
8. The vision-based vehicle collision detection method of claim 6, wherein the judging whether the angle between the vehicle tracks satisfies $\theta \in (\theta_L, \theta_H)$ is further followed by:
when it is not satisfied, determining $\beta$ from the set of predefined conditions according to the distance of $\theta$ from the interval $(\theta_L, \theta_H)$.
9. An intelligent terminal, characterized in that, intelligent terminal includes: a memory, a processor and a vision based vehicle collision detection program stored on the memory and executable on the processor, the vision based vehicle collision detection program when executed by the processor implementing the steps of the vision based vehicle collision detection method according to any one of claims 1-8.
10. A storage medium storing a vision-based vehicle collision detection program that when executed by a processor implements the steps of the vision-based vehicle collision detection method of any one of claims 1-8.
CN202011169031.1A 2020-10-28 2020-10-28 Vision-based vehicle collision detection method, intelligent terminal and storage medium Pending CN112200131A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011169031.1A CN112200131A (en) 2020-10-28 2020-10-28 Vision-based vehicle collision detection method, intelligent terminal and storage medium


Publications (1)

Publication Number Publication Date
CN112200131A true CN112200131A (en) 2021-01-08

Family

ID=74011738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011169031.1A Pending CN112200131A (en) 2020-10-28 2020-10-28 Vision-based vehicle collision detection method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112200131A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226891A (en) * 2013-03-26 2013-07-31 中山大学 Video-based vehicle collision accident detection method and system
CN110688954A (en) * 2019-09-27 2020-01-14 上海大学 Vehicle lane change detection method based on vector operation
CN111445699A (en) * 2020-04-13 2020-07-24 黑龙江工程学院 Intersection traffic conflict discrimination method based on real-time vehicle track


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ye Rongju; Li Zhenlong; Chen Yangzhou; Xin Le: "A Vehicle Abnormal Behavior Detection Method Based on Vehicle Spatio-Temporal Diagrams", Journal of Transport Information and Safety, vol. 30, no. 04, 31 December 2012 (2012-12-31), pages 89-92 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409587B (en) * 2021-06-16 2022-11-22 北京字跳网络技术有限公司 Abnormal vehicle detection method, device, equipment and storage medium
CN113409587A (en) * 2021-06-16 2021-09-17 北京字跳网络技术有限公司 Abnormal vehicle detection method, device, equipment and storage medium
WO2022262471A1 (en) * 2021-06-16 2022-12-22 北京字跳网络技术有限公司 Anomalous vehicle detection method and apparatus, device, and storage medium
CN113378803A (en) * 2021-08-12 2021-09-10 深圳市城市交通规划设计研究中心股份有限公司 Road traffic accident detection method, device, computer and storage medium
CN114291025A (en) * 2021-12-31 2022-04-08 成都路行通信息技术有限公司 Vehicle collision detection method and system based on data segmentation aggregation distribution
CN114291025B (en) * 2021-12-31 2022-11-01 成都路行通信息技术有限公司 Vehicle collision detection method and system based on data segmentation aggregation distribution
CN114872656A (en) * 2022-04-29 2022-08-09 东风汽车集团股份有限公司 Vehicle occupant safety protection system and control method
CN114872656B (en) * 2022-04-29 2023-09-05 东风汽车集团股份有限公司 Vehicle occupant safety protection system and control method
CN114926983A (en) * 2022-05-11 2022-08-19 中国地质大学(武汉) Traffic accident emergency oriented multi-scale comprehensive sensing method
CN116152758A (en) * 2023-04-25 2023-05-23 松立控股集团股份有限公司 Intelligent real-time accident detection and vehicle tracking method
CN117630414A (en) * 2024-01-25 2024-03-01 荣耀终端有限公司 Acceleration sensor calibration method, folding electronic device and storage medium
CN117630414B (en) * 2024-01-25 2024-05-24 荣耀终端有限公司 Acceleration sensor calibration method, folding electronic device and storage medium
CN118379692A (en) * 2024-04-24 2024-07-23 山东理工职业学院 Road monitoring and identifying system and method based on computer vision

Similar Documents

Publication Publication Date Title
CN112200131A (en) Vision-based vehicle collision detection method, intelligent terminal and storage medium
US11643076B2 (en) Forward collision control method and apparatus, electronic device, program, and medium
US10733482B1 (en) Object height estimation from monocular images
US8218818B2 (en) Foreground object tracking
US8218819B2 (en) Foreground object detection in a video surveillance system
US8902053B2 (en) Method and system for lane departure warning
KR102073162B1 (en) Small object detection based on deep learning
CN111311010B (en) Vehicle risk prediction method, device, electronic equipment and readable storage medium
US20210312799A1 (en) Detecting traffic anomaly event
US20220245955A1 (en) Method and Device for Classifying Pixels of an Image
EP3769286A1 (en) Video object detection
JPWO2019111932A1 (en) Model learning device, model learning method and computer program
CN109657577B (en) Animal detection method based on entropy and motion offset
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN116245915A (en) Target tracking method based on video
CN112434601B (en) Vehicle illegal detection method, device, equipment and medium based on driving video
US20200279103A1 (en) Information processing apparatus, control method, and program
CN111765892B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN114677662A (en) Method, device, equipment and storage medium for predicting vehicle front obstacle state
US20240257535A1 (en) Systems and methods for determining road object importance based on forward facing and driver facing video data
KR102485099B1 (en) Method for data purification using meta data, and computer program recorded on record-medium for executing method therefor
CN113963322B (en) Detection model training method and device and electronic equipment
CN118135542B (en) Obstacle dynamic and static state judging method and related equipment thereof
KR102403174B1 (en) Method for data purification according to importance, and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination