CN112749210A - Vehicle collision recognition method and system based on deep learning - Google Patents


Info

Publication number
CN112749210A
CN112749210A (application CN202110061524.1A; granted publication CN112749210B)
Authority
CN
China
Prior art keywords
data
vehicle
time
model
historical
Prior art date
Legal status
Granted
Application number
CN202110061524.1A
Other languages
Chinese (zh)
Other versions
CN112749210B (en)
Inventor
朱瑞
何道峰
Current Assignee
Ubiai Information Technology Beijing Co ltd
Original Assignee
Ubiai Information Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Ubiai Information Technology Beijing Co ltd
Priority to CN202110061524.1A
Publication of CN112749210A
Application granted
Publication of CN112749210B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2474Sequence data queries, e.g. querying versioned data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2477Temporal data queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/254Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40Business processes related to the transportation industry
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Biomedical Technology (AREA)
  • Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Business, Economics & Management (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Strategic Management (AREA)
  • Probability & Statistics with Applications (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Fuzzy Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Resources & Organizations (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Computer Security & Cryptography (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a vehicle collision recognition method and system based on deep learning. The recognition method comprises the following steps. Step 1: acquiring real-time operation data of a vehicle, together with the vehicle's historical operation data and historical insurance-claim data. Step 2: preprocessing the vehicle operation data. Step 3: performing low-frequency to high-frequency data interpolation on the vehicle operation data. Step 4: performing ETL feature transformation. Step 5: performing model training and iteration through a deep learning method and mathematical-statistics principles to obtain the relevant models. Step 6: obtaining a vehicle collision recognition result from the real-time vehicle operation data combined with the relevant models. The method uses large-scale Internet of Vehicles data and models collision accidents from five angles (abnormal running track, abnormal deceleration, braking-distance prediction, abnormal parking and abnormal turning), thereby avoiding the judgment difficulties caused by complex geographic factors.

Description

Vehicle collision recognition method and system based on deep learning
Technical Field
The invention relates to the technical field of vehicle collision detection, and in particular to a vehicle collision identification method and system based on deep learning.
Background
With the rapid growth in the number of automobiles in China, road traffic accidents occur frequently, causing serious casualties and property losses; the road traffic safety situation is severe and has drawn attention across society. A large number of surveys show that after a road traffic accident, and in particular a collision, the casualties caused by delayed rescue far exceed those caused by the accident itself; learning of a vehicle collision in time is therefore essential to obtaining rescue. Owing to limitations in data collection, prior-art collision detection mainly uses a threshold on a gravitational acceleration sensor as the judgment condition. Many current detection methods poll the X, Y and Z axes of the acceleration sensor, calculate the difference between the maximum and minimum values on each axis, and then compute the combined vector over the three axes; if this vector value exceeds a predetermined value, a collision is considered to have occurred. However, when a vehicle is driven on complex road conditions, this detection technique produces false alarms or missed alarms. For example, where the road has manhole covers or speed bumps, the vehicle shakes strongly when passing over them, and some owners also brake hard at these places, so the acceleration sensor installed on the vehicle registers a large difference and a false alarm is easily produced.
Meanwhile, because body parameters differ between vehicle models and collision preset values differ between products, such collision detection is very difficult to implement and maintain across the many vehicle models and application scenarios; the accuracy of the output is insufficient, a large number of misjudgments or missed judgments result, and a correct judgment cannot be made in time when an accident occurs. Other collision detection methods have their own drawbacks: the OBD-interface-based on-board-device method is not compatible with all vehicle types, has high equipment cost, uses a simple detection algorithm and covers few collision scenarios; the video-image-based on-board-device method has high equipment cost, equipment that is easily damaged in a collision, poor cost-effectiveness of detection and computation, limited coverage of collision scenarios and low detection precision.
Disclosure of Invention
In view of the above, the present invention provides a method and a system for recognizing a vehicle collision based on deep learning, which overcome the disadvantages of the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme: a vehicle collision recognition method based on deep learning comprises the following steps:
step 1: acquiring real-time running data of a vehicle, and acquiring historical running data and historical danger data of the vehicle;
step 2: preprocessing vehicle operation data;
Step 3: performing low-frequency to high-frequency data interpolation processing on the vehicle operation data;
Step 4: performing ETL feature transformation processing;
Step 5: performing model training and iteration through a deep learning method and mathematical-statistics principles to obtain the relevant models;
step 6: and obtaining a vehicle collision recognition result according to the real-time vehicle operation data and by combining the relevant models.
Optionally, the acquiring historical vehicle operation data and historical vehicle risk data includes:
obtaining historical vehicle operating data, the data comprising: a timestamp, a GPS speed, a GPS direction, a GPS altitude, a device power status, a device location status, a vehicle movement status, a GPS longitude, and a GPS latitude;
acquiring historical insurance information of the vehicle through a third party, wherein the insurance information comprises: the time of the accident, the place of the accident and the collision position of the vehicle;
matching the insurance information with the historical vehicle operation data by insurance time; intercepting, by a data-sliding method, time-series segments of duration Ts around the insurance moment and labeling them as insurance data, while also intercepting a certain amount of non-insurance time-series data and labeling it as non-insurance data.
Optionally, the preprocessing the vehicle operation data includes:
performing time-series disorder processing on the vehicle operation data;
processing GPS positioning-drift outliers in the disorder-processed data;
performing signal-loss processing on the drift-processed data, deleting records that have no positioning signal and a speed of 0.
Optionally, the performing time-series disorder processing on the vehicle operation data includes:
for historical data used for model training, performing a mode calculation on the min records of the Ts time-series data, comparing each record with the current mode and calculating the timestamp difference; if the difference is greater than a preset time, the record is regarded as a backfill of previous data and is deleted;
for time-series data computed online in real time by the model, calculating the time difference between each record's timestamp and the current system time; if the difference is greater than the preset time, the record is regarded as a system backfill operation and is deleted;
and sorting the processed historical data or real-time data by timestamp.
Optionally, the processing of GPS positioning-drift outliers on the disorder-processed data includes:
sequentially calculating the time interval and the great-circle (earth straight-line) distance between records t and t+1;
obtaining the suspicious straight-line distances;
and locating the drift points according to the suspicious straight-line distances, the time intervals and the great-circle distances.
Optionally, the performing low-frequency to high-frequency data interpolation processing on the vehicle operation data includes:
carrying out low-frequency to high-frequency data interpolation processing on vehicle operation data by using a differential integral method and a Bezier curve method;
specifically, a differential integration method is used for carrying out speed term interpolation processing on vehicle running data;
and carrying out GPS longitude and latitude item interpolation on the vehicle running data subjected to the speed item interpolation processing by using a Bezier curve method.
Optionally, the model training and iteration performed by the deep learning method and the mathematical statistics correlation principle includes:
performing training set and test set division on the processed historical data;
and performing LSTM modeling on the training set data by using LSTM model parameters, and evaluating the test set data prediction result by using LSTM model optimization and evaluation parameters so as to perform optimization iteration on the model parameters.
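The claim above specifies LSTM modeling but does not disclose an architecture or hyperparameters. As an illustrative, dependency-free sketch of the LSTM cell arithmetic that would encode one Ts window of engineered features, with gate order, hidden size and initialisation chosen here as assumptions rather than taken from the patent:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def matvec(M, v):
    """Plain matrix-vector product over nested lists."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. W: 4H x D, U: 4H x H, b: 4H.
    Assumed gate order: input, forget, output, candidate."""
    H = len(h)
    z = [wx + uh + bb for wx, uh, bb in zip(matvec(W, x), matvec(U, h), b)]
    i = [sigmoid(v) for v in z[0:H]]          # input gate
    f = [sigmoid(v) for v in z[H:2 * H]]      # forget gate
    o = [sigmoid(v) for v in z[2 * H:3 * H]]  # output gate
    g = [math.tanh(v) for v in z[3 * H:4 * H]]  # candidate cell state
    c_new = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
    h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
    return h_new, c_new

def encode_window(X, H=8, seed=0):
    """Run a T x D feature window through a randomly initialised LSTM and
    return the final hidden state (the vector a classifier head would score)."""
    rng = random.Random(seed)
    D = len(X[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(D)] for _ in range(4 * H)]
    U = [[rng.uniform(-0.1, 0.1) for _ in range(H)] for _ in range(4 * H)]
    b = [0.0] * (4 * H)
    h, c = [0.0] * H, [0.0] * H
    for x in X:
        h, c = lstm_step(x, h, c, W, U, b)
    return h
```

In practice the training, optimisation and evaluation loop the claim refers to would be built with a deep learning framework; this sketch only shows the recurrent computation over a window.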
Optionally, the correlation model includes: the system comprises a running track abnormity identification model, an emergency brake abnormity identification model, a turning abnormity identification model, a parking abnormity identification model and a brake distance abnormity identification model;
the obtaining of the recognition result according to the real-time vehicle operation data and the combination of the relevant models comprises the following steps:
dividing the real-time vehicle running data according to a time interval of Ts;
processing the divided Ts duration data in real time according to the steps 2, 3 and 4;
inputting the processed data into a running track abnormity identification model, a sudden braking abnormity identification model, a turning abnormity identification model, a parking abnormity identification model and a braking distance abnormity identification model to respectively obtain driving abnormity scores of the models;
and determining a recognition result according to the driving abnormity scores of the models.
Optionally, the determining the recognition result according to the driving abnormality score of each model includes:
solving the sum of the products of the abnormal driving scores of the models and the corresponding weights of the models, and taking the sum as a prediction score;
judging whether the prediction score is greater than an accident threshold value;
if the predicted score is larger than the accident threshold value, judging that the vehicle has a collision accident; otherwise, judging that the vehicle has no collision accident.
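The weighted-sum decision described above can be sketched as follows; the weight values and the 0.5 threshold in the usage example are illustrative assumptions, not values from the patent:

```python
def fuse_anomaly_scores(scores, weights, accident_threshold):
    """Fuse the per-model driving-anomaly scores into one prediction score
    (sum of score-weight products) and compare it with the accident threshold."""
    if len(scores) != len(weights):
        raise ValueError("one weight per model score is required")
    prediction = sum(s * w for s, w in zip(scores, weights))
    return prediction, prediction > accident_threshold

# Illustrative call: scores from the five models in the order
# (running track, sudden braking, turning, parking, braking distance).
score, collided = fuse_anomaly_scores(
    [0.9, 0.8, 0.1, 0.2, 0.7],    # per-model anomaly scores (hypothetical)
    [0.3, 0.25, 0.15, 0.1, 0.2],  # model weights (hypothetical)
    accident_threshold=0.5,
)
```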
The invention also provides a vehicle collision recognition system based on deep learning, which comprises:
the data acquisition module is used for acquiring real-time running data of the vehicle and acquiring historical running data and historical danger data of the vehicle;
the preprocessing module is used for preprocessing vehicle operation data;
the data interpolation processing module is used for performing low-frequency to high-frequency data interpolation processing on the vehicle operation data;
the conversion module is used for ETL feature transformation processing;
the modeling module is used for carrying out model training and iteration through a deep learning method and a mathematical statistics correlation principle to obtain a correlation model;
and the recognition module is used for obtaining a vehicle collision recognition result according to the processed vehicle real-time operation data and by combining the relevant models.
By adopting the above technical scheme, the deep-learning-based vehicle collision recognition method comprises the following steps. Step 1: acquiring real-time operation data of a vehicle, together with the vehicle's historical operation data and historical insurance-claim data. Step 2: preprocessing the vehicle operation data. Step 3: performing low-frequency to high-frequency data interpolation processing on the vehicle operation data. Step 4: performing ETL feature transformation processing. Step 5: performing model training and iteration through a deep learning method and mathematical-statistics principles to obtain the relevant models. Step 6: obtaining a vehicle collision recognition result from the real-time vehicle operation data combined with the relevant models. The recognition method adopts large-scale Internet of Vehicles data, which improves the accuracy of the judgment; factors such as the owner's driving behavior, gender and weather conditions are added during model building, making the judgment more accurate; the models identify collision accidents from five angles (abnormal running track, abnormal deceleration, braking-distance prediction, abnormal parking and abnormal turning), so the judgment is more comprehensive; whether a collision has occurred is judged from the time-series data and the pattern of its changes, replacing the traditional approach of judging from a single record at one moment, so the judgment is more reliable; and the method avoids the judgment difficulties that complex geographic factors cause for traditional sensor-based collision recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart provided by a first embodiment of a deep learning-based vehicle collision recognition method of the present invention;
FIG. 2 is a schematic flow chart of a second embodiment of the deep learning-based vehicle collision recognition method according to the present invention;
FIG. 3 is a schematic view of a vehicle traveling at a constant speed;
FIG. 4 is a schematic view of a vehicle traveling at a non-uniform speed;
FIG. 5 is a schematic diagram of a first order Bezier curve interpolation GPS recording point;
FIG. 6 is a schematic diagram of a third order Bezier curve interpolation GPS recording point;
FIG. 7 is a schematic structural diagram provided by an embodiment of the deep learning-based vehicle collision recognition system of the present invention.
In the figure: 1. a data acquisition module; 2. a preprocessing module; 3. a data interpolation processing module; 4. a conversion module; 5. a modeling module; 6. and identifying the module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
Fig. 1 is a schematic flow chart provided by a first embodiment of a deep learning-based vehicle collision recognition method according to the present invention.
As shown in fig. 1, a vehicle collision recognition method based on deep learning according to the embodiment includes:
step 1: acquiring real-time running data of a vehicle, and acquiring historical running data and historical danger data of the vehicle;
step 2: preprocessing vehicle operation data;
Step 3: performing low-frequency to high-frequency data interpolation processing on the vehicle operation data;
Step 4: performing ETL feature transformation processing;
Step 5: performing model training and iteration through a deep learning method and mathematical-statistics principles to obtain the relevant models;
step 6: and obtaining a vehicle collision recognition result according to the real-time vehicle operation data and by combining the relevant models.
In practical use, as shown in fig. 2, the vehicle collision recognition method described in this embodiment includes the following specific steps:
Step 1: real-time operation data of the vehicle are collected through factory-installed and aftermarket on-board devices, and the vehicle's historical insurance-claim data are obtained through 4S dealerships.
First, historical operation data of the vehicle are acquired at a frequency of one record per 5 seconds, including: timestamp, GPS speed, GPS direction, GPS altitude, device power status, device positioning status, vehicle movement status, GPS longitude and GPS latitude.
Secondly, historical claim information is obtained through third parties such as insurance companies, dealers and vehicle owners, including: time of the accident, location of the accident and collision position on the vehicle.
Finally, the claim information is matched with the historical operation data by claim time; time-series segments of duration Ts (which can be set to 120 s) around the claim moment are intercepted by a data-sliding method and labeled as claim data, while a certain amount of non-claim time-series data is also intercepted and labeled as non-claim data.
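The sliding-window interception and labeling step can be sketched as below. This is a minimal sketch under assumptions: records are (timestamp, features) tuples sorted by time, a window is labeled positive if a claim time falls inside it, and the 60 s stride is an illustrative choice the source does not specify:

```python
def label_windows(records, claim_times, window_s=120, stride_s=60):
    """Slide a window of window_s seconds over the records and label each
    window 1 (claim) if a claim timestamp falls inside it, else 0 (non-claim).

    records: list of (timestamp, features) tuples, sorted by timestamp.
    claim_times: list of claim (accident) timestamps.
    Returns a list of (window_records, label) pairs."""
    windows = []
    if not records:
        return windows
    t_end = records[-1][0]
    start = records[0][0]
    while start + window_s <= t_end:
        w = [r for r in records if start <= r[0] < start + window_s]
        label = int(any(start <= ct < start + window_s for ct in claim_times))
        windows.append((w, label))
        start += stride_s
    return windows
```

For example, with one record every 5 s over 10 minutes and a claim at t = 130 s, the window starting at t = 0 is non-claim while the window starting at t = 60 covers the claim moment.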
Step 2: pre-processing vehicle operation data, comprising: the method comprises the following specific processes of time sequence disorder processing, GPS positioning drift abnormal value processing and signal loss data processing:
1) First, time-series disorder processing: owing to defects of the data acquisition equipment, individual records cannot be collected and transmitted into the system in real time and are instead uploaded and appended after the existing data some time later, which can cause the model to misjudge a collision accident; the 120 s time-series data therefore need to be screened, as follows:
For historical data used for model training, a mode calculation is performed on the min records of the 120 s time-series data; each record is compared with the current mode and the timestamp difference is computed, and if the difference is greater than a preset time (e.g. 250 s) the record is regarded as a backfill of earlier data by the device and is deleted.
For time-series data computed online in real time by the model, the time difference between each record's timestamp and the current system time is calculated; if the difference is greater than the preset time (e.g. 250 s), the record is regarded as a system backfill operation and is deleted. Finally, the processed historical data and the real-time data are sorted by timestamp.
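For the online case, the screening described above reduces to dropping records whose timestamps lag the current system time by more than the preset 250 s and then sorting by timestamp. A minimal sketch, assuming each record is a dict with a `ts` field (the field name is an assumption):

```python
def drop_backfilled(records, now, max_lag_s=250):
    """Drop records whose timestamp lags the current system time by more
    than max_lag_s seconds (treated as backfilled/supplementary uploads),
    then return the survivors sorted by timestamp."""
    kept = [r for r in records if now - r["ts"] <= max_lag_s]
    return sorted(kept, key=lambda r: r["ts"])
```

A record stamped 700 s in the past would be discarded as a backfill, while recent out-of-order records are simply reordered.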
2) Secondly, GPS positioning-drift outlier processing: for reasons internal to the device, positioning drift can occur, so that the distance between two consecutive GPS points is abnormally amplified and the model result is misjudged. The drift judgment method is as follows:
(1) sequentially calculate the time interval Δ and the great-circle (earth straight-line) distance between records t and t+1;
(2) obtain the suspicious straight-line distances (distance > 5000 m);
(3) judge the suspicious distances from step (2) in turn. If there is only one suspicious distance and it is formed by the last two records t120 and t119 of the 120 s time-series data, and the speed calculated from it is greater than 40 m/s, then t120 is judged to be a positioning drift point and the record is deleted. If the suspicious distance is calculated from t1 and t2, judge the time interval Δ of that travel and the next distance interval distance_t23: if Δ < 120 s and distance_t23 < 1000 m, t1 is taken as the drift point; if Δ < 120 s and the next distance interval > 4000 m, t2 is taken as the drift point; if Δ > 120 s, distance_t23 > 4000 m and the time interval of the next distance < 120 s, t2 is taken as the drift point. If the suspicious distance is calculated neither from t1 and t2 nor from t120 and t119: if the time interval Δ of the travel < 120 s and the next distance interval > 4000 m, the second record of that distance is taken as the drift point; if Δ > 120 s, the next distance interval > 4000 m and the time interval of the next distance < 120 s, the second record of that distance is likewise taken as the drift point.
3) Finally, signal-loss data are processed: records that have no positioning signal and a speed of 0 are deleted.
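Steps (1) and (2) of the drift check, the pairwise distance computation and the 5000 m suspicion threshold, can be sketched as follows. The haversine great-circle formula is a standard choice for the "earth straight-line distance"; the patent does not say which distance formula it uses:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in metres between two GPS points (haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def suspicious_jumps(points, limit_m=5000.0):
    """points: list of (timestamp, lat, lon). Flag consecutive pairs whose
    great-circle distance exceeds limit_m as candidate drift segments.
    Returns (t_start, t_end, distance_m, time_interval_s) tuples."""
    flags = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(points, points[1:]):
        d = haversine_m(la1, lo1, la2, lo2)
        if d > limit_m:
            flags.append((t1, t2, d, t2 - t1))
    return flags
```

The flagged pairs would then be passed through the rule set of step (3) to decide which endpoint is the drift point.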
Step 3: low-frequency to high-frequency data interpolation is performed on the vehicle data using differential integration and the Bezier curve method.
Because the device acquisition frequency is one record per 5 s, while a collision happens within an instant far shorter than 5 s, the records need to be interpolated. The data frequency is raised as follows:
1) speed term interpolation: first, assuming that the vehicle is running at a constant speed between the records t and t +1 (as shown in fig. 3), the running distance in the time interval is calculated by using the integral principle, and the formula is as follows:
s = ∫ v dt ≈ ((v_t + v_{t+1}) / 2) · (t_{t+1} − t_t)
Secondly, the earth straight-line distance k between the two points is calculated from the corresponding GPS points. If s > k, the vehicle did not travel at a constant speed but decelerated first and then accelerated (as shown in fig. 4), so a record t3 is interpolated in the interval between t1 and t2.
Two interpolation principles are imposed:
First: the sum of the two trapezoid areas in fig. 4 equals s;
Second: the interpolation must not add anomalies to the original data, and the two segments should be as smooth as possible, so the sum of the magnitudes of the two segment accelerations, |a1| + |a2|, is required to be as small as possible.
The calculation proceeds as follows (the equation images in the original are not reproduced here): with the total interval fixed as t = t1 + t2, the interpolated time t3 and speed v3 are solved from the two constraints above, i.e. chosen so that the two trapezoid areas sum to s while y = |a1| + |a2| is minimized.
The interpolated record (t3, v3) between t1 and t2 is given by the above result; the other cases follow by analogy.
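A simplified sketch of the speed-term interpolation. It assumes the new record is placed at the midpoint of the interval (t1 = t2 = Δt/2) and solves only the area constraint, so that the two trapezoid areas sum to the target distance; the patent's additional minimisation of |a1| + |a2| over the placement is omitted here:

```python
def interpolate_midpoint_speed(v1, v2, dt, target_s):
    """Insert a record at the midpoint of a dt-second interval with speed v3
    chosen so that the two trapezoid areas
        (v1 + v3)/2 * dt/2  +  (v3 + v2)/2 * dt/2
    sum to target_s (e.g. the GPS chord distance k).
    Returns (t3_offset, v3)."""
    # Area constraint: dt/4 * (v1 + 2*v3 + v2) = target_s
    v3 = (4.0 * target_s / dt - v1 - v2) / 2.0
    t3 = dt / 2.0
    return t3, v3
```

For instance, with v1 = v2 = 20 m/s over dt = 5 s the plain trapezoid distance would be 100 m; if the target distance is only 80 m, the interpolated midpoint speed dips to 12 m/s, reflecting the decelerate-then-accelerate pattern.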
2) Interpolation of longitude and latitude items of the GPS:
according to the scheme, the Bezier curve is utilized to approximately interpolate the GPS record between two GPS positioning points, and the method is specific:
if the GPS recording points are on the same straight line in 120s time sequence data, interpolating the GPS recording points by adopting a first-order Bezier curve, wherein the calculation formula is as follows:
B(t)=(1-t)P0+tP1,t∈[0,1]
t is t1 in the velocity interpolation, as shown in FIG. 5.
If the GPS recording points are not on the same straight line in 120s time series data, a third-order Bezier curve is adopted to interpolate the GPS recording points, firstly, 4 GPS recording points G1(x1, y1), G2(x2, y2), G3(x3, y3) and G4(x4, y4) are assumed to exist,
Midpoint1_x=(x1+x2)/2, Midpoint1_y=(y1+y2)/2
Midpoint2_x=(x2+x3)/2, Midpoint2_y=(y2+y3)/2
Midpoint3_x=(x3+x4)/2, Midpoint3_y=(y3+y4)/2
d1 = √((x2 − x1)² + (y2 − y1)²)
d2 = √((x3 − x2)² + (y3 − y2)²)
d3 = √((x4 − x3)² + (y4 − y3)²)
K1=d1/(d1+d2)
K2=d2/(d2+d3)
split1=midpoint1+K1*(midpoint2-midpoint1)
split2=midpoint2+K2*(midpoint3-midpoint2)
Translate the segment between midpoint1 and midpoint2 until split1 coincides with G2; denote the coincident point as V1, and the translated positions of midpoint1 and midpoint2 as V2 and V3; then
p0=V2+K1*(V1-V2)
p1=V3+K1*(V1-V3)
Similarly, p2 and p3 can be calculated.
Note that G1 and p0 are the control points between G1 and G2, p1 and p2 are the control points between G2 and G3, and p3 and G4 are the control points between G3 and G4.
The interpolated GPS record between two points can then be calculated by third-order Bezier curve smoothing according to the following formula; for example, the interpolation between points A and D is:

M(t) = (1 − t)³·A + 3t(1 − t)²·B + 3t²(1 − t)·C + t³·D, t ∈ [0, 1]

where A and D are respectively the start and end points, B and C are the control points, and M(t) is the interpolated GPS recording point between A and D. t is taken as t1 from the speed interpolation, as shown in FIG. 6.
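The cubic Bezier evaluation described above can be sketched as follows; points are assumed to be (x, y) tuples such as (longitude, latitude), and the function name is illustrative.

```python
def bezier3(a, b, c, d, t):
    """Evaluate the third-order Bezier curve
    M(t) = (1-t)^3 A + 3t(1-t)^2 B + 3t^2(1-t) C + t^3 D
    at parameter t in [0, 1]. a, d are the start/end points;
    b, c are the control points. Points are (x, y) tuples."""
    s = 1.0 - t
    return tuple(
        s**3 * pa + 3*t*s**2 * pb + 3*t**2*s * pc + t**3 * pd
        for pa, pb, pc, pd in zip(a, b, c, d)
    )
```

Evaluating at t = 0 returns the start point A and at t = 1 the end point D, as the formula requires.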
Step 4: ETL engineering feature transformation
Feature transformation is performed on the high-frequency data interpolated in step 3; the features are listed in Table 1 and are specifically divided as follows:
TABLE 1 (the feature list of Table 1 is rendered as an image in the original publication and is not reproduced in this text)
And 5: model training and iteration are carried out through a deep learning method and a mathematical statistics correlation principle, wherein the model comprises the following steps: the system comprises a running track abnormity identification model, an emergency brake abnormity identification model, a turning abnormity identification model, a parking abnormity identification model and a brake distance abnormity identification model.
1) The training and iteration process of the sudden-braking abnormality model is as follows:
First, take 15 days of real-time operation data from 1000 vehicles and, for each record, compute the absolute value of the vehicle's negative acceleration at the current speed, establishing a two-dimensional speed vs. |negative acceleration| relation model;
Second, segment the speed into integer bins (for example, v in 1 m/s–2 m/s), and compute the 95% quantile (configurable according to actual needs) of the |negative acceleration| values in each speed bin as the deceleration-abnormality threshold for that bin;
Finally, a relative abnormality score p_i for the single deceleration behavior is calculated. p_i is computed as:

p_i = a_i / threshold

When a_i > threshold, the score is recorded as 100; when a_i ≤ threshold, the score is recorded as p_i × 100.
It should be noted that the 15-day real-time operation data of 1000 vehicles is taken as a sample; the number of vehicles and the number of days can be set according to actual needs. This is only an example and does not limit the protection scope.
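The binning-and-quantile procedure above can be sketched as follows; the nearest-rank quantile and the function names are illustrative choices, not taken from the patent.

```python
import math

def build_thresholds(records, q=0.95):
    """Build per-speed-bin deceleration thresholds from (speed, decel)
    pairs, where speed is in m/s and decel = |negative acceleration|.
    Bins are 1 m/s wide. Returns {bin_index: q-quantile of decel}.
    Sketch: nearest-rank quantile over the sorted bin values."""
    bins = {}
    for speed, decel in records:
        bins.setdefault(int(speed), []).append(decel)
    thresholds = {}
    for b, vals in bins.items():
        vals.sort()
        k = max(0, math.ceil(q * len(vals)) - 1)   # nearest-rank index
        thresholds[b] = vals[k]
    return thresholds

def brake_score(speed, decel, thresholds):
    """Abnormality score of one deceleration event, per the rule in the
    text: 100 if decel exceeds the bin threshold, else (decel/threshold)*100."""
    thr = thresholds[int(speed)]
    return 100.0 if decel > thr else decel / thr * 100.0
```

A deceleration right at the bin threshold scores 100, and milder decelerations score proportionally lower, matching the p_i × 100 rule.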
2) The training and iteration principle of the turning abnormality recognition model is the same as that of the sudden-braking abnormality model.
3) The training and iteration process of the parking abnormality recognition model is as follows:
Parking abnormality recognition mainly targets abnormal parking on expressways. If the 120 s time-series data contains records with speed greater than 110 km/h as well as records with speed less than 3.6 km/h (treated as parking records), and at the same time there are high-speed records within the 10 s before the vehicle stops and the vehicle does not start again within the 120 s time series after stopping, the behavior is judged as a parking abnormality and the abnormality score is recorded as 100.
4) The training and iteration process of the braking distance anomaly model comprises the following steps:
At the same speed, the minimum braking distance of the vehicle is calculated according to the formula

d = 0.75·v + 0.0256·v²
The difference s between the actual deceleration distance and the minimum braking distance is calculated, and the braking-distance abnormality score is then computed from s (the score formula is rendered as an image in the original publication and is not reproduced in this text).
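The minimum-distance formula and the gap s can be sketched as follows. The text does not state the units of v and d, so this sketch assumes v in km/h and d in metres (a common convention for this reaction-plus-braking form); the score formula itself is an image in the original publication and is not reproduced here.

```python
def min_braking_distance(v_kmh):
    """Minimum braking distance per the empirical formula in the text:
    d = 0.75*v + 0.0256*v**2. Units assumed: v in km/h, d in metres."""
    return 0.75 * v_kmh + 0.0256 * v_kmh ** 2

def braking_gap(actual_distance, v_kmh):
    """Difference s between the actual deceleration distance and the
    minimum braking distance; the abnormality score is derived from s
    (its exact formula is an image in the original publication)."""
    return actual_distance - min_braking_distance(v_kmh)
```

At 100 km/h the minimum braking distance evaluates to 331 under these assumed units.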
5) The training and iteration process of the running-track abnormality recognition model is as follows:
Deep-learning LSTM training is performed on the model data obtained from the ETL engineering feature transformation in step 4; the LSTM (Long Short-Term Memory) principle is prior art and is not described here again.
Specifically, the steps of training the running-track abnormality recognition model and making predictions with it are as follows:
First, the historical data processed in step 4 is split into a training set and a test set.
Second, an LSTM model is built on the training-set data using the LSTM model parameters, and the prediction results on the test-set data are evaluated using the LSTM optimization and evaluation parameters, so that the model parameters are iteratively optimized.
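The patent does not publish its LSTM architecture or hyperparameters, so the following is only a minimal sketch of such a sequence classifier for 120 s windows of ETL-transformed features. The layer sizes, the feature count, and the binary insured/non-insured target are all assumptions.

```python
import torch
import torch.nn as nn

class TrackAnomalyLSTM(nn.Module):
    """Minimal sketch: LSTM over a (batch, time, features) window,
    last hidden state mapped to a single anomaly score in (0, 1).
    Sizes are illustrative, not from the patent."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # score per window

model = TrackAnomalyLSTM(n_features=8)
scores = model(torch.randn(4, 120, 8))    # 4 windows of 120 time steps
```

Training would then fit this model on the insured/non-insured labels from step 1 with a binary cross-entropy loss, iterating on the parameters as described above.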
Step 6: The models are put into use. Vehicle operation data is collected in real time, processed according to steps 1, 2, 3 and 4, and input into the models of step 5.
Specifically, once the models are in use, real-time vehicle data is processed according to steps 1, 2, 3 and 4 in 120 s intervals; the processed data is then input into the 5 models of step 5 to obtain each model's abnormal-driving score; the scores are weighted and summed with certain weights to obtain a predicted score; and finally, whether the vehicle has had a collision accident is judged by comparing the predicted score with an accident threshold. If the predicted score is greater than the accident threshold, the vehicle is judged to have had a collision accident; otherwise, it is judged not to have had one.
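The weighted fusion and threshold decision of step 6 can be sketched as follows; the weight values and the accident threshold are deployment choices the patent leaves unspecified, so those used here are placeholders.

```python
def collision_decision(model_scores, weights, accident_threshold):
    """Weighted fusion of the five per-model abnormality scores, as
    described in step 6. Returns True if a collision is judged to have
    occurred (predicted score exceeds the accident threshold)."""
    predicted = sum(s * w for s, w in zip(model_scores, weights))
    return predicted > accident_threshold

# illustrative call: five model scores, equal placeholder weights
crashed = collision_decision([100, 80, 0, 100, 60], [0.2] * 5, 60)
```

With these placeholder values the predicted score is 68, above the threshold of 60, so a collision would be judged.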
According to this recognition method, massive Internet-of-Vehicles data is used, improving the accuracy of the judgment result; factors such as the owner's driving-behavior factors, gender and weather conditions are incorporated during model building, making the judgment more accurate; the models recognize collision accidents from 5 angles (abnormal running track, abnormal deceleration, braking-distance prediction, abnormal parking and abnormal turning), making the judgment more comprehensive; the models judge whether a collision has occurred from the presentation of time-series data and the data's pattern of change, replacing the traditional approach of judging from a single data record at one moment, making the judgment more reliable; and the method avoids the judgment problems that complex geographic factors cause for conventional collision-sensing devices.
FIG. 7 is a schematic structural diagram provided by an embodiment of the deep learning-based vehicle collision recognition system of the present invention.
As shown in fig. 7, a deep learning based vehicle collision recognition system according to the present embodiment includes:
the data acquisition module 1 is used for acquiring real-time running data of the vehicle and acquiring historical running data and historical danger data of the vehicle;
the preprocessing module 2 is used for preprocessing vehicle operation data;
the data interpolation processing module 3 is used for performing low-frequency to high-frequency data interpolation processing on the vehicle operation data;
the conversion module 4 is used for ETL engineering feature transformation processing;
the modeling module 5 is used for carrying out model training and iteration through a deep learning method and a mathematical statistics correlation principle to obtain a correlation model;
and the recognition module 6 is used for obtaining a vehicle collision recognition result according to the processed vehicle real-time operation data and by combining the relevant models.
The working principle of the vehicle collision recognition system based on deep learning in this embodiment is the same as that of the vehicle collision recognition method based on deep learning in any one of the above embodiments, and is not described herein again.
The deep-learning-based vehicle collision recognition system provided by the invention uses massive Internet-of-Vehicles data, models and recognizes collision accidents from 5 angles (abnormal running track, abnormal deceleration, braking-distance prediction, abnormal parking and abnormal turning), avoids the judgment problems caused by complex geographic factors, and offers wide coverage of collision scenarios and high detection precision.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A vehicle collision recognition method based on deep learning is characterized by comprising the following steps:
step 1: acquiring real-time running data of a vehicle, and acquiring historical running data and historical danger data of the vehicle;
step 2: preprocessing vehicle operation data;
step 3: performing low-frequency to high-frequency data interpolation processing on the vehicle operation data;
step 4: ETL engineering feature transformation processing;
step 5: performing model training and iteration through a deep learning method and mathematical-statistics principles to obtain the relevant models;
step 6: and obtaining a vehicle collision recognition result according to the real-time vehicle operation data and by combining the relevant models.
2. The vehicle collision recognition method according to claim 1, wherein the acquiring of vehicle historical operation data and vehicle historical danger data comprises:
obtaining historical vehicle operating data, the data comprising: a timestamp, a GPS speed, a GPS direction, a GPS altitude, a device power status, a device location status, a vehicle movement status, a GPS longitude, and a GPS latitude;
acquiring historical insurance information of the vehicle through a third party, wherein the insurance information comprises: the time of the accident, the place of the accident and the collision position of the vehicle;
matching the insurance information with the historical vehicle operation data by insurance time, intercepting time-series data of duration Ts around the insurance moment by a data sliding method and labeling it as insured, and simultaneously intercepting a certain amount of non-insurance time-series data and labeling it as non-insured.
3. The vehicle collision recognition method according to claim 2, wherein the preprocessing of the vehicle operation data includes:
performing time-series disorder processing on the vehicle operation data;
performing GPS positioning-drift outlier processing on the data after the time-series disorder processing;
performing signal-loss data processing on the data after the GPS positioning-drift outlier processing, deleting data that has no positioning signal and a speed of 0.
4. The vehicle collision recognition method according to claim 3, wherein the performing of time-series disorder processing on the vehicle operation data comprises:
for historical data used for model training, performing mode calculation on the min records of the Ts time-series data, comparing each record with the current mode, calculating the timestamp difference, and, if the difference is greater than a preset time, regarding the record as a back-fill of previous data and deleting it;
for time-series data computed online in real time, calculating the difference between each record's timestamp and the current system time, and, if the difference is greater than a preset time, regarding the record as a system back-fill operation and deleting it;
and sorting the processed historical data or real-time data by timestamp.
5. The vehicle collision recognition method according to claim 3, wherein the performing of the GPS positioning drift outlier processing on the time-series disorder processed data includes:
sequentially calculating the time interval and the earth straight-line distance between records t and t + 1;
obtaining a suspicious straight line distance;
and positioning the drift point according to the suspicious linear distance, the time interval and the earth linear distance.
6. The vehicle collision recognition method according to claim 1, wherein the low-frequency to high-frequency data interpolation processing of the vehicle operation data includes:
carrying out low-frequency to high-frequency data interpolation processing on vehicle operation data by using a differential integral method and a Bezier curve method;
specifically, a differential integration method is used for carrying out speed term interpolation processing on vehicle running data;
and carrying out GPS longitude and latitude item interpolation on the vehicle running data subjected to the speed item interpolation processing by using a Bezier curve method.
7. The vehicle collision recognition method according to claim 1, wherein the model training and iteration through the deep learning method and the mathematical statistics correlation principle comprises:
performing training set and test set division on the processed historical data;
and performing LSTM modeling on the training set data by using LSTM model parameters, and evaluating the test set data prediction result by using LSTM model optimization and evaluation parameters so as to perform optimization iteration on the model parameters.
8. The vehicle collision recognition method according to any one of claims 2 to 7,
the correlation model includes: the system comprises a running track abnormity identification model, an emergency brake abnormity identification model, a turning abnormity identification model, a parking abnormity identification model and a brake distance abnormity identification model;
the obtaining of the recognition result according to the real-time vehicle operation data and the combination of the relevant models comprises the following steps:
dividing the real-time vehicle running data according to a time interval of Ts;
processing the divided Ts duration data in real time according to the steps 2, 3 and 4;
inputting the processed data into a running track abnormity identification model, a sudden braking abnormity identification model, a turning abnormity identification model, a parking abnormity identification model and a braking distance abnormity identification model to respectively obtain driving abnormity scores of the models;
and determining a recognition result according to the driving abnormity scores of the models.
9. The vehicle collision recognition method according to claim 8, wherein the determining a recognition result from the driving abnormality scores of the models includes:
solving the sum of the products of the abnormal driving scores of the models and the corresponding weights of the models, and taking the sum as a prediction score;
judging whether the prediction score is greater than an accident threshold value;
if the predicted score is larger than the accident threshold value, judging that the vehicle has a collision accident; otherwise, judging that the vehicle has no collision accident.
10. A deep learning based vehicle collision recognition system, comprising:
the data acquisition module is used for acquiring real-time running data of the vehicle and acquiring historical running data and historical danger data of the vehicle;
the preprocessing module is used for preprocessing vehicle operation data;
the data interpolation processing module is used for performing low-frequency to high-frequency data interpolation processing on the vehicle operation data;
the conversion module is used for ETL engineering feature transformation processing;
the modeling module is used for carrying out model training and iteration through a deep learning method and a mathematical statistics correlation principle to obtain a correlation model;
and the recognition module is used for obtaining a vehicle collision recognition result according to the processed vehicle real-time operation data and by combining the relevant models.
CN202110061524.1A 2021-01-18 2021-01-18 Vehicle collision recognition method and system based on deep learning Active CN112749210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110061524.1A CN112749210B (en) 2021-01-18 2021-01-18 Vehicle collision recognition method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN112749210A true CN112749210A (en) 2021-05-04
CN112749210B CN112749210B (en) 2024-03-12

Family

ID=75652302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110061524.1A Active CN112749210B (en) 2021-01-18 2021-01-18 Vehicle collision recognition method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112749210B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114054736A (en) * 2021-10-12 2022-02-18 中国重型机械研究院股份公司 Buggy ladle parking system and method
CN114291081A (en) * 2021-12-31 2022-04-08 成都路行通信息技术有限公司 Vehicle collision detection method based on artificial intelligence algorithm
CN116189114A (en) * 2023-04-21 2023-05-30 西华大学 Method and device for identifying collision trace of vehicle
CN116343484A (en) * 2023-05-12 2023-06-27 天津所托瑞安汽车科技有限公司 Traffic accident identification method, terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104882025A (en) * 2015-05-13 2015-09-02 东华大学 Crashing detecting and warning method based on vehicle network technology
CN109344903A (en) * 2018-10-12 2019-02-15 苏交科华东(浙江)工程设计有限公司 Urban road surfaces failure real-time detection method based on vehicle-mounted perception data
US20190287395A1 (en) * 2018-03-19 2019-09-19 Derq Inc. Early warning and collision avoidance
CN110599353A (en) * 2018-06-13 2019-12-20 百度在线网络技术(北京)有限公司 Vehicle insurance and claims rate prediction method, device, equipment and medium
CN111079590A (en) * 2019-12-04 2020-04-28 东北大学 Peripheral vehicle behavior pre-judging method of unmanned vehicle

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114054736A (en) * 2021-10-12 2022-02-18 中国重型机械研究院股份公司 Buggy ladle parking system and method
CN114054736B (en) * 2021-10-12 2022-10-18 中国重型机械研究院股份公司 Buggy ladle parking system and method
CN114291081A (en) * 2021-12-31 2022-04-08 成都路行通信息技术有限公司 Vehicle collision detection method based on artificial intelligence algorithm
CN114291081B (en) * 2021-12-31 2023-07-04 成都路行通信息技术有限公司 Vehicle collision detection method based on artificial intelligence algorithm
CN116189114A (en) * 2023-04-21 2023-05-30 西华大学 Method and device for identifying collision trace of vehicle
CN116343484A (en) * 2023-05-12 2023-06-27 天津所托瑞安汽车科技有限公司 Traffic accident identification method, terminal and storage medium
CN116343484B (en) * 2023-05-12 2023-10-03 天津所托瑞安汽车科技有限公司 Traffic accident identification method, terminal and storage medium

Also Published As

Publication number Publication date
CN112749210B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN112749210B (en) Vehicle collision recognition method and system based on deep learning
Saunier et al. Probabilistic framework for automated analysis of exposure to road collisions
Zaki et al. Use of drivers’ jerk profiles in computer vision–based traffic safety evaluations
JP4865711B2 (en) Operation monitoring system and method
CN111311914B (en) Vehicle driving accident monitoring method and device and vehicle
CN106339692B (en) A kind of fatigue driving state information determines method and system
US8687063B2 (en) Method for predicting lane line and lane departure warning system using the same
JP2016146162A (en) Driving determination device, driving determination program, calculation system, detection device, detection system, and detection method and program
CN105388021A (en) ADAS virtual development and test system
JP7149098B2 (en) Predictive diagnosis device and method
CN113936465B (en) Traffic event detection method and device
CN110781578B (en) Intelligent network coupling algorithm testing and evaluating method based on accident scene
JP2019519041A (en) Method, apparatus and system for detecting reverse running driver
Wei et al. Trajectory-based identification of critical instantaneous decision events at mixed-flow signalized intersections
CN115588310A (en) Vehicle collision risk prediction method based on trajectory data
CN116135640A (en) Anti-collision early warning method and system for vehicle and vehicle
CN111310696B (en) Parking accident identification method and device based on analysis of abnormal parking behaviors and vehicle
CN115527364B (en) Traffic accident tracing method and system based on radar data fusion
CN113511204A (en) Vehicle lane changing behavior identification method and related equipment
Singh et al. Driver behaviour modelling of vehicles at signalized intersection with heterogeneous traffic
CN117392855B (en) Vehicle overrun overload probability identification method and system based on satellite positioning data
CN116665188B (en) Bus image system data analysis method
CN112991769A (en) Traffic volume investigation method and device based on video
Scora et al. Real-time roadway emissions estimation using visual traffic measurements
CN205264045U (en) Vehicle management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant