CN116363769A - Vehicle collision monitoring method and system - Google Patents


Info

Publication number
CN116363769A
CN116363769A (application CN202310334623.1A)
Authority
CN
China
Prior art keywords
collision
data
vehicle
segment
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310334623.1A
Other languages
Chinese (zh)
Inventor
唐溢
叶清明
Current Assignee
Chengdu Luxingtong Information Technology Co ltd
Original Assignee
Chengdu Luxingtong Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Luxingtong Information Technology Co ltd
Priority to CN202310334623.1A
Publication of CN116363769A
Legal status: Pending

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00: Registering or indicating the working of vehicles
    • G07C5/008: Registering or indicating the working of vehicles communicating information to a remotely located station
    • G07C5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0808: Diagnosing performance data
    • G07C5/0841: Registering performance data
    • G07C5/085: Registering performance data using electronic data carriers
    • G07C5/0866: Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vehicle collision monitoring method and system. Driving data of a vehicle are collected by an intelligent on-board sensor and uploaded to a cloud server; the collected driving data are decoded, parsed and processed by a big-data real-time processing system; the input multivariate time series is then segmented and clustered, dividing the long time segment into several sub-segments that are classified into fixed scenes; an isolation forest model is constructed, feature vectors are extracted from the sub-segments, and the extracted feature distributions are fed into the isolation forest model to obtain an anomaly score vector; the collision confidence of each sub-segment is computed by fuzzy comprehensive evaluation, yielding the collision confidence of the whole long time series; finally, collision clues are pushed through a collision decision rule, and the corresponding videos are retrieved and sent to customer service for secondary verification. The accuracy and response speed of accident pushing are thereby guaranteed.

Description

Vehicle collision monitoring method and system
Technical Field
The invention relates to the technical field of vehicle collision monitoring, in particular to a vehicle collision monitoring method and system based on multisource sensor data anomaly detection.
Background
While strengthening traffic regulation and driver supervision, it is also necessary to establish a complete real-time vehicle collision monitoring system so that accidents can be rescued quickly and traffic can be cleared. Introducing advanced digital technologies such as big data, cloud computing, artificial intelligence and the Internet of Things allows urban intelligent traffic control to be put into practice to the greatest extent.
When a vehicle collides, accelerations and angular velocities in the x, y and z directions are produced on the vehicle body, while the vehicle speed, braking force and accelerator force undergo abrupt changes.
At present, time-series data before and after a vehicle collision are generally sampled by multiple sensors, after which a collision detection system processes the data and performs collision analysis. Few current systems can collect, upload and process multi-source time-series data in real time and detect vehicle collision accidents efficiently, and the following problems exist:
1. The data acquisition dimension is narrow: most collision detection methods rely only on one- or two-dimensional acceleration sensor data, so detection accuracy is poor, and additional vehicle state information cannot be fused to improve it.
2. Because vehicle driving data are highly complex and voluminous, most anomaly detection models are computationally heavy and consume large amounts of compute and memory, failing the system's real-time requirements.
3. Collision scenes and degrees of body damage differ greatly, and their signatures in the time-series data vary enormously; existing methods struggle to recall minor accidents such as slight collisions, scrapes and tire bursts.
4. Existing collision detection systems cannot accurately verify detected accident clues; most verify with drivers via telephone or in-app interaction, but only a small fraction of accidents can be verified this way, so high precision and recall of pushed collision accidents cannot be guaranteed.
Disclosure of Invention
In order to solve the above problems, the invention provides a vehicle collision monitoring method, whose technical scheme is as follows:
S1: an intelligent on-board sensor collects driving data of the vehicle and uploads the collected data to a cloud server;
S2: the collected driving data are decoded, parsed and processed by a big-data real-time processing system and then input into a trained multivariate time-series segmentation clustering model;
S3: the input multivariate time series is segmented and clustered, dividing the long time segment into several sub-segments and classifying the sub-segments into fixed scenes;
S4: an isolation forest model is constructed, feature vectors are extracted from the sub-segments, and the extracted feature distributions are input into the isolation forest model of the corresponding scene and feature to obtain an anomaly score vector;
S5: the collision confidence of each sub-segment is computed from its anomaly score vector by fuzzy comprehensive evaluation, and the maximum collision confidence among the sub-segments is taken as the collision confidence of the whole long time series;
S6: the collision confidence of the whole long time series, the scene classification data, the isolation forest model output and the feature vectors of the abnormal sub-segments are input into a collision decision rule engine, which finally decides whether to push the clue.
Further, the segmentation clustering model is expressed as follows:

$$p_i(x)=\frac{1}{(2\pi)^{nw/2}\,\lvert\theta_i\rvert^{1/2}}\exp\!\Big(-\frac{1}{2}(x-\mu_i)^{\mathsf T}\theta_i^{-1}(x-\mu_i)\Big)$$

where θ_i is the covariance matrix of class i, μ_i is its mean vector, x is the input signal segment, and p_i(x) is the probability that the signal segment belongs to the Gaussian distribution with mean vector μ_i and covariance matrix θ_i.
Further, the specific process of obtaining the anomaly score vector is as follows:
S401: model each feature of each scene independently and train it;
S402: the feature data of each sub-segment are anomaly-scored by the trained isolation forest model, which outputs an anomaly score value:

$$s(x,n)=2^{-\frac{E(h(x))}{c(n)}}$$

where E(h(x)) is the average path length of input sample x over the t isolation trees (iTree), and c(n) is the average path length of a binary search tree (BST) built from n samples.
Further, the isolation forest model is trained as follows:
M1: randomly select ψ sample points from the training data as a sample subset and place them in the root node of a tree;
M2: randomly designate a dimension (i.e. a feature) and randomly generate a cut point p within the current node's data, between the maximum and minimum of the designated dimension in that data;
M3: generate a hyperplane at the cut point, dividing the current node's data space into 2 subspaces; data smaller than p in the designated dimension are placed in the left child node of the current node, and data greater than or equal to p in the right child node;
M4: recursively apply steps M2 and M3 in the child nodes, constructing new child nodes until a node contains only one datum and cannot be cut further, or the node reaches the height limit;
M5: execute M1 to M4 cyclically until t isolation trees (iTree) are generated.
Further, the collision confidence of the long time series is obtained as follows:
S501: based on expert experience, assign the weights of the different features at each anomaly grade and construct a fuzzy relation matrix;
S502: normalize the anomaly score of each feature of a sub-segment and dot-multiply it with the fuzzy relation matrix to obtain the segment's feature scores at each anomaly grade;
S503: multiply the scores of the different anomaly grades by grade weights and sum them to obtain the collision confidence of the segment;
S504: take the largest collision confidence among those of the sub-segments partitioned from the long time series as the collision confidence of the whole long time series.
Further, the method also comprises:
S7: match the pushed collision clue to the video of the corresponding moment and send it to customer service staff for secondary verification, determine whether the owner needs rescue after the collision, and transmit the collision accident information to a third party.
The invention also discloses a vehicle collision monitoring system comprising a data acquisition module, a big-data real-time processing unit, a collision detection module and a collision decision module.
The data acquisition module collects the vehicle's driving data through sensors and transmits them to the big-data real-time processing unit.
The big-data real-time processing unit decodes, parses and processes the data and passes the processed data to the collision detection module.
The collision detection module comprises a segmentation clustering module and an anomaly detection module, which respectively hold a multivariate time-series segmentation clustering algorithm and a fuzzy isolation forest anomaly detection algorithm; it computes the collision confidence of the long time series and passes the collision confidence, the scene classification data, the isolation forest model output and the feature vectors of the abnormal sub-segments to the collision decision module.
The collision decision module decides, from the received data and a set of rules, whether to push the clue: the clue is pushed if the rules are satisfied, otherwise the process ends.
Furthermore, the data acquisition module also collects video data through video equipment on the vehicle, screens out abnormal video data algorithmically, and transmits them to the cloud.
Further, the abnormal video data are screened as follows:
based on the data collected by the acceleration sensor, the first-order difference values of the x, y and z axes are computed and summed, and the sum is compared against a preset threshold;
if the sum exceeds the threshold, the video data within a set period before and after the current moment are uploaded as abnormal video data; otherwise they are not uploaded.
Further, the system also comprises a collision-clue video matching module, which receives the collision clue pushed by the collision decision module and the cloud video data, retrieves and matches the corresponding video data based on the pushed clue, and sends the retrieved video to a terminal for manual judgment; if a collision is confirmed the video data are pushed to a third party, otherwise the process ends.
The beneficial effects of the invention are as follows:
1. based on the multi-element time sequence data acquired by the sensor, the long-section time sequence data is segmented into different small fragments through a segmentation clustering model, and then the small fragments are clustered into different interpretable fragment scenes, so that the accuracy of sensing the state information of the vehicle body is improved, an unsupervised generation model is adopted, covariance matrix parameters are mainly saved, the main calculated amount of the model is set in the training process, the calculation and memory resources occupied during use are reduced, and the real-time requirement of the system is ensured.
2. The method comprises the steps of respectively constructing trees for different features under different scenes to form an isolated forest, judging scene abnormality degree of detection results of each dimension feature from multi-dimension, and finally performing fuzzy operation with a fuzzy matrix to obtain a final evaluation result, so that the problem that abnormality degrees of different scene fragment data for each attribute are different is effectively solved, and the perceptibility of minor accidents such as slight collision, scraping, tire burst and the like is greatly improved.
Drawings
FIG. 1 is a schematic flow chart of a collision detection method of the present invention;
FIG. 2 is a schematic diagram of a training process of a multivariate time series data segmentation clustering model of the present invention;
fig. 3 is a schematic flow chart of the collision detection system of the present invention.
Detailed Description
In the following description the technical solutions of the embodiments are described clearly and completely; the described embodiments are obviously only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
In the description of the embodiments it should be noted that any indicated orientation or positional relationship is based on the orientation shown in the drawings, or on the orientation in which the product is conventionally used as understood by those skilled in the art, and serves only to simplify the description; it does not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and must therefore not be construed as limiting the invention. The terms "first", "second" and the like are used merely to distinguish descriptions and are not to be understood as indicating or implying relative importance.
It should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed", "connected" and "coupled" are to be construed broadly: for example, fixedly connected, detachably connected or integrally connected; directly connected or indirectly connected through an intermediate medium. The specific meaning of these terms in the invention will be understood by those of ordinary skill in the art on a case-by-case basis.
Example 1
The embodiment 1 of the invention discloses a vehicle collision monitoring method, as shown in fig. 1, comprising the following specific steps:
S1: an intelligent on-board sensor collects driving data of the vehicle and uploads the collected data to a cloud server;
S2: the collected driving data are decoded, parsed and processed by a big-data real-time processing system and then input into a trained multivariate time-series segmentation clustering model;
S3: the input multivariate time series is segmented and clustered, dividing the long time segment into several sub-segments and classifying the sub-segments into fixed scenes;
During driving, the on-board sensors generate large amounts of time-series data. A vehicle usually carries several sensors, each detecting different quantities, so the collected data are multivariate; combining the sensors yields multiple observations at every moment, and the richer the observations, the more accurately the model perceives the vehicle-body state. By decomposing the long time series into a sequence of states, each state can be defined as a simple scene such as hard braking, acceleration, cornering or jolting.
Through steps S1 to S3 the long time-series data are divided into small fragments and the fragments are clustered into interpretable fragment scenes, so that repeated scenes can be found, the driving state of the vehicle understood and anomalies detected, allowing large-scale, high-dimensional on-board sensor data to be interpreted better.
As shown in fig. 2, the segmentation clustering model of this embodiment is constructed and trained as follows.
The model expression is:

$$p_i(x)=\frac{1}{(2\pi)^{nw/2}\,\lvert\theta_i\rvert^{1/2}}\exp\!\Big(-\frac{1}{2}(x-\mu_i)^{\mathsf T}\theta_i^{-1}(x-\mu_i)\Big)$$

where θ_i is the covariance matrix of class i, μ_i is its mean vector, x is the input signal segment, and p_i(x) is the probability that the signal segment belongs to the Gaussian distribution with mean vector μ_i and covariance matrix θ_i.
Finally the probabilities that x belongs to each of the K Gaussian distributions are computed, and the class with the largest probability is taken as the clustering result. The goal is therefore to solve for the mean vectors μ and covariance matrices θ of the K Gaussian distributions.
Assume an original signal of length T:

X = [x_1, x_2, ..., x_T] ∈ R^{n×T}

where the signal x_i at each moment is an n-dimensional vector. The T observations need to be clustered into K classes while keeping neighbouring data in the same class as far as possible.
Define a time window w, with w << T, and splice the w adjacent timestamps ending at x_i into one vector, which serves as the minimum data granularity of the clustering algorithm:

X_i = [x_{i-w+1}, ..., x_i]^T
In this embodiment the minimum data granularity of the segmentation clustering model is thus not a single timestamp x_i; instead each timestamp is considered within the time context of its window. During driving, for example, a single observation may show the car's state at the current instant, but a short window, even one lasting only a fraction of a second, gives a much more complete picture of the driving state.
The data matrix of the whole clustering input is:

X_input = [X_w, X_{w+1}, ..., X_T] ∈ R^{nw×(T-w+1)}
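A minimal NumPy sketch of this window-stacking construction (the array layout and function name are our own illustrative assumptions, not the patent's):

```python
import numpy as np

def build_input_matrix(X, w):
    """Stack each timestamp with its w-1 predecessors.

    X: (n, T) array, column i holding the n-dimensional observation x_i.
    Returns the (nw, T-w+1) clustering input whose column for time i is
    the concatenation [x_{i-w+1}, ..., x_i].
    """
    n, T = X.shape
    cols = [X[:, i - w + 1:i + 1].T.reshape(-1) for i in range(w - 1, T)]
    return np.stack(cols, axis=1)
```

Each column then carries one short temporal context, the minimum data granularity described above.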
Suppose all signal segments are divided into K classes, and let p_j denote the index set of the signal segments belonging to class j, j = 1, 2, ..., K; each class of signal segments obeys a zero-mean Gaussian distribution with inverse covariance matrix θ_j, j = 1, 2, ..., K.
θ_j is an nw × nw matrix consisting of a w × w grid of sub-matrices; each sub-matrix represents correlations between the sensors. In other words θ_j is a block Toeplitz matrix:

$$\theta_j=\begin{bmatrix}A^{(0)}&(A^{(1)})^{\mathsf T}&\cdots&(A^{(w-1)})^{\mathsf T}\\A^{(1)}&A^{(0)}&\cdots&(A^{(w-2)})^{\mathsf T}\\\vdots&\vdots&\ddots&\vdots\\A^{(w-1)}&A^{(w-2)}&\cdots&A^{(0)}\end{bmatrix}$$

where A^{(0)}, A^{(1)}, ..., A^{(w-1)} ∈ R^{n×n}; the element in row i, column j of A^{(0)} represents the dependency between sensor i and sensor j at the current moment.
Model training proceeds as follows. First a Gaussian mixture model (GMM) is initialized to obtain the mean vectors μ and covariance matrices θ of the K Gaussian distributions; the iteration count is set to 1000 and the objective-value threshold to 0.0002, and iteration stops when the maximum iteration count is reached or the objective value falls below 0.0002.
The BP algorithm and the ADMM algorithm are then used alternately to iteratively solve for the signal-segment assignments p_i and the per-class inverse covariance matrices θ_i until the objective value reaches its optimum:
$$\min_{\theta,\,p}\ \sum_{i=1}^{K}\Big[\lVert\lambda\circ\theta_i\rVert_1+\sum_{X_t\in p_i}\big(-\ell\ell(X_t,\theta_i)+\beta\,\mathbb{1}\{X_{t-1}\notin p_i\}\big)\Big]$$

where θ_i is the Gaussian inverse covariance of the i-th of the K cluster classes and p_i is the set of signal-segment indices belonging to the i-th class.
In the above formula each θ_i should be kept as sparse as possible, i.e. contain as many zeros as possible with only a small fraction of non-zero elements, which improves interpretability; the first term, the L1 norm ‖λ∘θ_i‖_1, is therefore minimized. At the same time, to keep adjacent time vectors X_i in the same class as far as possible, the objective carries the last term, the temporal-consistency penalty β·1{X_{t-1} ∉ p_i}, which is likewise minimized. Finally, maximum-likelihood estimation requires the middle term, the negative log-likelihood -ℓℓ(X_t, θ_i), to be minimized.
After training, the model stores the covariance matrices θ of the K Gaussian distributions, the dimension n of the multivariate time series, and the mean μ of the constructed window data for each class (a vector of length nw). When the trained model is used, these parameters are read directly and the class of maximum probability is computed for each signal segment. The class probability of each signal point is inferred using consecutive signal segments while taking the continuity of the signal points into account, so a penalty is applied when connected signal segments receive different cluster labels. Finally every signal point x_t of the time series obtains a class c = 1, 2, ..., K; consecutive signal points with the same class are merged into the same sub-segment, completing the segmentation and clustering of the long time series into sub-segments.
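The final merging step, grouping consecutive points that share a cluster label into sub-segments, can be sketched as follows (the function name is our own):

```python
def labels_to_segments(labels):
    """Merge consecutive identical class labels into (start, end, class)
    sub-segments, with end exclusive."""
    segments = []
    start = 0
    for i in range(1, len(labels) + 1):
        # close the current run at the end of the list or on a label change
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, labels[start]))
            start = i
    return segments
```

For example, the label sequence [0, 0, 1, 1, 1, 0] yields three sub-segments, one per run of equal labels.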
S4: and constructing an isolated forest model, extracting feature vectors from the subfragments, and inputting the extracted feature distribution into the isolated forest model of the corresponding scene and the corresponding feature to obtain an abnormal score vector.
In the present embodiment, the extracted features include, but are not limited to, the maximum variation absolute value of the forward axis acceleration, the maximum variation absolute value of the steering axis acceleration, the maximum variation absolute value of the gravitational axis acceleration, the information entropy of the sum of the non-gravitational axis variation, the maximum gradient of the forward axis, the maximum gradient of the steering axis, the braking force, the accelerator force, the angular velocity, and the like.
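To make the feature step concrete, here is a toy sketch of a few of the listed features; the column layout (forward-axis acceleration, steering-axis acceleration, brake force) and the function name are illustrative assumptions, not the patent's definitions:

```python
import numpy as np

def segment_features(seg):
    """Toy feature vector for one sub-segment.

    seg: (T, d) array; columns are assumed to hold forward-axis
    acceleration (col 0), steering-axis acceleration (col 1) and
    brake force (col 2).
    """
    diffs = np.abs(np.diff(seg, axis=0))   # absolute sample-to-sample changes
    return {
        "fwd_acc_max_abs_change": diffs[:, 0].max(),
        "steer_acc_max_abs_change": diffs[:, 1].max(),
        "brake_max_abs_change": diffs[:, 2].max(),
        "fwd_acc_max_gradient": np.abs(np.gradient(seg[:, 0])).max(),
    }
```

Each sub-segment would then contribute one such vector per feature to the scene-specific models described next.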
The specific process of obtaining the abnormal score vector is as follows:
S401: model each feature of each scene independently and train it.
Denote the full feature set F = {f_1, f_2, ..., f_l} (l features in total) and the scene set S = {s_1, s_2, ..., s_k} (k being the number of clusters); l × k isolation forest models are constructed, one per (feature, scene) pair.
the isolated forest model is trained as follows:
m1: randomly selecting psi sample points from training data to serve as sample subsets, and putting the sample subsets into root nodes of a tree;
m2: randomly designating a dimension, namely a feature, and randomly generating a cutting point p in the current node data, wherein the cutting point is generated between the maximum value and the minimum value of the designated dimension in the current node data;
m3: a hyperplane is generated based on the cut point, the current node data space is divided into 2 subspaces, data smaller than p in a specified dimension is placed on the left child node of the current node, and data larger than or equal to p is placed on the right child node of the current node.
M4: recursion steps M2 and M3 in the child nodes continuously construct new child nodes until only one data in the child nodes is available, namely the child nodes cannot be cut continuously, or the child nodes reach a limited height;
m5: m1 to M4 are cyclically executed until t isolated trees iTree are generated.
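The steps M1 to M5 can be sketched as follows. Because each forest here is built per feature, the data are one-dimensional and the random dimension choice of M2 degenerates to the single feature; all names and structural choices below are our own illustration, not the patent's implementation:

```python
import math
import random

def build_itree(data, height, max_height):
    """M2-M4: recursively isolate 1-D samples; leaves record sample count."""
    if len(data) <= 1 or height >= max_height:
        return {"size": len(data)}
    lo, hi = min(data), max(data)
    if lo == hi:                        # a constant node cannot be cut further
        return {"size": len(data)}
    p = random.uniform(lo, hi)          # random cut point between min and max
    return {"split": p,
            "left": build_itree([v for v in data if v < p], height + 1, max_height),
            "right": build_itree([v for v in data if v >= p], height + 1, max_height)}

def build_iforest(data, t=100, psi=256):
    """M1 + M5: t trees, each over a random subsample of up to psi points."""
    max_h = math.ceil(math.log2(max(psi, 2)))
    return [build_itree(random.sample(list(data), min(psi, len(data))), 0, max_h)
            for _ in range(t)]

def path_length(tree, x, height=0):
    """Depth at which sample x is isolated in one tree."""
    if "split" not in tree:
        return height
    return path_length(tree["left" if x < tree["split"] else "right"], x, height + 1)
```

Averaging `path_length` over the forest gives the E(h(x)) used in the anomaly score below.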
S402: and (3) carrying out anomaly scoring on the characteristic data of each sub-segment through a trained isolated forest model, and outputting an anomaly score value, wherein the anomaly score value is as follows:
Figure SMS_11
wherein E (h (x)) is the average of the path lengths of the input samples in the model t (iTree), and c (n) is the average path length of the input n samples to construct a BST binary tree.
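The score formula can be computed directly; c(n) is commonly taken from the harmonic-number approximation H(i) ≈ ln(i) + γ, with γ the Euler-Mascheroni constant. A sketch:

```python
import math

EULER_GAMMA = 0.5772156649

def c(n):
    """Average unsuccessful-search path length of a BST over n samples."""
    if n <= 1:
        return 0.0
    h = math.log(n - 1) + EULER_GAMMA          # harmonic number H(n-1)
    return 2.0 * h - 2.0 * (n - 1) / n

def anomaly_score(mean_path_length, n):
    """s(x, n) = 2 ** (-E(h(x)) / c(n)); near 1 is anomalous, near 0.5 is normal."""
    return 2.0 ** (-mean_path_length / c(n))
```

When the average path length equals c(n), the score is exactly 0.5; very short paths push it toward 1.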
S5: and calculating the collision confidence of each sub-segment based on the abnormal score vector of each sub-segment by a fuzzy comprehensive evaluation method, and taking the maximum collision confidence in the sub-segment as the collision confidence of the whole long-time sequence.
In this embodiment, the collision confidence of the long time sequence is obtained as follows:
S501: based on expert experience, the weights of the different features at each anomaly grade are assigned and a fuzzy relation matrix is constructed.
In this embodiment the comment set V contains four anomaly grades, ranging from severely abnormal to normal. With the features F = {f_1, f_2, ..., f_l}, r_ij denotes the weight of feature f_i at the j-th grade of the comment set V; the fuzzy relation matrix of scene c is R_c, c = 1, 2, ..., k:

$$R_c=\begin{bmatrix}r_{11}&r_{12}&r_{13}&r_{14}\\\vdots&\vdots&\vdots&\vdots\\r_{l1}&r_{l2}&r_{l3}&r_{l4}\end{bmatrix}$$
s502: abnormality score T for each feature of a sub-segment i ={t 1 ,t 2 ,...,t l Normalized, and then dot multiplied with the fuzzy relation matrix, namely B i =T i *R c Obtaining scores B of the fragment characteristics on 4 abnormal degrees i
S503: the scores of different abnormal degrees are multiplied by weights and then summed to obtain the collision confidence of the segment, wherein the collision confidence is as follows:
Figure SMS_13
s504: and taking the largest collision confidence in the collision confidence corresponding to each sub-segment segmented in the long-time sequence as the collision confidence of the whole long-time sequence.
This fusion of the isolation forest with fuzzy comprehensive evaluation solves the problem that an ordinary isolation forest algorithm, when detecting anomalies in vehicle sensor data, cannot adjust the anomaly weights of the features for different scene fragments.
For example, in an acceleration scene the maximum change slope of the steering axis carries a high anomaly weight, while features such as the forward acceleration axis and the maximum change of accelerator force carry low weights. An ordinary isolation forest weighs all input features of each piece of data equally; in an acceleration scene, features such as forward-axis acceleration and accelerator force naturally change strongly, and the model would treat these reasonable changes as strong anomaly factors, lowering its accuracy. The present method instead builds isolation forests separately for the different features under different scenes, judges the scene-specific abnormality of each feature dimension's detection result from multiple dimensions, and finally performs a fuzzy operation with the fuzzy matrix to obtain the final evaluation. This effectively solves the problem that fragment data from different scenes differ in how anomalous each attribute is, and greatly improves the perceptibility of minor accidents such as slight collisions, scrapes and tire bursts.
S6: The collision confidence of the whole long time series, the scene classification data obtained in the previous calculations, the isolation forest model output, and the feature vectors of the abnormal sub-segments are input into a collision decision rule engine, which finally judges whether to push the clue.
The collision decision rule engine stores a rule set that takes the collision confidence as its main basis and fuses information such as the isolation forest score, the feature vector of the abnormal sub-segment, the vehicle speed, and the driving scene. This further reduces the pushing of false-alarm clues, improves the accuracy of the system, and reduces customer-service pressure.
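A toy sketch of such a rule engine (the rule structure, thresholds, and field names here are invented for illustration; the patent does not disclose the concrete rules):

```python
def should_push_clue(confidence, speed_kmh, scene, iforest_score,
                     conf_threshold=0.6):
    """Hypothetical rule set fusing collision confidence with other signals.

    All thresholds and scene names are invented for illustration.
    Returns True if the clue should be pushed for manual verification.
    """
    if confidence < conf_threshold:            # main basis: collision confidence
        return False
    if scene == "parked" and speed_kmh < 1.0:  # suppress parked false alarms
        return iforest_score > 0.8             # unless the anomaly is very strong
    return True

push = should_push_clue(0.9, 40.0, "acceleration", 0.7)  # high confidence: push
drop = should_push_clue(0.3, 40.0, "acceleration", 0.7)  # low confidence: drop
```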
S7: The pushed collision clue is matched to the video at the corresponding moment, and the video is transmitted to customer service personnel for a secondary check to determine whether the owner needs rescue after the collision; the collision accident information is then transmitted to a third party.
Example 2
Embodiment 2 of the invention discloses a vehicle collision monitoring system which, as shown in fig. 3, comprises a data acquisition module, a big data real-time processing unit, a collision detection module, a collision decision module, and a collision clue video matching module;
the data acquisition module is used for acquiring running data of the vehicle through a sensor and transmitting the running data to the big data real-time processing unit;
in this embodiment, the data acquisition module also acquires video data through the video equipment on the vehicle, screens out abnormal video data through an algorithm, and transmits the abnormal video data to the cloud.
In this embodiment, the abnormal video data screening process is as follows:
based on the data acquired by the acceleration sensor, the first-order difference values of the x, y, and z axes are obtained and summed, and the sum is compared against a preset threshold;
if the sum exceeds the preset threshold, the video data within a set time period before and after the current moment is uploaded as abnormal video data; otherwise it is not uploaded. This saves the traffic consumed by uploading video data.
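The screening step can be sketched as follows (a minimal illustrative version; the threshold value, the use of absolute differences, and the sample layout are assumptions, not the patent's exact parameters):

```python
import numpy as np

def is_abnormal(accel_xyz, threshold=3.0):
    """First-order difference of x/y/z acceleration, summed and thresholded.

    accel_xyz: array of shape (T, 3), consecutive accelerometer samples.
    Returns True if the summed absolute first-order differences at any sample
    exceed the threshold, i.e. the surrounding video should be uploaded.
    """
    diffs = np.abs(np.diff(np.asarray(accel_xyz, dtype=float), axis=0))
    per_sample = diffs.sum(axis=1)            # sum the x, y and z differences
    return bool((per_sample > threshold).any())

smooth = [[0.0, 0.0, 9.8], [0.1, 0.0, 9.8], [0.0, 0.1, 9.8]]   # steady driving
impact = [[0.0, 0.0, 9.8], [6.0, -4.0, 12.0], [0.5, 0.2, 9.9]]  # sudden jolt
```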
The big data real-time processing unit is used for decoding, analyzing and processing data and transmitting the processed data to the collision detection module;
the collision detection module comprises a segmentation clustering module and an abnormality detection module, wherein a multivariate time sequence segmentation clustering algorithm and a fuzzy isolated forest abnormality detection algorithm are respectively stored, and are used for calculating and obtaining collision confidence coefficient of a long-time sequence, and transmitting the collision confidence coefficient, isolated forest score, feature vector of an abnormal sub-segment, speed and driving scene to the collision decision module;
the collision decision module is used for judging whether to push the clue or not based on a set rule according to the received data information, pushing the clue if the set rule is met, and ending if the set rule is not met;
the collision clue video matching module is used for receiving the collision clue pushed by the collision decision module and the video data of the cloud end, matching the equipment number, the time stamp and the video data of the collision clue based on the pushed collision clue search, matching the corresponding video data, sending the searched video data to the terminal for manual judgment, pushing the video data to a third party if the video data is judged to be collision, and ending if the video data is judged to be non-collision.
With this system, videos within a set time before and after every abnormal vehicle-body state are uploaded automatically and stored on the cloud server. After the system detects an accident clue, it can accurately and quickly match the clue to the recorder video at the corresponding moment using the vehicle device number and the accident timestamp; the video is then downloaded and transmitted to the accident verification system for manual verification by customer service. This guarantees the accuracy of the accidents pushed by the system while keeping the response time of the whole process short.
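A minimal sketch of the clue-to-video matching described above (the record layout, field names, and time representation are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class VideoClip:
    device_id: str
    start_ts: int  # epoch seconds
    end_ts: int

def match_clue_video(clips, device_id, accident_ts):
    """Find cloud video clips covering the accident timestamp for one device."""
    return [c for c in clips
            if c.device_id == device_id and c.start_ts <= accident_ts <= c.end_ts]

clips = [VideoClip("dev-01", 100, 160), VideoClip("dev-01", 300, 360),
         VideoClip("dev-02", 100, 160)]
hits = match_clue_video(clips, "dev-01", 120)  # clue from dev-01 at t=120
```

In practice this lookup would run against a cloud object store indexed by device number and timestamp; the list scan here only illustrates the matching criterion.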
The invention is not limited to the specific embodiments described above. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification, as well as to any novel one, or any novel combination, of the steps of the method or process disclosed.

Claims (10)

1. A vehicle collision monitoring method, comprising:
S1: an intelligent vehicle-mounted sensor collects driving data of the vehicle and uploads the various collected data to a cloud server according to a set strategy;
S2: the acquired driving data are decoded, analyzed, and processed by a big data real-time processing system and then input into a trained multivariate time-series data segmentation clustering model;
S3: the input multivariate time series is segmented and clustered, dividing the long time segment into a plurality of sub-segments and classifying the sub-segments into fixed scenes;
S4: an isolation forest model is constructed, feature vectors are extracted from the sub-segments, and the extracted features are input into the isolation forest model of the corresponding scene and corresponding feature to obtain an anomaly score vector;
S5: the collision confidence of each sub-segment is calculated from its anomaly score vector by a fuzzy comprehensive evaluation method, and the maximum collision confidence among the sub-segments is taken as the collision confidence of the whole long time series;
S6: the collision confidence of the whole long time series, the scene classification data, the isolation forest model output, and the feature vectors of the abnormal sub-segments are input into a collision decision rule engine, which finally judges whether to push the clue.
2. The vehicle collision monitoring method of claim 1, wherein the segmentation clustering model is represented as follows:
p_i(x) = (2π)^(−d/2) |θ_i|^(−1/2) exp(−(1/2)(x − μ_i)^T θ_i^(−1) (x − μ_i))

wherein θ_i is the covariance matrix of class i, μ_i is its mean vector, x is the input signal segment, d is the dimension of x, and p_i(x) is the probability that the signal segment belongs to the Gaussian distribution with mean vector μ_i and covariance matrix θ_i.
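As an illustrative sketch of assigning a signal segment to a scene class with such Gaussian models (the class names, means, and covariances below are invented for illustration and are not from the patent):

```python
import numpy as np

def gaussian_pdf(x, mu, theta):
    """Multivariate Gaussian density p_i(x) with mean mu, covariance theta."""
    d = len(mu)
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    inv = np.linalg.inv(theta)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(theta))
    return float(norm * np.exp(-0.5 * diff @ inv @ diff))

def classify_segment(x, classes):
    """Assign the segment features x to the class i with the highest p_i(x)."""
    return max(classes, key=lambda name: gaussian_pdf(x, *classes[name]))

# Two hypothetical driving scenes, each a 2-D Gaussian over
# (speed in km/h, forward acceleration in m/s^2)
classes = {
    "cruise":       ([60.0, 0.1], np.diag([25.0, 0.01])),
    "acceleration": ([40.0, 2.0], np.diag([100.0, 0.25])),
}
scene = classify_segment([45.0, 1.8], classes)
```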
3. The vehicle collision monitoring method according to claim 1, wherein the anomaly score vector is obtained as follows:
S401: modeling each feature corresponding to each scene independently and training the models;
S402: carrying out anomaly scoring on the feature data of each sub-segment through the trained isolation forest model, and outputting an anomaly score value:
s(x, n) = 2^(−E(h(x)) / c(n))

wherein E(h(x)) is the average of the path lengths h(x) of the input sample x over the t isolation trees (iTree), and c(n) is the average path length of a BST binary tree constructed from n input samples, used to normalize the score.
4. A vehicle collision monitoring method according to claim 3, wherein the isolation forest model is trained as follows:
M1: randomly selecting ψ sample points from the training data as a sample subset and placing them in the root node of a tree;
M2: randomly designating a dimension, i.e., a feature, and randomly generating a cut point p within the current node data, the cut point lying between the maximum and minimum values of the designated dimension in the current node data;
M3: generating a hyperplane based on the cut point to divide the current node's data space into 2 subspaces, placing the data smaller than p in the designated dimension on the left child node of the current node and the data greater than or equal to p on the right child node;
M4: recursively applying steps M2 and M3 within the child nodes to construct new child nodes, until a child node contains only one data point, i.e., it cannot be split further, or the child node reaches the limited height;
M5: executing M1 to M4 cyclically until t isolation trees (iTree) are generated.
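The M1 to M5 procedure is the classic isolation-tree construction. A compact from-scratch sketch is given below (illustrative only: the parameter values t and ψ, the test data, and the helper names are invented, and production code would normally use an existing library implementation):

```python
import math
import random

def build_itree(data, height, max_height):
    """M2-M4: recursively split on a random feature at a random cut point."""
    if len(data) <= 1 or height >= max_height:
        return {"size": len(data)}
    q = random.randrange(len(data[0]))          # M2: random dimension
    lo = min(row[q] for row in data)
    hi = max(row[q] for row in data)
    if lo == hi:
        return {"size": len(data)}
    p = random.uniform(lo, hi)                  # M2: random cut point
    left = [row for row in data if row[q] < p]  # M3: split into 2 subspaces
    right = [row for row in data if row[q] >= p]
    return {"q": q, "p": p,
            "left": build_itree(left, height + 1, max_height),
            "right": build_itree(right, height + 1, max_height)}

def c(n):
    """Average BST path length for n samples (the normalization term c(n))."""
    if n <= 1:
        return 0.0
    if n == 2:
        return 1.0
    return 2.0 * (math.log(n - 1) + 0.5772156649) - 2.0 * (n - 1) / n

def path_length(x, node, depth=0):
    if "size" in node:
        return depth + c(node["size"])
    child = node["left"] if x[node["q"]] < node["p"] else node["right"]
    return path_length(x, child, depth + 1)

def isolation_forest(data, t=100, psi=64):
    """M1 + M5: build t trees, each on a random subset of psi points."""
    max_h = math.ceil(math.log2(psi))
    return [build_itree(random.sample(data, min(psi, len(data))), 0, max_h)
            for _ in range(t)]

def anomaly_score(x, forest, psi=64):
    """s(x, n) = 2^(-E(h(x)) / c(psi)); closer to 1 means more anomalous."""
    e_h = sum(path_length(x, tree) for tree in forest) / len(forest)
    return 2.0 ** (-e_h / c(psi))

random.seed(0)
data = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(400)]
forest = isolation_forest(data, t=50, psi=64)
normal_score = anomaly_score([0.0, 0.0], forest)
outlier_score = anomaly_score([8.0, -8.0], forest)  # isolates quickly
```

The outlier isolates after only a few random splits, so its average path length is short and its score is pushed toward 1, which is exactly the behavior the claim's formula encodes.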
5. The vehicle collision monitoring method according to claim 1, wherein the collision confidence of the long time series is obtained as follows:
S501: based on expert experience, assigning weight proportions to the different features at the various degrees of abnormality, and constructing a fuzzy relation matrix;
S502: normalizing the anomaly scores of the features of a sub-segment and dot-multiplying them with the fuzzy relation matrix to obtain the segment's score at each degree of abnormality;
S503: multiplying the scores at the different degrees of abnormality by their weights and summing to obtain the collision confidence of the segment;
S504: taking the largest collision confidence among those of the sub-segments segmented from the long time series as the collision confidence of the whole long time series.
6. The vehicle collision monitoring method according to any one of claims 1 to 5, characterized by further comprising, after step S6:
s7: matching the pushed collision clues to videos at corresponding moments, transmitting the videos to customer service personnel for secondary checking, determining whether the car owners need rescue or not after collision, and transmitting the collision accident information to a third party.
7. A vehicle collision monitoring system, characterized by comprising a data acquisition module, a big data real-time processing unit, a collision detection module, and a collision decision module;
the data acquisition module is used for acquiring running data of the vehicle through a sensor and transmitting the running data to the big data real-time processing unit;
the big data real-time processing unit is used for decoding, analyzing and processing data and transmitting the processed data to the collision detection module;
the collision detection module comprises a segmentation clustering module and an anomaly detection module, which store a multivariate time-series segmentation clustering algorithm and a fuzzy isolation forest anomaly detection algorithm, respectively; the collision detection module calculates the collision confidence of the long time series and transmits the collision confidence, the scene classification data, the isolation forest model output, and the feature vectors of the abnormal sub-segments to the collision decision module;
the collision decision module is used for judging whether to push the clue or not based on the set rule according to the received data information, pushing the clue if the set rule is met, and ending if the set rule is not met.
8. The vehicle collision monitoring system of claim 7, wherein the data acquisition module further acquires video data through video equipment on the vehicle, and screens out abnormal video data through an algorithm and transmits the abnormal video data to the cloud.
9. The vehicle collision monitoring system of claim 8, wherein the abnormal video data are screened as follows:
based on the data acquired by the acceleration sensor, the first-order difference values of the x, y, and z axes are obtained and summed, and the sum is compared against a preset threshold;
if the sum exceeds the preset threshold, the video data within a set time period before and after the current moment is uploaded as abnormal video data; otherwise it is not uploaded.
10. The vehicle collision monitoring system according to claim 7, further comprising a collision clue video matching module, configured to receive the collision clue pushed by the collision decision module and the video data from the cloud, search for and match the corresponding video data based on the pushed collision clue, and send the matched video data to the terminal for manual discrimination; if a collision is discriminated, the video data is pushed to a third party; if no collision is discriminated, the process ends.
CN202310334623.1A 2023-03-31 2023-03-31 Vehicle collision monitoring method and system Pending CN116363769A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310334623.1A CN116363769A (en) 2023-03-31 2023-03-31 Vehicle collision monitoring method and system


Publications (1)

Publication Number Publication Date
CN116363769A true CN116363769A (en) 2023-06-30

Family

ID=86906645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310334623.1A Pending CN116363769A (en) 2023-03-31 2023-03-31 Vehicle collision monitoring method and system

Country Status (1)

Country Link
CN (1) CN116363769A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117250602A (en) * 2023-11-15 2023-12-19 中汽研汽车检验中心(天津)有限公司 Collision type prediction method, apparatus, and storage medium
CN117250602B (en) * 2023-11-15 2024-03-15 中汽研汽车检验中心(天津)有限公司 Collision type prediction method, apparatus, and storage medium
CN117289778A (en) * 2023-11-27 2023-12-26 惠州市鑫晖源科技有限公司 Real-time monitoring method for health state of industrial control host power supply
CN117289778B (en) * 2023-11-27 2024-03-26 惠州市鑫晖源科技有限公司 Real-time monitoring method for health state of industrial control host power supply


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination