CN110263836B - Bad driving state identification method based on multi-feature convolutional neural network - Google Patents

Bad driving state identification method based on multi-feature convolutional neural network

Info

Publication number
CN110263836B
CN110263836B (application CN201910510060.0A)
Authority
CN
China
Prior art keywords
data
data set
feature
layer
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910510060.0A
Other languages
Chinese (zh)
Other versions
CN110263836A (en)
Inventor
谢非
汪壬甲
刘文慧
杨继全
吴俊�
章悦
刘益剑
陆飞
汪璠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Intelligent High End Equipment Industry Research Institute Co ltd
Nanjing Normal University
Original Assignee
Nanjing Intelligent High End Equipment Industry Research Institute Co ltd
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Intelligent High End Equipment Industry Research Institute Co ltd, Nanjing Normal University filed Critical Nanjing Intelligent High End Equipment Industry Research Institute Co ltd
Priority to CN201910510060.0A priority Critical patent/CN110263836B/en
Publication of CN110263836A publication Critical patent/CN110263836A/en
Application granted granted Critical
Publication of CN110263836B publication Critical patent/CN110263836B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a bad driving state identification method based on a multi-feature convolutional neural network, comprising the following steps: acquiring data from the inertial sensor of a vehicle-mounted smartphone and preprocessing it to obtain a source data set; dividing the source data set into individual data units, extracting statistical features from each data unit, and labeling the data to produce a data set named the feature data set; building a multi-feature convolutional neural network, selecting appropriate network parameters and an optimizer, and fully training the network with the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model; and classifying the vehicle-mounted phone inertial sensor data with the trained model, thereby identifying the automobile's current driving state, judging whether it is a bad driving state, and recording and processing the data in the background. The method offers fast operation, a high recognition rate, and strong resistance to environmental interference.

Description

Bad driving state identification method based on multi-feature convolutional neural network
Technical Field
The invention relates to the technical field of sensor data acquisition and deep learning, in particular to a bad driving state identification method based on a multi-feature convolutional neural network.
Background
With the rapid development of the automobile industry and the increasing popularity of automobiles, the automobile has become the most important means of transportation. However, some drivers still drive in an irregular manner, and traffic control departments and some ride-hailing platforms hope to supervise drivers' driving states in order to evaluate their driving habits.
At present, there are three main methods for detecting bad driving states. The first detects dangerous driving states by installing various types of sensors or on-board computer systems in the automobile, thereby reducing driving risk. The second judges whether the driver's state is good from external indicators such as eye movement, nodding, and physiological indexes. The third identifies and classifies the vehicle's driving state with portable devices such as smartphones and smartwatches. Compared with the first two, analyzing the driving state with the sensor data of a portable device is simpler and more convenient, which favors wide adoption. However, existing methods of this kind mainly analyze instantaneously acquired data with sensor-data change thresholds or traditional machine learning algorithms, and both their robustness and accuracy need improvement.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a bad driving state identification method based on a multi-feature convolutional neural network, which comprises the following steps:
step 1: collecting and storing data of an inertial sensor of the vehicle-mounted smart phone, preprocessing the collected data of the inertial sensor of the vehicle-mounted smart phone, labeling the data to prepare a data set, and recording the data set as a source data set;
step 2: completing data division of a source data set, dividing the source data set into data units, and performing statistical feature extraction on each data unit to obtain a feature data set;
Step 3: building a multi-feature convolutional neural network, and fully training it with the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model;
Step 4: classifying the vehicle-mounted smartphone inertial sensor data with the trained multi-feature convolutional neural network model, and judging from the classification result whether the automobile's current driving state is a bad driving state.
Further, the step 1 comprises:
step 1.1: acquiring data of an inertial sensor of the smart phone in various automobile driving states, and acquiring and storing various data of the vehicle-mounted smart phone sensor in various driving states, wherein the inertial sensor comprises an accelerometer and a gyroscope;
step 1.2: preprocessing the data acquired in the step 1.1 by adopting a data filtering, coordinate conversion and data centralization method to obtain preprocessed data;
step 1.3: and (3) according to the driving state of the automobile during data acquisition, performing labeling operation on the preprocessed data obtained in the step (1.2) to obtain a labeled data set, and naming the labeled data set as a source data set.
Further, the step 1.1 comprises:
The various driving states of the automobile comprise 10 types: normal driving, parking state, normal acceleration, normal deceleration, normal left turn, normal right turn, sharp left turn, sharp right turn, sharp deceleration, and sharp acceleration. In each of these 10 states, data are acquired from the inertial sensor (accelerometer and gyroscope) of the vehicle-mounted smartphone: the accelerometer collects the triaxial acceleration acc_x, acc_y, acc_z of the phone, the gyroscope collects the triaxial angular velocity gyr_x, gyr_y, gyr_z, and the acquisition time t is recorded. Data are collected for D seconds in each of the 10 driving states, at n_1 samples per second (typically n_1 = 100), and the resulting data sequence is stored in a file.
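Purely as an illustrative sketch of the per-second record layout — the sensor-reading functions below are hypothetical stand-ins that simulate values, not a real phone API:

```python
import numpy as np

rng = np.random.default_rng(0)

def read_accelerometer():
    # hypothetical stand-in for the phone's accelerometer driver:
    # returns one (acc_x, acc_y, acc_z) sample in m/s^2
    return rng.normal([0.0, 0.0, 9.81], 0.05)

def read_gyroscope():
    # hypothetical stand-in for the phone's gyroscope driver:
    # returns one (gyr_x, gyr_y, gyr_z) sample in rad/s
    return rng.normal(0.0, 0.01, size=3)

n1 = 100                      # samples per second
rows = []
for k in range(n1):
    t = k / n1                # acquisition time within the second
    rows.append(np.concatenate([read_accelerometer(), read_gyroscope(), [t]]))
sequence = np.vstack(rows)    # 100 x 7 data sequence for one second
# np.savetxt("sensor_log.csv", sequence, delimiter=",")   # hypothetical file
```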
Further, the step 1.2 comprises:
Step 1.2.1: performing data filtering on the obtained data sequence with a Kalman filter to suppress noise (a minimal filtering sketch follows step 1.2.4);
Step 1.2.2: when the front face of the phone is placed horizontally upward, the phone coordinate system coincides with the geodetic coordinate system; that is, horizontally forward along the automobile driving direction is the positive y-axis, horizontally to the right of the driving direction is the positive x-axis, and perpendicular to the x-y plane, pointing upward, is the positive z-axis. Both the phone coordinate system and the geodetic coordinate system are right-handed;
Step 1.2.3: if the phone cannot keep a horizontal attitude during data acquisition, the data in the phone coordinate system are converted into the geodetic coordinate system by matrix transformation. The coordinate rotation matrices about the x, y, and z axes are, respectively:

$$R_x(\theta)=\begin{pmatrix}1&0&0\\ 0&\cos\theta&-\sin\theta\\ 0&\sin\theta&\cos\theta\end{pmatrix},\qquad R_y(\varphi)=\begin{pmatrix}\cos\varphi&0&\sin\varphi\\ 0&1&0\\ -\sin\varphi&0&\cos\varphi\end{pmatrix},\qquad R_z(\psi)=\begin{pmatrix}\cos\psi&-\sin\psi&0\\ \sin\psi&\cos\psi&0\\ 0&0&1\end{pmatrix}$$

where R_x(θ) is the x-axis coordinate rotation matrix, R_y(φ) is the y-axis coordinate rotation matrix, R_z(ψ) is the z-axis coordinate rotation matrix, θ is the included angle between the x-axis of the phone coordinate system and the x-axis of the geodetic coordinate system, φ is the included angle between the y-axis of the phone coordinate system and the y-axis of the geodetic coordinate system, and ψ is the included angle between the z-axis of the phone coordinate system and the z-axis of the geodetic coordinate system. The acquired acceleration and angular velocity data in the phone coordinate system are coordinate-converted with:

$$A_E = R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,A,\qquad G_E = R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,G$$

where A is the acquired acceleration data in the phone coordinate system, G is the acquired angular velocity data in the phone coordinate system, A_E is the acceleration data in the geodetic coordinate system after coordinate conversion, and G_E is the angular velocity data in the geodetic coordinate system after coordinate conversion. The coordinate-converted data are stored in a data set A_1.
Step 1.2.4: the data in data set A_1 are centered with:

$$\mathrm{mid}X_c^k = X_c^k-\frac{1}{e+1}\sum_{c=0}^{e}X_c^k$$

where X_c^k is the datum in row c and column k of data set A_1, e + 1 is the number of rows of A_1, and midX_c^k is the datum in row c and column k of the centered data set. This yields the preprocessed data set A_2.
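As an illustrative sketch only (not the patented implementation), the preprocessing of steps 1.2.1–1.2.4 could be written in Python roughly as follows; the scalar random-walk Kalman model, the noise variances q and r, and the x–y–z composition order of the rotations are assumptions:

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Step 1.2.1 sketch: scalar Kalman filter over one sensor channel
    (random-walk state model; q and r are assumed noise variances)."""
    x_hat, p = float(z[0]), 1.0
    out = np.empty(len(z))
    for i, meas in enumerate(z):
        p += q                           # predict: covariance grows by q
        k = p / (p + r)                  # Kalman gain
        x_hat += k * (meas - x_hat)      # update with the new measurement
        p *= (1.0 - k)
        out[i] = x_hat
    return out

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def to_geodetic(samples, theta, phi, psi):
    """Steps 1.2.2-1.2.3 sketch: rotate N x 3 phone-frame samples into the
    geodetic frame, v_E = Rx(theta) Ry(phi) Rz(psi) v."""
    R = rot_x(theta) @ rot_y(phi) @ rot_z(psi)
    return samples @ R.T

def center_columns(a1):
    """Step 1.2.4: data centralization, midX = X - column mean."""
    return a1 - a1.mean(axis=0, keepdims=True)
```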
Further, the step 1.3 includes:
Digits 0–9 are used to correspond to the 10 automobile driving states, respectively. Each row of the preprocessed data set A_2 obtained in step 1.2 is labeled with the automobile driving state under which it was acquired; that is, a column is added to A_2 whose entries are the digits 0–9 corresponding to each row's driving state. The labeled data set is recorded as the source data set, whose data structure is:

V = (acc'_x  acc'_y  acc'_z  gyr'_x  gyr'_y  gyr'_z  t  S),

where V is a row of the source data set, acc'_x is the preprocessed phone x-axis acceleration data, acc'_y is the preprocessed phone y-axis acceleration data, acc'_z is the preprocessed phone z-axis acceleration data, gyr'_x is the preprocessed phone x-axis angular velocity data, gyr'_y is the preprocessed phone y-axis angular velocity data, gyr'_z is the preprocessed phone z-axis angular velocity data, t is the acquisition time of the row, and S is the row's data label.
Further, the step 2 includes:
Step 2.1: the source data set is divided according to the acquisition time of the data. During acquisition, data are collected for D seconds in each driving state at 100 groups of data per second, so the n_1 (= 100) rows of data collected within the same second are taken as one data unit;
step 2.2: and (3) performing statistical feature extraction on each data unit obtained in the step (2.1), and making a data set named as a feature data set.
Further, the step 2.2 includes:
Statistical features are extracted from the data units divided in step 2.1. The statistical features to be extracted comprise: the mean, variance, maximum, minimum, variation amplitude, and average crossing rate. Specifically:
The mean reflects the average level of the data well, so the mean of each column of a data unit plays an important role in the prediction and classification of the data. Taking one data unit as the current data unit, the mean is computed according to:

$$\bar{X}^j=\frac{1}{n+1}\sum_{i=0}^{n}X_i^j$$

where X_i^j is the datum in row i and column j of the current data unit, n + 1 is the number of rows per data unit (here n = 99, since 100 rows of data form one data unit), and X̄^j is the mean of the data in column j of the current data unit.
The degree of dispersion of the data is reflected by the variance, which is calculated for each column of the data unit with:

$$(\sigma^j)^2=\frac{1}{n+1}\sum_{i=0}^{n}\left(X_i^j-\bar{X}^j\right)^2$$

where (σ^j)² is the variance of column j of the current data unit.
Maximum value Max (X) of data unit per columni) And minimum Min (X)i) The peak value of the change of the vehicle acceleration can be reflected, and the peak value can also be used as an auxiliary characteristic. The amplitude of the data change is calculated using the following formula:
Figure BDA0002093164280000045
of these, Max (X)i) Is the maximum value of the ith column data of the current data unit, Min (X)i) Is the maximum value of the ith column data of the current data unit,
Figure BDA0002093164280000046
the variation amplitude of the ith column data of the current data unit is obtained.
The average crossing rate of data in each column of a data unit reflects the correlation between adjacent rows of data in the same column. It is calculated with:

$$\mathrm{ACR}^j=\frac{1}{n}\sum_{i=0}^{n-1}\gamma\!\left[\left(X_i^j-\bar{X}^j\right)\left(X_{i+1}^j-\bar{X}^j\right)<0\right]$$

where X_i^j is the datum in row i and column j of the current data unit, X_{i+1}^j is the datum in row i + 1 and column j, X̄^j is the mean of column j of the current data unit, γ is the indicator function (equal to 1 when its argument holds and 0 otherwise), and ACR^j is the average crossing rate of the data in column j of the current data unit.
Each of the first 6 columns of a data unit thus yields 6 feature values per column: mean, variance, maximum, minimum, variation amplitude, and average crossing rate, so each data unit has 36 statistical features. The statistical features of the data units form a new data set, recorded as the feature data set, which has m rows and 36 columns, m being the number of data units.
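A minimal NumPy sketch of steps 2.1–2.2, assuming the source data set is an array whose first 6 columns are the preprocessed sensor channels; the ordering of the 36 feature columns is an assumption:

```python
import numpy as np

def unit_features(unit):
    """Compute the 36 statistical features of one 100 x 6 data unit
    (mean, variance, max, min, variation amplitude, average crossing
    rate for each of the 6 sensor columns)."""
    mean = unit.mean(axis=0)
    var = unit.var(axis=0)
    hi = unit.max(axis=0)
    lo = unit.min(axis=0)
    amplitude = hi - lo
    centered = unit - mean
    # average crossing rate: fraction of adjacent-row pairs lying on
    # opposite sides of the column mean
    acr = (centered[:-1] * centered[1:] < 0).mean(axis=0)
    return np.concatenate([mean, var, hi, lo, amplitude, acr])

def build_feature_dataset(source, rows_per_unit=100):
    """Split the first 6 columns of the source data set into data units
    of rows_per_unit rows and stack their statistics into an m x 36
    feature data set."""
    sensor = source[:, :6]
    m = len(sensor) // rows_per_unit
    units = sensor[: m * rows_per_unit].reshape(m, rows_per_unit, 6)
    return np.stack([unit_features(u) for u in units])
```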
Further, the step 3 includes:
step 3.1: building a multi-feature convolutional neural network, and determining a network structure;
step 3.2: and selecting a network optimizer and training the multi-feature convolutional neural network by using the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model.
Further, the step 3.1 includes:
the multi-feature convolutional neural network structure is composed of 3 parts, and the specific construction method is as follows:
The first part comprises an input layer, two convolutional layers, and a pooling layer; it performs convolutional feature extraction on the data and produces the convolution feature map of the b-th data unit, where b is taken from 0 to m in sequence and m is the number of data units. The input of the first part comes from the source data set: since each data unit is a 100 × 8 two-dimensional array, its first 6 columns (100 × 6) are taken as the input and sent to the input layer. The input layer is followed by the first convolutional layer of the first part, which uses 16 convolution kernels of size 3 × 3 with stride 1 and padding 1. The output size of a convolutional layer is computed as:

$$Z=\frac{W-F+2P}{S}+1$$

where Z is the length of the convolution output data, W is the length of the convolution input data, P is the padding, F is the length of the convolution kernel, and S is the stride. For the first convolutional layer of the first part this gives an output size of 100 × 6 × 16 (for example, along the first dimension, Z = (100 − 3 + 2·1)/1 + 1 = 100). A linear rectification function (ReLU) is used as the activation function after the convolutional layer. The activated data are sent into the second convolutional layer of the first part, which uses 32 convolution kernels of size 3 × 3 with stride 1 and padding 1; by the same formula, its output size is 100 × 6 × 32, and it is likewise followed by a ReLU activation. The activated data are sent into the pooling layer of the first part. By working principle, pooling layers are generally of two kinds, max pooling and mean pooling; all pooling layers in this method are max pooling layers. The pooling layer of the first part slides a 2 × 2 rectangular window with horizontal stride 2 and vertical stride 2. The output size of a pooling layer is computed as:

$$Z'=\frac{W'-F'}{S'}+1$$

where Z' is the length of the pooling output, W' is the length of the pooling input, F' is the length of the filter, and S' is the stride in the horizontal direction. By this formula, the output size of the pooling layer of the first part is 50 × 3 × 32. The output of the pooling layer of the first part is the convolution feature map of the b-th data unit.
The second part consists of a convolutional layer and a pooling layer. The convolution feature map of the b-th data unit obtained from the first part and the convolution feature map of the (b−1)-th data unit are fused and reshaped into a two-dimensional map of size 100 × 96, named the fully integrated feature map. When b = 0, the convolution feature map of the (b−1)-th data unit is replaced by all-zero data of size 50 × 3 × 32. The fully integrated feature map is fed into the convolutional layer of the second part, which uses 6 convolution kernels of size 3 × 3 with stride 1 and padding 1; by the convolutional output-size formula, its output size is 100 × 96 × 6, and it is likewise followed by a ReLU activation. The activated data are fed into the pooling layer of the second part, which slides a 2 × 2 rectangular window with horizontal stride 2 and vertical stride 2; by the pooling output-size formula, the output size of the second part is 50 × 48 × 6.
The third part consists of a three-layer fully connected network: an input layer, a hidden layer, and an output layer. The input of the third part is formed from the output of the second part together with the feature data set: the 50 × 48 × 6 output of the second part (14,400 values) and the 36 data in the b-th row of the feature data set are flattened and concatenated into a one-dimensional vector of size 14436 × 1, which is fed into the input layer. The hidden layer contains 1024 neurons, and the output layer contains 10 neurons corresponding to the 10 driving states of the automobile.
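As an illustrative sketch only, the three-part network described above might be expressed in PyTorch as follows; the class and variable names, the channel-first layout, and the current-first ordering when fusing the two feature maps are assumptions, not the patent's implementation:

```python
import torch
import torch.nn as nn

class MultiFeatureCNN(nn.Module):
    """Sketch of the three-part multi-feature CNN described above."""
    def __init__(self):
        super().__init__()
        # Part 1: two conv layers + max pooling on a 1 x 100 x 6 data unit
        self.part1 = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=1, padding=1), nn.ReLU(),   # -> 16 x 100 x 6
            nn.Conv2d(16, 32, 3, stride=1, padding=1), nn.ReLU(),  # -> 32 x 100 x 6
            nn.MaxPool2d(2, stride=2),                             # -> 32 x 50 x 3
        )
        # Part 2: conv + pooling over the fused current/previous feature
        # maps, reshaped into a 1 x 100 x 96 fully integrated feature map
        self.part2 = nn.Sequential(
            nn.Conv2d(1, 6, 3, stride=1, padding=1), nn.ReLU(),    # -> 6 x 100 x 96
            nn.MaxPool2d(2, stride=2),                             # -> 6 x 50 x 48
        )
        # Part 3: fully connected 14400 + 36 = 14436 -> 1024 -> 10
        self.part3 = nn.Sequential(
            nn.Linear(6 * 50 * 48 + 36, 1024), nn.ReLU(),
            nn.Linear(1024, 10),
        )

    def forward(self, unit, prev_map, stats):
        """unit: (B, 1, 100, 6) data unit; prev_map: (B, 32, 50, 3) feature
        map of the previous data unit (zeros when b = 0); stats: (B, 36)
        row of the feature data set. Returns logits and this unit's map."""
        cur_map = self.part1(unit)
        fused = torch.cat([cur_map.flatten(1), prev_map.flatten(1)], dim=1)
        fused = fused.view(-1, 1, 100, 96)       # fully integrated feature map
        conv2 = self.part2(fused).flatten(1)     # (B, 14400)
        logits = self.part3(torch.cat([conv2, stats], dim=1))
        return logits, cur_map
```

The forward pass returns the current unit's feature map so the caller can pass it back in as prev_map when classifying the next data unit, mirroring the fusion of adjacent data units described above.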
Further, the step 3.2 includes:
First, the source data set and the feature data set are divided into a training set and a test set at a ratio of 4:1. During training, each data unit of the source data set together with the corresponding row of the feature data set is used as one training unit; the loss function is the cross-entropy loss, and the network optimizer is the Adam optimizer, which fully trains the network to obtain the trained multi-feature convolutional neural network model.
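A minimal training-loop sketch under the same assumptions as the model sketch above; the epoch count, learning rate, and single-unit batching are illustrative choices, not values from the patent:

```python
import torch
import torch.nn as nn

def train(model, units, stats, labels, epochs=50, lr=1e-3):
    """units: (m, 1, 100, 6) float tensor; stats: (m, 36) float tensor;
    labels: (m,) long tensor of state digits 0-9. Data units are processed
    in acquisition order so each step can reuse the previous unit's
    convolution feature map."""
    split = int(len(units) * 0.8)                  # 4:1 train/test split
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        prev_map = torch.zeros(1, 32, 50, 3)       # all-zero map for b = 0
        for b in range(split):
            logits, cur_map = model(units[b:b+1], prev_map, stats[b:b+1])
            loss = loss_fn(logits, labels[b:b+1])
            opt.zero_grad()
            loss.backward()
            opt.step()
            prev_map = cur_map.detach()            # carry forward to unit b+1
    return model
```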
Further, the step 4 comprises:
Step 4.1: acquiring vehicle-mounted smartphone inertial sensor data in real time while the automobile is driving, and classifying the real-time data with the trained model obtained in step 3 to obtain the automobile's current driving state class.
Step 4.2: a judgment is made on the current driving state of the automobile obtained in step 4.1. If the driving state is any of normal driving, parking state, normal acceleration, normal deceleration, normal left turn, or normal right turn, the driver's current driving state is good. If the driving state is any of sharp left turn, sharp right turn, sharp deceleration, or sharp acceleration, a bad driving state has occurred: a prompt tone is emitted to remind the driver to drive properly, the data are recorded, the driver's bad-driving count is tallied every 24 hours, and the count is uploaded to the background.
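Step 4.2 reduces to an argmax over the 10 outputs followed by a set-membership test. A sketch, assuming the digit-to-state assignment follows the order the states are listed in step 1.1 (consistent with the rapid-acceleration example given later, where digit 9 is sharp acceleration):

```python
# digits 0-9 in the order the states are listed in step 1.1
STATES = ["normal driving", "parking state", "normal acceleration",
          "normal deceleration", "normal left turn", "normal right turn",
          "sharp left turn", "sharp right turn", "sharp deceleration",
          "sharp acceleration"]
BAD = {6, 7, 8, 9}  # the four sharp-maneuver states

def judge(logits):
    """Map the network outputs for one data unit to (state name, is_bad)."""
    state = int(logits.argmax())
    return STATES[state], state in BAD
```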
According to the technical scheme, the invention provides a bad driving state identification method based on a multi-feature convolutional neural network, comprising: step 1: collecting and storing vehicle-mounted smartphone inertial sensor data, preprocessing the collected data, and labeling them to produce a data set named the source data set; step 2: dividing the source data set into individual data units and extracting statistical features from each data unit to produce a data set named the feature data set; step 3: building a multi-feature convolutional neural network, selecting appropriate network parameters and an optimizer, and fully training the network with the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model; and step 4: classifying the vehicle-mounted smartphone inertial sensor data with the trained model to identify the automobile's current driving state, judge whether it is a bad driving state, and record and process the data in the background.
The invention provides a bad driving state identification method based on a multi-feature convolutional neural network that fully exploits the correlation between adjacent moments of an automobile's driving state: the network analyzes and predicts the driving state from data at adjacent moments. This addresses the low accuracy and poor stability of existing bad-driving recognition systems. The method is also highly portable, can be deployed on a smartphone platform, and has broad application prospects.
Aiming at the limited precision and stability of existing driving state recognition, the invention provides a bad driving state identification method based on a multi-feature convolutional neural network and smartphone inertial sensors, and designs the corresponding network model and algorithm, improving the stability and accuracy of the driving state recognition system and reliably recognizing the various driving states.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic workflow diagram of a method for identifying an adverse driving state based on a multi-feature convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a system general block diagram of an identification method of an adverse driving state based on a multi-feature convolutional neural network according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a coordinate system of a smart phone according to an embodiment of the present invention;
fig. 4 is a data acquisition field image of an inertial sensor of a vehicle-mounted smart phone according to an embodiment of the present invention;
FIG. 5 is a structural diagram of a bad driving recognition system based on a multi-feature convolutional neural network according to an embodiment of the present invention;
FIG. 6 is a diagram of a multi-feature convolutional neural network model provided by an embodiment of the present invention;
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
The embodiment of the invention discloses a method for identifying bad driving states based on a multi-feature convolutional neural network, which collects data from the inertial sensor of a vehicle-mounted smartphone, comprising the accelerometer [see Yang, Hu, The principle and teaching application of the accelerometer in smart phones [J], Physical Report, 2017(01):80-81] and the gyroscope [see Liu, Yang, The miniature gyroscope [J], Mechanics and Practice, 2017,39(05):506-]. The data are preprocessed to produce a data set, divided into data units, and statistical features are extracted. A multi-feature convolutional neural network is built and trained with the acquired data, and the resulting network model predicts the driving state of the automobile. The method can be applied to fields such as intelligent driving.
Referring to fig. 1 and fig. 2, a schematic workflow diagram of an undesirable driving condition identification method based on a multi-feature convolutional neural network according to an embodiment of the present invention includes the following steps:
step 1: collecting and storing data of an inertial sensor of the vehicle-mounted smart phone, preprocessing the collected data of the inertial sensor of the vehicle-mounted smart phone, labeling the data to prepare a data set, and naming the data set as a source data set;
step 2: finishing data division of a source data set, dividing the source data set into individual data units, performing statistical feature extraction on each data unit, and making a data set named as a feature data set;
Step 3: building a multi-feature convolutional neural network, selecting network parameters and an optimizer, and fully training the network with the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model;
Step 4: classifying the vehicle-mounted smartphone inertial sensor data with the trained multi-feature convolutional neural network model to identify the automobile's current driving state, judging whether it is a bad driving state, and recording and processing the data in the background.
The invention is further described with reference to the following figures and specific examples.
In the embodiment of the present invention, a coordinate system of the smartphone is as shown in fig. 3, a field picture of smartphone sensor data acquisition is as shown in fig. 4, and data of the smartphone inertial sensor is acquired, coordinate converted, and preprocessed according to the example of fig. 4.
The step 1 comprises the following steps: step 1.1: acquiring data of inertial sensors (an accelerometer and a gyroscope) of the smart phone in various automobile driving states, and acquiring and storing various data of the vehicle-mounted smart phone sensors in various driving states;
step 1.2: preprocessing the data acquired in the step 1.1 by adopting a data filtering, coordinate conversion and data centralization method to obtain preprocessed data;
step 1.3: and (3) according to the driving state of the automobile during data acquisition, performing labeling operation on the preprocessed data obtained in the step (1.2) to obtain a labeled data set, and naming the labeled data set as a source data set.
In an embodiment of the present invention, step 1.1 includes:
The driving state of the automobile is divided into 10 types: normal driving, parking state, normal acceleration, normal deceleration, normal left turn, normal right turn, sharp left turn, sharp right turn, sharp deceleration, and sharp acceleration. In each of these 10 states, data are acquired from the inertial sensor (accelerometer and gyroscope) of the vehicle-mounted smartphone: the accelerometer collects the triaxial acceleration acc_x, acc_y, acc_z of the phone, the gyroscope collects the triaxial angular velocity gyr_x, gyr_y, gyr_z, and the acquisition time t is recorded. Data are collected for D seconds (D ≥ 10000) in each of the 10 driving states, at 100 samples per second, and the resulting data sequence is stored in a file.
The step 1.2 comprises the following steps:
Data filtering is performed on the obtained data sequence according to a Kalman filter, a common recursive filter [see Zhou P, Li M, Shen G, Use It Free: Instantly Knowing Your Phone Attitude [C]// Mobile Computing & Networking, ACM, 2014:101-], to suppress noise. When the front face of the phone is placed horizontally upward, the phone coordinate system coincides with the geodetic coordinate system: horizontally forward along the automobile driving direction is the positive y-axis, horizontally to the right of the driving direction is the positive x-axis, and perpendicular to the x-y plane, pointing upward, is the positive z-axis. Both the phone coordinate system and the geodetic coordinate system are right-handed. If the phone cannot keep a horizontal attitude during data acquisition, matrix transformation is used to convert the data in the phone coordinate system into the geodetic coordinate system. The coordinate rotation matrices about the x, y, and z axes are, respectively:

$$R_x(\theta)=\begin{pmatrix}1&0&0\\ 0&\cos\theta&-\sin\theta\\ 0&\sin\theta&\cos\theta\end{pmatrix},\qquad R_y(\varphi)=\begin{pmatrix}\cos\varphi&0&\sin\varphi\\ 0&1&0\\ -\sin\varphi&0&\cos\varphi\end{pmatrix},\qquad R_z(\psi)=\begin{pmatrix}\cos\psi&-\sin\psi&0\\ \sin\psi&\cos\psi&0\\ 0&0&1\end{pmatrix}$$

where R_x(θ) is the x-axis coordinate rotation matrix, R_y(φ) is the y-axis coordinate rotation matrix, R_z(ψ) is the z-axis coordinate rotation matrix, θ is the included angle between the x-axis of the phone coordinate system and the x-axis of the geodetic coordinate system, φ is the included angle between the y-axis of the phone coordinate system and the y-axis of the geodetic coordinate system, and ψ is the included angle between the z-axis of the phone coordinate system and the z-axis of the geodetic coordinate system. The acquired acceleration and angular velocity data in the phone coordinate system are coordinate-converted with:

$$A_E = R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,A,\qquad G_E = R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,G$$

where A is the acquired acceleration data in the phone coordinate system, G is the acquired angular velocity data in the phone coordinate system, A_E is the acceleration data in the geodetic coordinate system after coordinate conversion, and G_E is the angular velocity data in the geodetic coordinate system after coordinate conversion. The coordinate-converted data are stored in a data set A_1.

The data in data set A_1 are then centered with:

$$\mathrm{mid}X_c^k = X_c^k-\frac{1}{e+1}\sum_{c=0}^{e}X_c^k$$

where X_c^k is the datum in row c and column k of data set A_1, e + 1 is the number of rows of A_1, and midX_c^k is the datum in row c and column k of the centered data set, yielding the preprocessed data set A_2.
The step 1.3 comprises the following steps:
Digits 0–9 are used to correspond to the 10 automobile driving states, respectively. Each row of the preprocessed data set A_2 obtained in step 1.2 is labeled with the automobile driving state under which it was acquired; that is, a column is added to A_2 whose entries are the digits 0–9 corresponding to each row's driving state. The labeled data set is recorded as the source data set, whose data structure is:

V = (acc'_x  acc'_y  acc'_z  gyr'_x  gyr'_y  gyr'_z  t  S),

where V is a row of the source data set, acc'_x is the preprocessed phone x-axis acceleration data, acc'_y is the preprocessed phone y-axis acceleration data, acc'_z is the preprocessed phone z-axis acceleration data, gyr'_x is the preprocessed phone x-axis angular velocity data, gyr'_y is the preprocessed phone y-axis angular velocity data, gyr'_z is the preprocessed phone z-axis angular velocity data, t is the acquisition time of the row, and S is the row's data label.
In the embodiment of the present invention, the step 2 includes:
Step 2.1: the source data set is divided according to the acquisition time of the data. During acquisition, data are collected for D seconds in each driving state at 100 groups of data per second, so the n_1 (= 100) rows of data collected within the same second are taken as one data unit;
step 2.2: and (3) performing statistical feature extraction on each data unit obtained in the step (2.1), and making a data set named as a feature data set.
The step 2.2 comprises the following steps:
Statistical features are extracted from the data units divided in step 2.1; the features to be extracted comprise: mean, variance, maximum, minimum, variation amplitude, and average crossing rate.
The mean reflects the average level of the data well, so the mean of each column of data of a data unit plays an important role in the prediction and classification of the data. The mean is calculated according to:

$$\bar{X}^j=\frac{1}{n+1}\sum_{i=0}^{n}X_i^j$$

where X_i^j is the datum in row i and column j of the data unit, n + 1 is the number of rows per data unit (here n = 99, since 100 rows of data are taken as one data unit), and X̄^j is the mean of the data in column j of the current data unit.
The degree of dispersion of the data is reflected by the variance, which is calculated for each column of the data unit with:

$$(\sigma^j)^2=\frac{1}{n+1}\sum_{i=0}^{n}\left(X_i^j-\bar{X}^j\right)^2$$

where (σ^j)² is the variance of column j of the current data unit.
Maximum value Max (X) of data unit per columni) And minimum Min (X)i) The peak value of the change of the vehicle acceleration can be reflected, and the peak value can also be used as an auxiliary characteristic. And the magnitude of the data change can be calculated using the following equation:
Figure BDA0002093164280000121
of these, Max (X)i) Is the maximum value of the ith row data of the data unit, Min (X)i) Is the maximum value of the ith column of data in the data unit,
Figure BDA0002093164280000122
the variation amplitude of the ith column data of the data unit is shown.
The average crossing rate of data in each column of a data unit reflects the correlation between adjacent rows of data in the same column. It is calculated as:

$$\mathrm{ACR}^j=\frac{1}{n}\sum_{i=0}^{n-1}\gamma\!\left[\left(X_i^j-\bar{X}^j\right)\left(X_{i+1}^j-\bar{X}^j\right)<0\right]$$

where X_i^j is the datum in row i and column j of the data unit, X_{i+1}^j is the datum in row i + 1 and column j, X̄^j is the mean of column j of the current data unit, γ is the indicator function [see Zheng, Zhang, Yang, A level set active contour model with an improved boundary indicator function [J], Laser Technology, 2016,40(1):126-], and ACR^j is the average crossing rate of the data in column j of the current data unit [on crossing rates, see Peng, Li, Dynamically adjusting the crossover and mutation rates of a genetic algorithm based on fuzzy reasoning [J], Pattern Recognition and Artificial Intelligence, 2002,15(04):413-].
Each of the first 6 columns of a data unit thus yields 6 feature values per column: mean, variance, maximum, minimum, variation amplitude, and average crossing rate, so each data unit has 36 statistical features. The statistical features of the data units form a new data set, named the feature data set, which has m rows and 36 columns, m being the number of data units.
Fig. 5 shows the structural diagram of the system for identifying bad driving states based on the multi-feature convolutional neural network according to an embodiment of the present invention. As fig. 5 indicates, the invention analyzes the automobile's driving state jointly from two adjacent data units, sending the source data set data and the feature data set data of the two adjacent data units into the multi-feature convolutional neural network for classification; fig. 6 shows the structure of the multi-feature convolutional neural network. In the embodiment of the present invention, the step 3 includes:
step 3.1: building a multi-feature convolutional neural network, and determining a network structure;
step 3.2: and selecting a network optimizer and training the multi-feature convolutional neural network by using the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model.
In the embodiment of the present invention, the step 3.1 includes:
the multi-feature convolutional neural network structure is composed of 3 parts, and the specific construction method is as follows:
The first part comprises an input layer, two convolutional layers, and a pooling layer; it mainly performs convolutional feature extraction on the data and produces the convolution feature map of the b-th data unit, where b is taken from 0 to m in sequence and m is the number of data units. The input of this part comes from the source data set: since each data unit is a 100 × 8 two-dimensional array, its first 6 columns (100 × 6) are taken as the input and sent to the input layer. The input layer is followed by the first convolutional layer of the first part, which uses 16 convolution kernels of size 3 × 3 with stride 1 and padding 1. The output size of a convolutional layer is computed as:

$$Z=\frac{W-F+2P}{S}+1$$

where Z is the length of the convolution output data, W is the length of the convolution input data, P is the padding, F is the length of the convolution kernel, and S is the stride. For the first convolutional layer of the first part this gives an output size of 100 × 6 × 16. A linear rectification function (ReLU) [see Wu, An image denoising algorithm based on deep learning [D], Shanghai Jiao Tong University, 2015] is used as the activation function after the convolutional layer. The activated data are sent into the second convolutional layer of the first part, which uses 32 convolution kernels of size 3 × 3 with stride 1 and padding 1; by the same formula, its output size is 100 × 6 × 32, and it is likewise followed by a ReLU activation. The activated data are sent into the pooling layer of the first part. By working principle, pooling layers are generally of two kinds, max pooling and mean pooling; all pooling layers in this method are max pooling layers. The pooling layer of the first part slides a 2 × 2 rectangular window with horizontal stride 2 and vertical stride 2. The output size of a pooling layer is computed as:

$$Z'=\frac{W'-F'}{S'}+1$$

where Z' is the length of the pooling output, W' is the length of the pooling input, F' is the length of the filter, and S' is the stride in the horizontal direction. By this formula, the output size of the pooling layer of the first part is 50 × 3 × 32. The output of the pooling layer of the first part is the convolution feature map of the b-th data unit.
The second part consists of a convolutional layer and a pooling layer. The convolution feature map of the b-th data unit obtained from the first part and the convolution feature map of the (b−1)-th data unit are fused and reshaped into a two-dimensional map of size 100 × 96, named the fully integrated feature map. When b = 0, the convolution feature map of the (b−1)-th data unit is replaced by all-zero data of size 50 × 3 × 32. The fully integrated feature map is fed into the convolutional layer of the second part, which uses 6 convolution kernels of size 3 × 3 with stride 1 and padding 1; by the convolutional output-size formula, its output size is 100 × 96 × 6, and it is likewise followed by a ReLU activation. The activated data are fed into the pooling layer of the second part, which slides a 2 × 2 rectangular window with horizontal stride 2 and vertical stride 2; by the pooling output-size formula, the output size of the second part is 50 × 48 × 6.
The third part consists of a three-layer fully connected network: an input layer, a hidden layer, and an output layer. The input of the third part is formed from the output of the second part together with the feature data set: the 50 × 48 × 6 output of the second part (14,400 values) and the 36 data in the b-th row of the feature data set are flattened and concatenated into a one-dimensional vector of size 14436 × 1, which is fed into the input layer. The hidden layer contains 1024 neurons, and the output layer contains 10 neurons corresponding to the 10 driving states of the automobile.
In the embodiment of the present invention, the step 3.2 includes:
First, the source data set and the feature data set are divided into a training set and a test set at a ratio of 4:1. During training, each data unit of the source data set together with the corresponding row of the feature data set is used as one training unit; the loss function is the cross-entropy loss [see Ran, Wang, Li, Liu, A modified deep convolutional neural network with a Softmax classifier and its application in face recognition [J], Journal of Shanghai University (Natural Science Edition), 2018,24(03):352-], and the network optimizer is the Adam optimizer, which fully trains the network to obtain the trained multi-feature convolutional neural network model.
In the embodiment of the present invention, the step 4 includes:
Step 4.1: acquiring vehicle-mounted smartphone inertial sensor data in real time while the automobile is driving, and classifying the real-time data with the trained model obtained in step 3 to obtain the automobile's current driving state class.
Step 4.2: a judgment is made according to the current driving state of the automobile obtained in step 4.1. If the driving state is normal driving, parking state, normal acceleration, normal deceleration, normal left turn, or normal right turn, the driver's current driving state is good; if the driving state is sharp left turn, sharp right turn, sharp deceleration, or sharp acceleration, a bad driving state has occurred, a prompt tone is emitted to remind the driver to drive properly, and the data are recorded. The driver's bad-driving count is tallied every 24 hours and uploaded to the background.
Example: taking the rapid acceleration state, when the automobile is in the rapid acceleration state, the vehicle-mounted phone sensor data are collected for state recognition, preprocessed in real time, and fed into the multi-feature convolutional neural network. The output results of the 10 neurons of the output layer are (0.011, 0.002, 0.073, 0.003, 0.013, 0.020, 0.003, 0.009, 0.866), representing the probability that the automobile belongs to states 0 through 9 at that moment. The probability of state 9 is the highest, and 9 corresponds to the rapid acceleration state, indicating that the network predicts a rapid acceleration state at that moment, which is a bad driving state.
Through the implementation of the technical scheme, the advantages of the invention are: (1) a data acquisition method and data preprocessing process for vehicle-mounted smartphone sensors, comprising data filtering, coordinate transformation, and data centralization; (2) a data set production method and a multi-feature convolutional neural network construction method in which two adjacent data units jointly complete the classification of the automobile's driving state; (3) a bad driving state identification method based on the multi-feature convolutional neural network; (4) fast recognition of bad driving states, high recognition precision, and good system stability.
In a specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program may include some or all of the steps in each embodiment of the method for identifying an adverse driving state based on a multi-feature convolutional neural network provided by the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The invention provides a method for identifying bad driving states based on a multi-feature convolutional neural network, and a method and a way for implementing the technical scheme are numerous, the above description is only a preferred embodiment of the invention, and it should be noted that, for a person skilled in the art, a plurality of improvements and embellishments can be made without departing from the principle of the invention, and the improvements and embellishments should also be regarded as the protection scope of the invention. All the components not specified in the present embodiment can be realized by the prior art.

Claims (2)

1. A bad driving state identification method based on a multi-feature convolutional neural network is characterized by comprising the following steps:
step 1: collecting and storing data of an inertial sensor of the vehicle-mounted smart phone, preprocessing the collected data of the inertial sensor of the vehicle-mounted smart phone, labeling the data to prepare a data set, and recording the data set as a source data set;
step 2: completing data division of a source data set, dividing the source data set into data units, and performing statistical feature extraction on each data unit to obtain a feature data set;
and step 3: building a multi-feature convolutional neural network, and fully training the multi-feature convolutional neural network by using a source data set and a feature data set to obtain a trained multi-feature convolutional neural network model;
step 4, classifying the data of the inertial sensor of the vehicle-mounted smart phone by using the trained multi-feature convolutional neural network model, and judging whether the current driving state of the automobile is a bad driving state or not according to the classification result;
the step 1 comprises the following steps:
step 1.1: acquiring data of an inertial sensor of the smart phone in various automobile driving states, and acquiring and storing various data of the vehicle-mounted smart phone sensor in various driving states, wherein the inertial sensor comprises an accelerometer and a gyroscope;
step 1.2: preprocessing the data acquired in the step 1.1 by adopting a data filtering, coordinate conversion and data centralization method to obtain preprocessed data;
step 1.3: labeling the preprocessed data obtained in the step 1.2 according to the driving state of the automobile during data acquisition to obtain a labeled data set, and recording the labeled data set as a source data set;
step 1.1 comprises:
the various driving states of the automobile comprise 10 types: normal driving, parking state, normal acceleration, normal deceleration, normal left turn, normal right turn, sharp left turn, sharp right turn, sharp deceleration, and sharp acceleration; in each of these 10 states, data are acquired from the inertial sensor of the vehicle-mounted smartphone: the accelerometer collects the triaxial acceleration acc_x, acc_y, acc_z of the phone, the gyroscope collects the triaxial angular velocity gyr_x, gyr_y, gyr_z, and the acquisition time t is recorded, wherein D seconds are acquired for each of the 10 driving states, with n_1 samples acquired per second, obtaining a data sequence which is then stored;
step 1.2 comprises:
step 1.2.1, performing data filtering on the obtained data sequence according to a Kalman filter;
step 1.2.2, when the front face of the phone is placed horizontally upward, the phone coordinate system coincides with the geodetic coordinate system, namely horizontally forward along the automobile driving direction is the positive y-axis, horizontally to the right of the driving direction is the positive x-axis, and perpendicular to the x-y plane, pointing upward, is the positive z-axis; the phone coordinate system and the geodetic coordinate system are both right-hand coordinate systems;
step 1.2.3, if the mobile phone cannot keep a horizontal posture in the data acquisition process, converting data under a mobile phone coordinate system into a geodetic coordinate system by using matrix transformation, wherein the following formulas are coordinate rotation matrixes of an x axis, a y axis and a z axis respectively:
$$R_x(\theta)=\begin{pmatrix}1&0&0\\ 0&\cos\theta&-\sin\theta\\ 0&\sin\theta&\cos\theta\end{pmatrix},\qquad R_y(\varphi)=\begin{pmatrix}\cos\varphi&0&\sin\varphi\\ 0&1&0\\ -\sin\varphi&0&\cos\varphi\end{pmatrix},\qquad R_z(\psi)=\begin{pmatrix}\cos\psi&-\sin\psi&0\\ \sin\psi&\cos\psi&0\\ 0&0&1\end{pmatrix}$$

wherein R_x(θ) is the x-axis coordinate rotation matrix, R_y(φ) is the y-axis coordinate rotation matrix, R_z(ψ) is the z-axis coordinate rotation matrix, θ is the included angle between the x-axis of the phone coordinate system and the x-axis of the geodetic coordinate system, φ is the included angle between the y-axis of the phone coordinate system and the y-axis of the geodetic coordinate system, and ψ is the included angle between the z-axis of the phone coordinate system and the z-axis of the geodetic coordinate system;
and performing coordinate conversion on the acquired acceleration data and the acquired angular velocity data under the mobile phone coordinate system by using the following formula:
$$A_E = R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,A,\qquad G_E = R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,G$$

wherein A is the acquired acceleration data in the phone coordinate system, G is the acquired angular velocity data in the phone coordinate system, A_E is the acceleration data in the geodetic coordinate system after coordinate conversion, and G_E is the angular velocity data in the geodetic coordinate system after coordinate conversion; the coordinate-converted data are stored in a data set A_1;
step 1.2.4, the data in data set A_1 are centered with:

$$\mathrm{mid}X_c^k = X_c^k-\frac{1}{e+1}\sum_{c=0}^{e}X_c^k$$

wherein X_c^k is the datum in row c and column k of data set A_1, e + 1 is the number of rows of A_1, and midX_c^k is the datum in row c and column k of the centered data set, yielding the preprocessed data set A_2;
Step 1.3 comprises:
the numbers 0-9 are assigned to the 10 automobile driving states respectively. Each row of the preprocessed data set $A_2$ obtained in step 1.2 is labeled with the driving state under which it was acquired, i.e. a column is appended to $A_2$ whose content is the number 0-9 corresponding to the driving state during acquisition of that row. The labeled data set is recorded as the source data set, whose row structure is:

$$V = (acc_x'\;\; acc_y'\;\; acc_z'\;\; gyr_x'\;\; gyr_y'\;\; gyr_z'\;\; t\;\; S)$$

where $V$ is a row of the source data set; $acc_x'$, $acc_y'$ and $acc_z'$ are the preprocessed x-, y- and z-axis acceleration data of the phone; $gyr_x'$, $gyr_y'$ and $gyr_z'$ are the preprocessed x-, y- and z-axis angular velocity data of the phone; $t$ is the acquisition time of the row; and $S$ is the label of the row;
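As an illustrative sketch of the labeling step (the state-to-number assignment below follows the order the states are listed in step 1 and is an assumption; the patent only states that 0-9 correspond to the 10 states):

```python
import numpy as np

STATES = ["normal driving", "parking", "normal acceleration",
          "normal deceleration", "normal left turn", "normal right turn",
          "sharp left turn", "sharp right turn",
          "sharp deceleration", "sharp acceleration"]  # assumed labels 0..9

def make_source_rows(a2, t, label):
    """Append acquisition time t and state label S to the 6 sensor columns."""
    n = len(a2)
    return np.column_stack([a2, t, np.full(n, label)])
```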
Step 2 comprises:
step 2.1: the source data set is divided according to the acquisition time of the data: the $n_1$ rows acquired within the same second are taken as one data unit;
step 2.2: statistical feature extraction is performed on each data unit obtained in step 2.1, producing a data set named the feature data set;
step 2.2 comprises:
statistical features are extracted from the data units divided in step 2.1. The features to be extracted comprise the average value, variance, maximum value, minimum value, variation amplitude and average crossing rate, specifically:
Taking any one data unit as the current data unit, the average value is calculated as:

$$\bar{X}^{\,j} = \frac{1}{n+1}\sum_{i=0}^{n} X_i^{\,j}$$

where $X_i^{\,j}$ is the element in row $i$ and column $j$ of the current data unit, $n+1$ is the number of rows of each data unit, and $\bar{X}^{\,j}$ is the average value of the $j$-th column of the current data unit;
The variance of each column of the data unit is calculated as:

$$\sigma_j^2 = \frac{1}{n+1}\sum_{i=0}^{n}\left(X_i^{\,j}-\bar{X}^{\,j}\right)^2$$

where $\sigma_j^2$ is the variance of the $j$-th column of the current data unit;
The variation amplitude is calculated as:

$$\Delta X^{\,j} = \mathrm{Max}\!\left(X^{\,j}\right) - \mathrm{Min}\!\left(X^{\,j}\right)$$

where $\mathrm{Max}(X^{\,j})$ is the maximum value of the $j$-th column of the current data unit, $\mathrm{Min}(X^{\,j})$ is the minimum value of the $j$-th column of the current data unit, and $\Delta X^{\,j}$ is the variation amplitude of the $j$-th column;
The average crossing rate of each column of the data unit is calculated as:

$$MCR^{\,j} = \frac{1}{n}\sum_{i=0}^{n-1}\Gamma\!\left(\left(X_{i+1}^{\,j}-\bar{X}^{\,j}\right)\left(X_i^{\,j}-\bar{X}^{\,j}\right)<0\right)$$

where $X_{i+1}^{\,j}$ is the element in row $i+1$ and column $j$ of the current data unit, $\Gamma$ is the indicator function (1 when its condition holds, 0 otherwise), and $MCR^{\,j}$ is the average crossing rate of the $j$-th column of the current data unit, i.e. the fraction of consecutive samples that cross the column's average value;
each of the first 6 columns of each data unit thus yields 6 feature values: the average value, variance, maximum value, minimum value, variation amplitude and average crossing rate, so each data unit has 36 statistical features. The statistical features of all data units form a new data set, recorded as the feature data set, with m rows and 36 columns, where m is the number of data units;
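Assuming the source data set has already been sliced into per-second units as in step 2.1, step 2.2 could be sketched as follows (the ordering of the 36 features within a row is an assumption, since the patent does not fix it):

```python
import numpy as np

def unit_features(unit):
    """36 statistical features of one n1 x 8 data unit (first 6 columns)."""
    x = unit[:, :6]
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    hi, lo = x.max(axis=0), x.min(axis=0)
    amp = hi - lo                                  # variation amplitude
    # average crossing rate: fraction of consecutive samples whose
    # deviations from the column mean have opposite signs
    d = x - mean
    mcr = (d[1:] * d[:-1] < 0).mean(axis=0)
    return np.concatenate([mean, var, hi, lo, amp, mcr])  # 6 x 6 = 36

# feature_set = np.stack([unit_features(u) for u in units])  # m x 36
```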
Step 3 comprises:
step 3.1: building a multi-feature convolutional neural network, and determining a network structure;
step 3.2: selecting a network optimizer and training the multi-feature convolutional neural network by using a source data set and a feature data set to obtain a trained multi-feature convolutional neural network model;
step 3.1 comprises:
the network structure consists of 3 parts, and the specific construction method comprises the following steps:
The first part comprises an input layer, two convolution layers and a pooling layer. It performs convolutional feature extraction on the data: its role is to obtain the convolution feature map of the b-th data unit, with b taken from 0 to m in turn. The input of the first part comes from the source data set; each data unit is an $n_1 \times 8$ two-dimensional array, whose first 6 columns are taken as the input of the first part and fed to the input layer. The input layer is followed by the first convolution layer of the first part, which uses 16 convolution kernels of size 3 × 3 with a step size of 1 and a padding of 1. The output size of a convolution layer is calculated as:

$$Z = \frac{W - F + 2P}{S} + 1$$

where $Z$ is the length of the convolution output data, $W$ is the length of the convolution input data, $P$ is the padding, $F$ is the length of the convolution kernel, and $S$ is the step size. By this formula, the output size of the first convolution layer of the first part is $n_1$ × 6 × 16. After this convolution layer, a linear rectification function is used as the activation function, and the activated data are fed into the second convolution layer of the first part, which uses 32 convolution kernels of size 3 × 3 with a step size of 1 and a padding of 1; by the convolution output size formula, its output size is $n_1$ × 6 × 32. After the second convolution layer of the first part, the linear rectification function is again used as the activation function, and the activated data are fed into the pooling layer of the first part, a maximum pooling layer that slides a 2 × 2 rectangular window with a step size of 2 in both the horizontal and vertical directions. The output size of a pooling layer is calculated as:
$$Z' = \frac{W' - F'}{S'} + 1$$

where $Z'$ is the length of the pooling output, $W'$ is the length of the pooling input, $F'$ is the length of the filter, and $S'$ is the step size in the horizontal direction. By this formula, the output size of the pooling layer of the first part is 50 × 3 × 32 (a size which implies $n_1$ = 100), and the output of the pooling layer of the first part is the convolution feature map of the b-th data unit;
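The two size formulas can be checked with a few lines of Python:

```python
def conv_out(w, f=3, p=1, s=1):
    """Length after a convolution: Z = (W - F + 2P) / S + 1."""
    return (w - f + 2 * p) // s + 1

def pool_out(w, f=2, s=2):
    """Length after pooling: Z' = (W' - F') / S' + 1."""
    return (w - f) // s + 1

# With n1 = 100: conv_out(100) == 100, conv_out(6) == 6,
# pool_out(100) == 50, pool_out(6) == 3  ->  50 x 3 x 32 feature maps.
```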
The second part consists of a convolution layer and a pooling layer. The convolution feature map of the b-th data unit obtained from the first part is fused with the convolution feature map of the (b-1)-th data unit and reshaped into an $n_1$ × 96 two-dimensional array named the fully integrated feature map; when b = 0, the convolution feature map of the (b-1)-th data unit is replaced with all-zero data of size 50 × 3 × 32. The fully integrated feature map is fed into the convolution layer of the second part, which uses 6 convolution kernels of size 3 × 3 with a step size of 1 and a padding of 1; by the convolution output size formula, the output size of the convolution layer of the second part is $n_1$ × 96 × 6. After this convolution layer, a linear rectification function is used as the activation function, and the activated data are fed into the pooling layer of the second part, which slides a 2 × 2 rectangular window with a step size of 2 in both the horizontal and vertical directions; by the pooling output size formula, the output size of the pooling layer of the second part is 50 × 48 × 6;
The third part consists of a three-layer fully connected network: an input layer, a hidden layer and an output layer. The input of the third part is formed jointly by the output of the second part and the feature data set: the output of the second part has size 50 × 48 × 6 (14400 values), and the b-th row of the feature data set contributes 36 values; together they are flattened into a 14436 × 1 one-dimensional vector that is fed to the input layer. The hidden layer holds 1024 neurons, and the output layer holds 10 neurons, corresponding to the 10 driving states of the automobile.
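A hedged PyTorch sketch of the three parts, assuming $n_1$ = 100 (as implied by the 50 × 3 × 32 pooling output) and a ReLU between the hidden and output layers, which the text does not specify:

```python
import torch
import torch.nn as nn

class MultiFeatureCNN(nn.Module):
    """Sketch of the three-part multi-feature network described above."""

    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(                     # input: 1 x 100 x 6
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),  # -> 16 x 100 x 6
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), # -> 32 x 100 x 6
            nn.MaxPool2d(2))                            # -> 32 x 50 x 3
        self.part2 = nn.Sequential(                     # input: 1 x 100 x 96
            nn.Conv2d(1, 6, 3, padding=1), nn.ReLU(),   # -> 6 x 100 x 96
            nn.MaxPool2d(2))                            # -> 6 x 50 x 48
        self.fc = nn.Sequential(                        # third part
            nn.Linear(6 * 50 * 48 + 36, 1024), nn.ReLU(),
            nn.Linear(1024, 10))

    def forward(self, unit, prev_map, stats):
        # unit: (batch, 1, 100, 6); stats: (batch, 36);
        # prev_map: feature map of unit b-1, all zeros when b == 0
        cur = self.part1(unit)
        fused = torch.cat([cur, prev_map], dim=1)       # 64 x 50 x 3
        fused = fused.flatten(1).view(-1, 1, 100, 96)   # fully integrated map
        out = torch.cat([self.part2(fused).flatten(1), stats], dim=1)
        return self.fc(out), cur.detach()               # logits + map for b+1
```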
2. The method according to claim 1, characterized in that step 3.2 comprises:
The source data set and the feature data set are divided into a training set and a test set at a ratio of 4:1. During training, each data unit of the source data set, together with the corresponding row of the feature data set, is taken as one training unit; the cross-entropy loss function is used as the loss function, and the Adam optimizer is adopted as the network optimizer. The network is fully trained to obtain the trained multi-feature convolutional neural network model;
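A matching training sketch; the sequential 4:1 split, the batch size of 1 (one training unit per step, preserving the b ordering that the fused feature map relies on) and the epoch count are assumptions beyond what the text states:

```python
import torch

def train(model, units, stats, labels, epochs=50, lr=1e-3):
    """Cross-entropy + Adam training sketch for MultiFeatureCNN."""
    n = int(0.8 * len(units))                 # first 80% as the training set
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        prev = torch.zeros(1, 32, 50, 3)      # stands in for unit b-1 at b = 0
        for b in range(n):                    # one training unit per step
            logits, prev = model(units[b:b+1], prev, stats[b:b+1])
            loss = loss_fn(logits, labels[b:b+1])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```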
Step 4 comprises:
step 4.1: inertial sensor data of the vehicle-mounted smartphone are acquired in real time while the automobile is driving, and the trained model obtained in step 3 is used to classify the real-time data, yielding the current driving state class of the automobile;
step 4.2: a judgment is made on the current driving state obtained in step 4.1. If the driving state is any one of normal driving, parking, normal acceleration, normal deceleration, normal left turn or normal right turn, the driver's current driving state is judged to be good. If it is any one of sharp left turn, sharp right turn, sharp deceleration or sharp acceleration, the driver's current driving state is judged to be bad: a prompt tone is sounded to remind the driver to drive normally, and the event is recorded. Every 24 hours, the number of bad driving events of the driver is counted and uploaded to the background.
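A minimal sketch of this judgment, assuming labels 0-5 are the six normal states and 6-9 the four sharp ones (the order listed in step 1), with beep() and the upload hook as assumed placeholders:

```python
GOOD = {0, 1, 2, 3, 4, 5}   # normal driving ... normal right turn
BAD = {6, 7, 8, 9}          # sharp turns, sharp deceleration/acceleration

def judge(state, bad_log):
    """Return True for a good state; alert and record a bad one.

    bad_log accumulates bad events for the assumed 24-hour background
    tally and upload; beep() is an assumed prompt-tone hook.
    """
    if state in BAD:
        # beep()  # prompt tone reminding the driver to drive normally
        bad_log.append(state)
        return False
    return True
```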