CN116092059B - Neural network-based vehicle networking user driving behavior recognition method and system - Google Patents


Info

Publication number
CN116092059B
CN116092059B CN202211522917.9A
Authority
CN
China
Prior art keywords
internet
driving
vehicles
layer
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211522917.9A
Other languages
Chinese (zh)
Other versions
CN116092059A (en)
Inventor
顾进峰
卢峰
蒋新星
陶健
Current Assignee
Nanjing Tongli Fengda Software Technology Co ltd
Original Assignee
Nanjing Tongli Fengda Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Tongli Fengda Software Technology Co ltd filed Critical Nanjing Tongli Fengda Software Technology Co ltd
Priority to CN202211522917.9A priority Critical patent/CN116092059B/en
Publication of CN116092059A publication Critical patent/CN116092059A/en
Application granted granted Critical
Publication of CN116092059B publication Critical patent/CN116092059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness (context inside of a vehicle, G06V20/59)
    • G06N3/08 Learning methods (computing arrangements based on neural networks, G06N3/02)
    • G06V10/40 Extraction of image or video features
    • G06V10/762 Image or video recognition using pattern recognition or machine learning: clustering, e.g. of similar faces in social networks
    • G06V10/764 Image or video recognition using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition using neural networks
    • Y02T10/40 Engine management systems (road transport; internal combustion engine [ICE] based vehicles)

Abstract

The application discloses a neural-network-based method and system for recognizing the driving behavior of internet of vehicles users, comprising the following steps: clustering internet of vehicles driving data through a clustering optimization algorithm to obtain a plurality of feature vectors; inputting the feature vectors into a base model, where the base model comprises an input layer, a first convolution layer, an encoding unit, a feature extraction layer, a decoding unit, a second convolution layer, a pooling layer and an output layer, the convolution kernel of the first convolution layer is 1×1, the convolution kernel of the second convolution layer is 5×5, and the pooling window of the pooling layer is 3×3; iteratively training the base model and stopping training when the loss function value Loss of the base model reaches its minimum, obtaining a driving behavior recognition model; and inputting internet of vehicles data into the driving behavior recognition model to recognize driving behavior, where driving behavior includes normal behavior and abnormal behavior. The application improves the self-learning capability of the model and can quickly and accurately identify the driving behavior of a driver.

Description

Neural network-based vehicle networking user driving behavior recognition method and system
Technical Field
The application relates to the technical field of internet of vehicles data analysis, and in particular to a neural-network-based method and system for recognizing the driving behavior of internet of vehicles users.
Background
In recent years, with the rapid development of the Chinese economy, living standards have risen continuously, and the automobile has become an indispensable means of transportation in daily life. With the popularization of automobiles, however, road traffic accidents occur frequently. Once an emergency occurs, a driver often finds it difficult to take countermeasures quickly, which may cause serious traffic accidents.
With the development of the Internet, internet of vehicles technology has emerged to improve traffic efficiency and reduce traffic accidents. The concept of the internet of vehicles derives from the internet of things: a running vehicle serves as the information-sensing object, and a new generation of information and communication technology establishes network connections between the vehicle and X (other vehicles, people, roads and service platforms). This raises the overall intelligent driving level of vehicles, provides users with safe, comfortable, intelligent and efficient driving experiences and traffic services, and improves the intelligence of social traffic services. To reduce traffic accidents, some existing techniques therefore perform cluster analysis on internet of vehicles data to judge the driving behavior of a driver, so that abnormal driving behaviors can be found in time.
However, research on the internet of vehicles, both domestically and abroad, is still at an early stage; most deployments are only small closed-loop systems. When the volume of internet of vehicles data grows, it becomes difficult to analyze the data and extract useful information from it, so driving behavior is recognized poorly and a driver's behavior is hard to identify quickly and accurately.
Disclosure of Invention
The present application has been made in view of the above-described problems occurring in the prior art.
Therefore, the application provides a neural-network-based method for recognizing the driving behavior of internet of vehicles users. By building a base model over the internet of vehicles data, it solves the problems of heavy computation and difficult analysis of massive internet of vehicles data and can quickly and accurately recognize a driver's driving behavior.
In order to solve the technical problems, the application provides the following technical scheme: clustering the internet of vehicles driving data through a clustering optimization algorithm to obtain a plurality of feature vectors, where the sampling frequency of the internet of vehicles driving data is greater than 10 Hz; inputting the feature vectors into a base model comprising an input layer, a first convolution layer, an encoding unit, a feature extraction layer, a decoding unit, a second convolution layer, a pooling layer and an output layer, where the convolution kernel size of the first convolution layer is 1×1, the convolution kernel size of the second convolution layer is 5×5, and the pooling window size of the pooling layer is 3×3; iteratively training the base model, stopping training when the loss function value Loss of the base model reaches its minimum, to obtain a driving behavior recognition model; and inputting internet of vehicles data into the driving behavior recognition model to recognize driving behavior, where driving behavior includes normal behavior and abnormal behavior.
As a preferable scheme of the neural network-based vehicle networking user driving behavior recognition method, the internet of vehicles driving data includes: driving mileage, overspeed driving mileage, daytime driving duration, nighttime driving duration, idling duration, acceleration duration, overspeed driving duration, acceleration mean value, deceleration mean value, vehicle speed mean value, idling times per unit mileage, rapid acceleration times per unit mileage, rapid deceleration times per unit mileage, and rapid braking times per unit mileage. Before clustering, the internet of vehicles data must undergo data cleaning and data aggregation, where the data are aggregated by second.
As a preferable scheme of the neural network-based vehicle networking user driving behavior recognition method, the clustering includes: step one: establishing a first feature tree and a first classifier according to the processed internet of vehicles data, removing abnormal triples from the feature tree, and merging two triples if their hypersphere distance is smaller than the cluster radius R; step two: calculating the error rate and the weight of the first classifier on the processed internet of vehicles data; step three: re-assigning weights to the processed internet of vehicles data according to the weight Q; step four: determining a second classifier based on the internet of vehicles data with weight distribution Q, and repeating these steps n times to obtain n classifiers; step five: performing weighted voting on the n classifiers to obtain a classifier H_n(x_n); step six: clustering all triples through the classifier to generate a second feature tree; step seven: using the centroids of all triples of the second feature tree as initial centroid points, clustering all sample points according to distance to obtain a plurality of feature vectors.
As a preferable scheme of the neural network-based vehicle networking user driving behavior recognition method, the base model includes: the encoding unit and the decoding unit, connected in a cascading manner; and the feature extraction layer, which comprises a plurality of base units, at least one unsaturated activation unit and a fully connected layer. The number of base units equals the number of feature vectors; the base units extract the features of each feature vector, assign weights and suppress redundant features. The unsaturated activation unit maps the features output by the base units to the fully connected layer to complete feature extraction.
As a preferable scheme of the neural network-based vehicle networking user driving behavior recognition method, the iterative training comprises: step one: initializing the base model strengthening factor and constructing position coordinates for the ant population from it, where the base model strengthening factor is the abscissa and the ordinate is 0; step two: calculating the pheromone content from the position coordinates, and judging from the pheromone content whether an ant moves; step three: if it moves, updating the ant position and recalculating the pheromone content; if not, executing step one; step four: repeating steps two and three until the pheromone content reaches its maximum, then stopping, obtaining the optimal position and outputting the optimal base model strengthening factor.
As a preferable scheme of the neural network-based vehicle networking user driving behavior recognition method, the iterative training comprises: step one: initializing the learning probability with the rand function, and randomly generating m individuals according to the learning probability; step two: calculating the adaptation values of the m individuals; step three: updating the probability model with the maximum adaptation value; step four: sampling the probability model to generate new individuals, repeating steps two and three k times, and if k = 500, stopping to obtain the optimal individual as the base model strengthening factor.
As a preferable scheme of the neural network-based vehicle networking user driving behavior recognition method, the loss function value Loss is:
Loss = (Y' - Y)L_ls + μL_QU
where L_ls is a loss term, Y' is the expected output value of the base model, Y is the actual output value of the base model, μ is the base model strengthening factor, and L_QU is the quantile loss.
As a preferable scheme of the vehicle networking user driving behavior recognition system based on the neural network, the system comprises: the clustering module, used for clustering the internet of vehicles driving data to obtain a plurality of feature vectors, where the sampling frequency of the internet of vehicles driving data is greater than 10 Hz and the internet of vehicles driving data includes: driving mileage, overspeed driving mileage, daytime driving duration, nighttime driving duration, idling duration, acceleration duration, overspeed driving duration, acceleration mean value, deceleration mean value, vehicle speed mean value, idling times per unit mileage, rapid acceleration times per unit mileage, rapid deceleration times per unit mileage, and rapid braking times per unit mileage; the initial model generation module, used for constructing an initial base model and inputting the feature vectors into the base model; the driving behavior recognition model generation module, used for iteratively training the base model, stopping training when the loss function value Loss of the base model reaches its minimum, and generating a driving behavior recognition model; and the recognition module, used for inputting internet of vehicles data into the driving behavior recognition model to recognize driving behavior, where driving behavior includes normal behavior and abnormal behavior.
As a preferable scheme of the vehicle networking user driving behavior recognition system based on the neural network, the base model comprises an input layer, a first convolution layer, an encoding unit, a feature extraction layer, a decoding unit, a second convolution layer, a pooling layer and an output layer, where the convolution kernel of the first convolution layer is 1×1, the convolution kernel of the second convolution layer is 5×5, and the pooling window of the pooling layer is 3×3.
As a preferable scheme of the vehicle networking user driving behavior recognition system based on the neural network, the clustering module comprises a data cleaning unit and a data aggregation unit; the data cleaning unit deletes missing values from the internet of vehicles data, and the data aggregation unit classifies the internet of vehicles data according to set conditions, where the data are aggregated by second.
The application has the beneficial effects that: by combining a neural network with an optimizing algorithm to build a base model over the internet of vehicles data, the application improves the self-learning capability of the model, solves the problems of heavy computation and difficult analysis of massive internet of vehicles data, and can quickly and accurately identify a driver's driving behavior.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a schematic diagram of the base model structure according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of the iterative training process of the base model according to a first embodiment of the present application;
FIG. 3 is a schematic diagram of the iterative training process of the base model according to a second embodiment of the present application.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present application can be understood in detail, a more particular description of the application, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present application is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the application. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present application have been illustrated and described in detail in the drawings, the cross-sectional view of the device structure is not to scale in the general sense for ease of illustration, and the drawings are merely exemplary and should not be construed as limiting the scope of the application. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
Also in the description of the present application, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to fig. 1 to fig. 2, in a first embodiment of the present application, a method for identifying driving behavior of a user on the internet of vehicles based on a neural network is provided, including:
s1: clustering the internet of vehicles driving data through a clustering optimization algorithm to obtain a plurality of feature vectors, wherein the sampling frequency of the internet of vehicles driving data is more than 10Hz.
The internet of vehicles driving data comprise driving mileage, overspeed driving mileage, daytime driving duration, nighttime driving duration, idling duration, acceleration duration, overspeed driving duration, acceleration mean value, deceleration mean value, vehicle speed mean value, idling times per unit mileage, rapid acceleration times per unit mileage, rapid deceleration times per unit mileage and rapid braking times per unit mileage. It should be noted that the internet of vehicles data may be obtained through corresponding sensors or taken from an existing internet of vehicles data set; this embodiment does not limit the source.
Before clustering, the internet of vehicles data must undergo data cleaning and data aggregation, providing a more accurate data source for the subsequent steps; the data are aggregated by second.
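The cleaning and per-second aggregation step can be sketched minimally in Python; the record fields (`timestamp`, `speed`, `acceleration`) are hypothetical stand-ins for the internet of vehicles fields listed above:

```python
from collections import defaultdict

def clean_and_aggregate(records):
    """Drop records with missing values, then aggregate by whole second.

    `records` is a list of dicts with hypothetical keys 'timestamp'
    (float seconds), 'speed', and 'acceleration'.  Returns a dict
    {second: averaged record} -- a minimal stand-in for the patent's
    cleaning and per-second aggregation step.
    """
    # Data cleaning: discard any record containing a missing (None) value.
    cleaned = [r for r in records if all(v is not None for v in r.values())]

    # Data aggregation: group by integer second, then average each field.
    buckets = defaultdict(list)
    for r in cleaned:
        buckets[int(r["timestamp"])].append(r)

    aggregated = {}
    for second, group in buckets.items():
        aggregated[second] = {
            "speed": sum(g["speed"] for g in group) / len(group),
            "acceleration": sum(g["acceleration"] for g in group) / len(group),
        }
    return aggregated
```

In practice the same grouping would be applied to every field in the driving-data list above; only two fields are shown to keep the sketch short.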
Further, the internet of vehicles driving data are clustered through the clustering optimization algorithm, which specifically comprises the following steps:
step one: establishing a first feature tree and a first classifier k_1(x_1) according to the processed internet of vehicles data, where x_1 is the input; removing abnormal triples from the feature tree, and merging two triples if their hypersphere distance is smaller than the cluster radius R.
step two: calculating the error rate β and weight Q of the first classifier on the processed internet of vehicles data:
β = P(k_t(x_t) ≠ y_t)
Q = log(β(1 - β))
where k_t(x_t) is the t-th classifier and y_t is the expected output of the t-th classifier.
step three: according to the weight Q, re-assigning the weight W to the processed internet of vehicles data:
W = e^(-Qt)
step four: determining a second classifier based on the internet of vehicles data with weight distribution Q, and repeating the above steps n times to obtain n classifiers;
step five: performing weighted voting on the n classifiers to obtain a classifier H_n(x_n);
Step six: clustering all triples through a classifier to generate a second feature tree;
step seven: using the centroids of all triples of the second feature tree as initial centroid points, clustering all sample points according to distance to obtain a plurality of feature vectors.
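The classifier weighting (steps two and three) and the final centroid assignment (step seven) can be sketched in plain Python. The formulas follow the patent text literally; the helper names and the one-dimensional sample data are illustrative only:

```python
import math

def classifier_weight(preds, labels):
    """Error rate and weight of one classifier (step two).

    Implements the stated formulas literally:
        beta = P(k_t(x_t) != y_t)      # error rate
        Q    = log(beta * (1 - beta))  # classifier weight
    Assumes 0 < beta < 1 so the logarithm is defined.
    """
    beta = sum(p != y for p, y in zip(preds, labels)) / len(labels)
    Q = math.log(beta * (1.0 - beta))
    return beta, Q

def sample_weight(Q, t):
    """Step three: re-assigned sample weight W = e^(-Q*t)."""
    return math.exp(-Q * t)

def assign_to_centroids(points, centroids):
    """Step seven: cluster 1-D sample points by distance to the
    centroids taken from the second feature tree's triples."""
    clusters = {i: [] for i in range(len(centroids))}
    for x in points:
        i = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
        clusters[i].append(x)
    return clusters
```

The feature-tree construction itself is omitted here; only the weighting arithmetic and the distance-based assignment are shown.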
Preferably, by optimizing the clustering algorithm, this scheme saves a large amount of memory and accelerates the clustering of massive internet of vehicles data.
S2: a plurality of feature vectors are input to the base model.
Referring to fig. 1, the base model includes an input layer, a first convolution layer, an encoding unit, a feature extraction layer, a decoding unit, a second convolution layer, a pooling layer and an output layer; the convolution kernel size of the first convolution layer is 1×1, the convolution kernel size of the second convolution layer is 5×5, and the pooling window size of the pooling layer is 3×3.
The input layer is connected with the first convolution layer, the first convolution layer is connected with the second convolution layer, the second convolution layer is connected with the coding unit, the coding unit is connected with the decoding unit, the decoding unit is connected with the feature extraction layer, the feature extraction layer is connected with the pooling layer, and the pooling layer is connected with the output layer.
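Given the stated kernel and pooling sizes, the spatial dimension changes through the convolution and pooling stages can be traced with the standard output-size formula; the input width, strides and padding below are assumptions, since the text does not specify them:

```python
def conv_out(n, k, stride=1, padding=0):
    """Output spatial size of a convolution layer (floor formula)."""
    return (n + 2 * padding - k) // stride + 1

def pool_out(n, k, stride):
    """Output spatial size of a pooling layer."""
    return (n - k) // stride + 1

# Trace a hypothetical 32-wide input through the sized stages:
sizes = [32]
sizes.append(conv_out(sizes[-1], 1))     # 1x1 conv keeps spatial size
sizes.append(conv_out(sizes[-1], 5))     # 5x5 conv, stride 1, no padding
sizes.append(pool_out(sizes[-1], 3, 3))  # 3x3 pooling window, stride 3
```

The encoding, decoding and feature-extraction stages are omitted from the trace, as they do not have stated spatial parameters.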
The feature extraction layer comprises a plurality of base units, at least one unsaturated activation unit and a fully connected layer. The number of base units equals the number of feature vectors; the base units extract the features of each feature vector, assign weights and suppress redundant features, improving the accuracy of feature extraction. The unsaturated activation unit maps the features output by the base units to the fully connected layer to complete feature extraction.
The unsaturated activation unit may be a ReLU, Leaky ReLU, or PReLU activation function; this embodiment does not limit the choice.
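The three candidate unsaturated activations differ only in how they treat negative inputs; a scalar sketch:

```python
def relu(x):
    # Unsaturated in the positive direction: identity for x > 0, else 0.
    return x if x > 0 else 0.0

def leaky_relu(x, alpha=0.01):
    # Small fixed negative slope alpha avoids "dead" units.
    return x if x > 0 else alpha * x

def prelu(x, a):
    # Same shape as Leaky ReLU, but the slope `a` is a learned parameter.
    return x if x > 0 else a * x
```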
Preferably, the decoder and encoder of this embodiment are connected in cascade, which facilitates gradient flow and eases training.
S3: and iteratively training the base model, and stopping training when the Loss function value Loss of the base model is minimum, so as to obtain the driving behavior recognition model.
The Loss function value Loss is:
Loss = (Y' - Y)L_ls + μL_QU
where L_ls is a loss term, Y' is the expected output value of the base model, Y is the actual output value of the base model, μ is the base model strengthening factor, and L_QU is the quantile loss.
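A small numeric sketch of the loss. The text does not give the exact forms of L_ls or L_QU, so below L_ls is passed in as a plain number and L_QU uses the standard pinball (quantile) loss as an assumed definition:

```python
def quantile_loss(y_true, y_pred, q=0.5):
    """L_QU: pinball (quantile) loss at quantile q -- an assumed
    standard definition, since the source does not spell it out."""
    e = y_true - y_pred
    return max(q * e, (q - 1.0) * e)

def total_loss(y_expected, y_actual, l_ls, mu, q=0.5):
    """Loss = (Y' - Y) * L_ls + mu * L_QU, as stated in the text.
    `l_ls` stands in for the otherwise-undefined L_ls term."""
    return (y_expected - y_actual) * l_ls + mu * quantile_loss(
        y_expected, y_actual, q
    )
```

For example, with Y' = 2, Y = 1, L_ls = 0.3, and μ = 0.2, the median-quantile L_QU is 0.5 and the total loss is 1·0.3 + 0.2·0.5 = 0.4.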
It should be noted that the ant colony algorithm (Ant Colony Optimization, ACO) is a bionic intelligent optimization algorithm inspired by the foraging of ants: ants leave pheromones on the paths they take while searching for food, and other ants in the colony perceive these pheromones and move toward places of high pheromone concentration, forming a positive-feedback mechanism; after a period of time, the colony settles on an optimal path to the food source.
Referring to fig. 2, in this embodiment, by combining with the ant colony algorithm, iterative training is performed on the base model to obtain an optimal solution of the loss function, and specific iterative training steps are as follows:
step one: initializing the base model strengthening factor and constructing position coordinates for the ant population from it, where the base model strengthening factor is the abscissa and the ordinate is 0;
step two: calculating the pheromone content according to the position coordinates, and judging from the pheromone content whether an ant moves;
step three: if it moves, updating the ant position and recalculating the pheromone content; if not, executing step one;
step four: repeating steps two and three until the pheromone content reaches its maximum, then stopping, obtaining the optimal position and outputting the optimal base model strengthening factor, so that the loss function value Loss is minimal, thereby obtaining the driving behavior recognition model.
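The four steps above can be sketched as a heavily simplified one-dimensional ant-colony search for the strengthening factor μ: each ant sits at (μ, 0), its pheromone content is taken here as 1/(1 + Loss(μ)) (an assumed mapping, since the text does not give one), and an ant moves only when the move raises its pheromone:

```python
import random

def aco_search_mu(loss_fn, n_ants=10, n_iter=200, step=0.1, seed=0):
    """Simplified ant-colony-style search for the strengthening factor mu.

    Each ant occupies an abscissa mu (ordinate fixed at 0).  Pheromone
    content is modeled as 1 / (1 + loss(mu)), so lower loss means more
    pheromone; an ant moves to a nearby random position only when that
    raises its pheromone, and the best position is returned at the end.
    """
    rng = random.Random(seed)

    def pheromone(mu):
        return 1.0 / (1.0 + loss_fn(mu))

    ants = [rng.uniform(-5.0, 5.0) for _ in range(n_ants)]  # abscissae
    for _ in range(n_iter):
        for i, mu in enumerate(ants):
            candidate = mu + rng.uniform(-step, step)
            if pheromone(candidate) > pheromone(mu):  # move only if better
                ants[i] = candidate
    return max(ants, key=pheromone)  # position with highest pheromone

# Hypothetical loss with a known minimum at mu = 2:
best = aco_search_mu(lambda mu: (mu - 2.0) ** 2)
```

This omits the pheromone evaporation and shared-trail mechanics of a full ACO; it only illustrates the move-if-pheromone-rises loop of steps two to four.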
S4: inputting the internet of vehicles data into a driving behavior recognition model to realize recognition of driving behaviors; wherein the driving behavior includes normal behavior and abnormal behavior.
Example 2
Referring to fig. 3, a second embodiment of the present application, which differs from the first, provides another method of iteratively training the base model, comprising:
step one: initializing the learning probability with the rand function, and randomly generating m individuals according to the learning probability;
step two: calculating the adaptation values of m individuals;
step three: updating the probability model P_{m+1}(x) with the maximum adaptation value;
the probability model P_{m+1}(x) is:
P_{m+1}(x) = (1 - a)P_m(x) + (a/F)∑x_m
where a is the learning probability, P_m(x) is the natural selection probability of the m individuals, x is an individual, and F is the maximum adaptation value;
step four: sampling the probability model to generate new individuals, and repeating steps two and three k times; if k = 500, stopping to obtain the optimal individual as the base model strengthening factor, so that the loss function value Loss is minimal, thereby obtaining the driving behavior recognition model.
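Step three's update rule can be sketched directly. The formula is applied literally as written, with individuals encoded as bit vectors and p_m as a per-bit probability vector (an assumed encoding); note that a practical implementation would renormalize, since (a/F)·∑x_m need not keep probabilities within [0, 1]:

```python
def update_probability_model(p_m, individuals, fitness, a):
    """Step three: update the probability model with the stated rule
        P_{m+1}(x) = (1 - a) * P_m(x) + (a / F) * sum(x_m)
    where a is the learning probability and F the maximum adaptation
    value.  `individuals` are bit vectors sampled from the model and
    `p_m` is the current per-bit probability vector.
    """
    F = max(fitness)  # maximum adaptation value
    sums = [sum(ind[j] for ind in individuals) for j in range(len(p_m))]
    return [(1 - a) * p + (a / F) * s for p, s in zip(p_m, sums)]
```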
Example 3
Unlike the first embodiment, this embodiment provides a neural network-based driving behavior recognition system for a user of the internet of vehicles, including,
the clustering module, used for clustering the internet of vehicles driving data to obtain a plurality of feature vectors, where the sampling frequency of the internet of vehicles driving data is greater than 10 Hz and the internet of vehicles driving data includes: driving mileage, overspeed driving mileage, daytime driving duration, nighttime driving duration, idling duration, acceleration duration, overspeed driving duration, acceleration mean value, deceleration mean value, vehicle speed mean value, idling times per unit mileage, rapid acceleration times per unit mileage, rapid deceleration times per unit mileage and rapid braking times per unit mileage. The clustering module comprises a data cleaning unit and a data aggregation unit; the data cleaning unit deletes missing values from the internet of vehicles data, and the data aggregation unit classifies the internet of vehicles data according to set conditions, where the data are aggregated by second.
And the initial model generation module is used for constructing an initial base model and inputting a plurality of feature vectors into the base model. The base model comprises an input layer, a first convolution layer, an encoding unit, a feature extraction layer, a decoding unit, a second convolution layer, a pooling layer and an output layer, wherein the convolution kernel of the first convolution layer is 1×1, the convolution kernel of the second convolution layer is 5×5, and the pooling window of the pooling layer is 3×3.
And the driving behavior recognition model generation module is used for iteratively training the base model, stopping training when the Loss function value Loss of the base model reaches the minimum, and generating a driving behavior recognition model.
The recognition module is used for inputting the internet of vehicles data into the driving behavior recognition model to realize recognition of driving behaviors; wherein the driving behavior includes normal behavior and abnormal behavior.
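As an illustrative aid, the spatial dimensions flowing through the base model's layer stack (1×1 convolution, encoding unit, feature extraction layer, decoding unit, 5×5 convolution, 3×3 pooling) can be traced with a short shape calculation. Stride 1, no padding, and shape-preserving encoder/decoder/extraction layers are assumptions here; the embodiment does not state them.

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

def base_model_shapes(h, w):
    """Trace height/width through the embodiment's layer stack,
    assuming stride 1 and zero padding (not specified in the text)."""
    shapes = [("input", h, w)]
    h, w = conv_out(h, 1), conv_out(w, 1)           # first conv, 1x1 kernel
    shapes.append(("conv1 1x1", h, w))
    shapes.append(("encode/extract/decode", h, w))  # assumed shape-preserving
    h, w = conv_out(h, 5), conv_out(w, 5)           # second conv, 5x5 kernel
    shapes.append(("conv2 5x5", h, w))
    h, w = conv_out(h, 3), conv_out(w, 3)           # pooling, 3x3 window
    shapes.append(("pool 3x3", h, w))
    return shapes

shapes = base_model_shapes(32, 32)
```

Under these assumptions a 32×32 input leaves the pooling layer as 26×26: the 1×1 convolution preserves size, the 5×5 convolution removes 4, and the 3×3 pooling window removes 2 more.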
It should be appreciated that embodiments of the application may be implemented or realized by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, in accordance with the methods and drawings described in the specific embodiments. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Furthermore, the operations of the processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described herein may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, collectively executing on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the application may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the application described herein includes these and other different types of non-transitory computer-readable storage media. The application also includes the computer itself when programmed according to the methods and techniques of the present application. The computer program can be applied to the input data to perform the functions described herein, thereby converting the input data to generate output data that is stored to the non-volatile memory. The output information may also be applied to one or more output devices such as a display. In a preferred embodiment of the application, the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects produced on a display.
As used in this disclosure, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, the components may be, but are not limited to: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Furthermore, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above embodiments are only for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present application may be modified or substituted without departing from the spirit and scope of the technical solution of the present application, which is intended to be covered in the scope of the claims of the present application.

Claims (6)

1. The method for identifying the driving behavior of the internet of vehicles user based on the neural network is characterized by comprising the following steps of:
clustering the internet of vehicles driving data through a clustering optimization algorithm to obtain a plurality of feature vectors, wherein the sampling frequency of the internet of vehicles driving data is more than 10Hz;
inputting a plurality of feature vectors into a base model, wherein the base model comprises an input layer, a first convolution layer, an encoding unit, a feature extraction layer, a decoding unit, a second convolution layer, a pooling layer and an output layer, wherein the convolution kernel size of the first convolution layer is 1×1, the convolution kernel size of the second convolution layer is 5×5, and the pooling window size of the pooling layer is 3×3;
iteratively training the base model, stopping training when the Loss function value Loss of the base model reaches the minimum, and obtaining a driving behavior recognition model;
inputting the driving data of the Internet of vehicles into a driving behavior recognition model to realize recognition of driving behaviors; wherein the driving behavior includes normal behavior and abnormal behavior;
wherein the base model comprises:
the encoding unit and the decoding unit are connected in a cascading manner;
the feature extraction layer comprises a plurality of base units, at least one unsaturated activation unit and a full connection layer; the number of the base units is the same as that of the feature vectors, and the base units are used for extracting the features of each feature vector, giving weight and inhibiting redundant features; the unsaturated activating unit maps the characteristics output by the base unit to the full-connection layer to finish the characteristic extraction;
the clustering includes:
step one: establishing a first feature tree and a first classifier according to the processed internet of vehicles driving data, and removing abnormal triplets from the feature tree; if the hypersphere distance between two triplets is smaller than the cluster radius R, merging the two triplets;
step two: calculating the error rate and the weight Q of the first classifier on the processed internet of vehicles driving data;
step three: reassigning weights to the processed internet of vehicles driving data according to the weight Q;
step four: determining a second classifier based on the internet of vehicles driving data with weight distribution Q, and repeatedly executing the above steps n times to obtain n classifiers;
step five: carrying out weighted voting on the n classifiers to obtain a classifier H_n(x_n);
step six: clustering all triplets through the classifier to generate a second feature tree;
step seven: using the centroids of all triplets of the second feature tree as initial centroid points, and clustering all sample points according to distance to obtain a plurality of feature vectors;
the iterative training comprises:
step one: initializing a base model strengthening factor, and constructing the position coordinates of an ant population from the base model strengthening factor, with the base model strengthening factor as the abscissa and 0 as the ordinate;
step two: calculating the pheromone content according to the position coordinates, and judging whether each ant moves according to the pheromone content;
step three: if an ant moves, updating its position and recalculating the pheromone content; if not, executing step one;
step four: repeating steps two to three until the pheromone content reaches its maximum, then stopping execution, obtaining the optimal position, and outputting the optimal base model strengthening factor.
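The ant-colony search of claim 1, steps one to four, can be sketched as follows. This is an illustrative sketch only: the pheromone function, step size, ant count, and iteration limit are assumptions, and each ant is simplified to an uphill-only move rule.

```python
import random

def ant_search(pheromone, n_ants=10, steps=200, seed=0):
    """Sketch of the claimed ant-colony search for the strengthening
    factor: each ant sits at (factor, 0), moves only when a trial
    position carries more pheromone, and the position with maximum
    pheromone yields the optimal factor. Parameters are illustrative."""
    rng = random.Random(seed)
    # step one: positions (factor, 0) built from random initial factors
    ants = [(rng.uniform(-1.0, 1.0), 0.0) for _ in range(n_ants)]
    for _ in range(steps):
        for i, (x, y) in enumerate(ants):
            trial = x + rng.gauss(0, 0.05)
            # steps two/three: move only if the pheromone content increases
            if pheromone(trial) > pheromone(x):
                ants[i] = (trial, y)
    # step four: position with maximum pheromone -> optimal factor
    return max(ants, key=lambda p: pheromone(p[0]))[0]

# toy pheromone field peaking at factor = 0.5
best = ant_search(lambda x: -(x - 0.5) ** 2)
```

With the toy pheromone field above, the returned factor converges toward the peak at 0.5.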
2. The neural network-based internet of vehicles user driving behavior recognition method of claim 1, wherein the internet of vehicles driving data comprises: driving mileage, overspeed driving mileage, daytime driving time, nighttime driving time, idling time, acceleration time, overspeed driving time, acceleration average value, deceleration average value, vehicle speed average value, unit mileage idling times, unit mileage rapid acceleration times, unit mileage rapid deceleration times and unit mileage rapid braking times;
before clustering, data cleaning and data aggregation are needed for the Internet of vehicles driving data, wherein the Internet of vehicles driving data are subjected to data aggregation according to seconds.
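As an illustrative sketch of the preprocessing in claim 2 (data cleaning, then aggregation by the second), the fragment below drops records with missing values and averages readings that fall in the same second. The record layout (timestamp, speed, acceleration) is an assumed example, not the embodiment's exact schema.

```python
from collections import defaultdict

def clean_and_aggregate(records):
    """Drop records containing missing values (data cleaning unit),
    then average readings second by second (data aggregation unit)."""
    cleaned = [r for r in records
               if all(v is not None for v in r.values())]
    buckets = defaultdict(list)
    for r in cleaned:
        buckets[int(r["t"])].append(r)   # aggregate by whole second
    return {
        sec: {"speed": sum(x["speed"] for x in rs) / len(rs),
              "accel": sum(x["accel"] for x in rs) / len(rs)}
        for sec, rs in buckets.items()
    }

raw = [
    {"t": 0.0, "speed": 50.0, "accel": 0.2},
    {"t": 0.5, "speed": 52.0, "accel": None},   # missing value -> dropped
    {"t": 0.9, "speed": 54.0, "accel": 0.4},
    {"t": 1.1, "speed": 55.0, "accel": 0.1},
]
per_second = clean_and_aggregate(raw)
```

The four sub-second samples collapse to two per-second records, with the incomplete sample removed before aggregation.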
3. The neural network-based driving behavior recognition method of a user of the internet of vehicles according to claim 2, wherein the Loss function value Loss includes:
Loss = (Y' - Y)·L_ls + µ·L_QU
where L_ls is the multi-class cross-entropy function, Y' is the expected output value of the base model, Y is the actual output value of the base model, µ is the base model strengthening factor, and L_QU is the quantile loss.
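The loss of claim 3 can be sketched numerically. The claim does not spell out how the scalar difference (Y' - Y) relates to the class probabilities fed to the cross-entropy term, so the inputs below are illustrative assumptions.

```python
import math

def cross_entropy(y_true, y_pred):
    """Multi-class cross-entropy L_ls over a one-hot target."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

def quantile_loss(y_true, y_pred, q=0.5):
    """Quantile (pinball) loss L_QU at quantile q."""
    e = y_true - y_pred
    return max(q * e, (q - 1) * e)

def claim_loss(y_expected, y_actual, y_true, y_pred, mu=0.1):
    """Sketch of Loss = (Y' - Y)·L_ls + mu·L_QU from the claim;
    mu (the base model strengthening factor) is an assumed value."""
    return (y_expected - y_actual) * cross_entropy(y_true, y_pred) \
        + mu * quantile_loss(y_expected, y_actual)

loss = claim_loss(1.0, 0.8, [1, 0], [0.9, 0.1])
```

With these inputs, Loss = 0.2·(-ln 0.9) + 0.1·0.1 ≈ 0.0311.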
4. The method for identifying the driving behavior of the internet of vehicles user based on the neural network is characterized by comprising the following steps of:
clustering the internet of vehicles driving data through a clustering optimization algorithm to obtain a plurality of feature vectors, wherein the sampling frequency of the internet of vehicles driving data is more than 10Hz;
inputting a plurality of feature vectors into a base model, wherein the base model comprises an input layer, a first convolution layer, an encoding unit, a feature extraction layer, a decoding unit, a second convolution layer, a pooling layer and an output layer, wherein the convolution kernel size of the first convolution layer is 1×1, the convolution kernel size of the second convolution layer is 5×5, and the pooling window size of the pooling layer is 3×3;
iteratively training the base model, stopping training when the Loss function value Loss of the base model reaches the minimum, and obtaining a driving behavior recognition model;
inputting the driving data of the Internet of vehicles into a driving behavior recognition model to realize recognition of driving behaviors; wherein the driving behavior includes normal behavior and abnormal behavior;
wherein the base model comprises:
the encoding unit and the decoding unit are connected in a cascading manner;
the feature extraction layer comprises a plurality of base units, at least one unsaturated activation unit and a full connection layer; the number of the base units is the same as that of the feature vectors, and the base units are used for extracting the features of each feature vector, giving weight and inhibiting redundant features; the unsaturated activating unit maps the characteristics output by the base unit to the full-connection layer to finish the characteristic extraction;
the clustering includes:
step one: establishing a first feature tree and a first classifier according to the processed internet of vehicles driving data, and removing abnormal triplets from the feature tree; if the hypersphere distance between two triplets is smaller than the cluster radius R, merging the two triplets;
step two: calculating the error rate and the weight Q of the first classifier on the processed internet of vehicles driving data;
step three: reassigning weights to the processed internet of vehicles driving data according to the weight Q;
step four: determining a second classifier based on the internet of vehicles driving data with weight distribution Q, and repeatedly executing the above steps n times to obtain n classifiers;
step five: carrying out weighted voting on the n classifiers to obtain a classifier H_n(x_n);
step six: clustering all triplets through the classifier to generate a second feature tree;
step seven: using the centroids of all triplets of the second feature tree as initial centroid points, and clustering all sample points according to distance to obtain a plurality of feature vectors;
the iterative training comprises:
step one: initializing the learning probability with the rand function, and randomly generating m individuals according to the learning probability;
step two: calculating the fitness values of the m individuals;
step three: updating the probability model with the maximum fitness value;
step four: sampling the probability model to generate new individuals, and repeatedly executing steps two to three k times; when k=500, stopping execution to obtain the optimal individual as the base model strengthening factor.
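The triplet merge rule in step one of the clustering (claims 1 and 4) can be illustrated with clustering-feature triplets of the form (N, LS, SS) — sample count, linear sum, squared sum. This is an illustrative sketch: one-dimensional samples and the specific radius are assumptions, and the claims do not fix the triplet representation.

```python
def merge_triplets(t1, t2, radius):
    """Merge two clustering-feature triplets (N, LS, SS) when the
    distance between their centroids is smaller than the cluster
    radius R, as in step one of the claimed clustering; otherwise
    keep them separate. One-dimensional samples for brevity."""
    n1, ls1, ss1 = t1
    n2, ls2, ss2 = t2
    centroid_distance = abs(ls1 / n1 - ls2 / n2)
    if centroid_distance < radius:
        return (n1 + n2, ls1 + ls2, ss1 + ss2)   # merged triplet
    return None                                   # distance >= R: no merge

near = merge_triplets((2, 10.0, 52.0), (3, 16.5, 91.0), radius=1.0)
far = merge_triplets((2, 10.0, 52.0), (3, 30.0, 301.0), radius=1.0)
```

Here the first pair (centroids 5.0 and 5.5) merges, while the second pair (centroids 5.0 and 10.0) stays separate.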
5. The neural network-based internet of vehicles user driving behavior recognition method of claim 4, wherein the internet of vehicles driving data comprises: driving mileage, overspeed driving mileage, daytime driving time, nighttime driving time, idling time, acceleration time, overspeed driving time, acceleration average value, deceleration average value, vehicle speed average value, unit mileage idling times, unit mileage rapid acceleration times, unit mileage rapid deceleration times and unit mileage rapid braking times;
before clustering, data cleaning and data aggregation are needed for the Internet of vehicles driving data, wherein the Internet of vehicles driving data are subjected to data aggregation according to seconds.
6. The neural network-based driving behavior recognition method of a vehicle networking user according to claim 5, wherein the Loss function value Loss comprises:
Loss = (Y' - Y)·L_ls + µ·L_QU
where L_ls is the multi-class cross-entropy function, Y' is the expected output value of the base model, Y is the actual output value of the base model, µ is the base model strengthening factor, and L_QU is the quantile loss.
CN202211522917.9A 2022-11-30 2022-11-30 Neural network-based vehicle networking user driving behavior recognition method and system Active CN116092059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211522917.9A CN116092059B (en) 2022-11-30 2022-11-30 Neural network-based vehicle networking user driving behavior recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211522917.9A CN116092059B (en) 2022-11-30 2022-11-30 Neural network-based vehicle networking user driving behavior recognition method and system

Publications (2)

Publication Number Publication Date
CN116092059A CN116092059A (en) 2023-05-09
CN116092059B true CN116092059B (en) 2023-10-20

Family

ID=86212709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211522917.9A Active CN116092059B (en) 2022-11-30 2022-11-30 Neural network-based vehicle networking user driving behavior recognition method and system

Country Status (1)

Country Link
CN (1) CN116092059B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169421A (en) * 2017-04-20 2017-09-15 华南理工大学 A kind of car steering scene objects detection method based on depth convolutional neural networks
CN108875812A (en) * 2018-06-01 2018-11-23 宁波工程学院 A kind of driving behavior classification method based on branch's convolutional neural networks
CN109150830A (en) * 2018-07-11 2019-01-04 浙江理工大学 A kind of multilevel intrusion detection method based on support vector machines and probabilistic neural network
WO2019056471A1 (en) * 2017-09-19 2019-03-28 平安科技(深圳)有限公司 Driving model training method, driver recognition method and apparatus, device, and medium
WO2020169052A1 (en) * 2019-02-21 2020-08-27 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for driving condition identification
CN112766373A (en) * 2021-01-19 2021-05-07 汉纳森(厦门)数据股份有限公司 Driving behavior analysis method based on Internet of vehicles
KR20220031249A (en) * 2020-09-04 2022-03-11 인하대학교 산학협력단 Lightweight Driver Behavior Identification Model with Sparse Learning on In-vehicle CAN-BUS Sensor Data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10198693B2 (en) * 2016-10-24 2019-02-05 International Business Machines Corporation Method of effective driving behavior extraction using deep learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169421A (en) * 2017-04-20 2017-09-15 华南理工大学 A kind of car steering scene objects detection method based on depth convolutional neural networks
WO2019056471A1 (en) * 2017-09-19 2019-03-28 平安科技(深圳)有限公司 Driving model training method, driver recognition method and apparatus, device, and medium
CN108875812A (en) * 2018-06-01 2018-11-23 宁波工程学院 A kind of driving behavior classification method based on branch's convolutional neural networks
CN109150830A (en) * 2018-07-11 2019-01-04 浙江理工大学 A kind of multilevel intrusion detection method based on support vector machines and probabilistic neural network
WO2020169052A1 (en) * 2019-02-21 2020-08-27 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for driving condition identification
KR20220031249A (en) * 2020-09-04 2022-03-11 인하대학교 산학협력단 Lightweight Driver Behavior Identification Model with Sparse Learning on In-vehicle CAN-BUS Sensor Data
CN112766373A (en) * 2021-01-19 2021-05-07 汉纳森(厦门)数据股份有限公司 Driving behavior analysis method based on Internet of vehicles

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application Research of SF-CNN in Driving Behavior Recognition; Wang Zhongmin; Zhang Yao; Heng Xia; Computer Engineering and Applications (11); 133-137+165 *
Driving Behavior Recognition Based on One-Dimensional Convolutional Neural Network and Denoising Autoencoder; Yang Yunkai; Fan Wenbing; Peng Dongxu; Computer Applications and Software (08); 177-182 *

Also Published As

Publication number Publication date
CN116092059A (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN108550259B (en) Road congestion judging method, terminal device and computer readable storage medium
CN111291678B (en) Face image clustering method and device based on multi-feature fusion
CN111178452B (en) Driving risk identification method, electronic device and readable storage medium
CN114930352A (en) Method for training image classification model
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN103942568A (en) Sorting method based on non-supervision feature selection
CN108021908B (en) Face age group identification method and device, computer device and readable storage medium
CN109492748B (en) Method for establishing medium-and-long-term load prediction model of power system based on convolutional neural network
WO2021179818A1 (en) Travel state recognition method and apparatus, and terminal and storage medium
CN117157678A (en) Method and system for graph-based panorama segmentation
CN112529638B (en) Service demand dynamic prediction method and system based on user classification and deep learning
CN108491859A (en) The recognition methods of driving behavior heterogeneity feature based on automatic coding machine
CN112905997B (en) Method, device and system for detecting poisoning attack facing deep learning model
CN110378397A (en) A kind of driving style recognition methods and device
CN106506528A (en) A kind of Network Safety Analysis system under big data environment
CN112287014A (en) Product information visualization processing method and device and computer equipment
CN116092059B (en) Neural network-based vehicle networking user driving behavior recognition method and system
CN114565092A (en) Neural network structure determining method and device
CN114065838B (en) Low-light obstacle detection method, system, terminal and storage medium
WO2022222228A1 (en) Method and apparatus for recognizing bad textual information, and electronic device and storage medium
Wang et al. A data-driven estimation of driving style using deep clustering
CN112529637B (en) Service demand dynamic prediction method and system based on context awareness
CN115114992A (en) Method, device and equipment for training classification model and storage medium
Zhou et al. An intelligent model validation method based on ECOC SVM
CN108427967B (en) Real-time image clustering method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant