CN113537002B - Driving environment evaluation method and device based on dual-mode neural network model - Google Patents


Info

Publication number
CN113537002B
Authority
CN
China
Prior art keywords
dimensional data; subset; driving environment; dimensional; dual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110747135.4A
Other languages
Chinese (zh)
Other versions
CN113537002A (en)
Inventor
李松
李玉
刘近平
Current Assignee
Anyang Institute of Technology
Original Assignee
Anyang Institute of Technology
Priority date
Filing date
Publication date
Application filed by Anyang Institute of Technology
Priority to CN202110747135.4A
Publication of CN113537002A
Application granted
Publication of CN113537002B
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods


Abstract

The driving environment evaluation method and device based on the dual-mode neural network model acquire multiple sets of coordinate sequences and multiple sets of image sequences of different surrounding driving vehicles and normalize data of different dimensions separately, which prevents outliers from corrupting the data processing and improves the accuracy of subsequent recognition. A dual-mode neural network model is constructed to perform targeted joint recognition on data of different dimensions, improving recognition accuracy. By designing a nonlinear function as the excitation function of the dual-mode neural network model, the model gains the ability to classify nonlinear data sets, which improves recognition accuracy and greatly improves recognition efficiency. The state of the driving environment is predicted by the dual-mode neural network, realizing an intelligent driving-assistance function.

Description

Driving environment evaluation method and device based on dual-mode neural network model
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to a driving environment evaluation method and device based on a dual-mode neural network model.
Background
Automatic driving is an active field of recent industrial research. An automatic driving system relies on the coordination of information acquisition and analysis subsystems such as machine learning, image processing, radar, and positioning, so that the system can automatically and safely control a motor vehicle without active human operation. The driving environment is a significant factor in automatic driving and has attracted wide attention.
In the prior art, an automatic driving system relies on the coordination of machine learning, image processing, and various signal and data acquisition devices, so that the system can automatically and safely control a motor vehicle without active human operation. The neural network is a key technology for automatic driving and driver-assistance tasks. It is a computational model formed by a large number of interconnected nodes (called neurons). Each node represents a particular output function, called the excitation function; each connection between two nodes carries a weighted value, called a weight, applied to the signal passing through the connection. The weights obtained by training a neural network are analogous to human memory. The network as a whole has multiple inputs and multiple outputs; the neurons directly connected to the inputs form the input layer, the neurons directly connected to the outputs form the output layer, and the neurons between the input and output layers are collectively called hidden layers. When analyzing the driving environment, driving images and driving data are input into the neural network model in sequence; the model extracts features in input order for analysis and finally outputs whether the driving environment is safe.
Because of the diversity of driving vehicles, there are many types of driving images and driving data, often of different dimensions. The neural network model must continually adapt to the data characteristics of each dimension while analyzing and extracting features, which is inefficient. Moreover, data of different dimensions readily produce outliers; outlier data are nonlinear, and existing neural network models have insufficient ability to recognize nonlinear data, so the evaluation of whether the driving environment is safe is inaccurate.
Disclosure of Invention
The invention provides a driving environment evaluation method and device based on a dual-mode neural network model, aiming to improve the accuracy and efficiency of driving environment evaluation. The specific technical scheme is as follows.
In a first aspect, the driving environment assessment method based on the dual-mode neural network model provided by the invention comprises the following steps:
acquiring a plurality of sets of coordinate sequences and a plurality of sets of image sequences of surrounding driving vehicles;
the image sequence is two-dimensional data, and the coordinate sequence is one-dimensional data;
forming a first subset by the multiple sets of coordinate sequences, and forming a second subset by the multiple sets of image sequences;
wherein each element in the first subset is a one-dimensional vector and each element in the second subset is a two-dimensional matrix;
normalizing each vector in the first subset and each two-dimensional matrix in the second subset according to the data dimension so as to map them into a fixed range, and obtaining a normalized first subset and a normalized second subset;
constructing a dual-mode neural network model;
the dual-mode neural network model comprises a one-dimensional data sub-network and a two-dimensional data sub-network, the last hidden layer of the one-dimensional data sub-network and the last hidden layer of the two-dimensional data sub-network are fully connected with an output layer, an input layer of the one-dimensional data sub-network is independent of an input layer of the two-dimensional data sub-network, the hidden layer of the one-dimensional data sub-network is independent of the hidden layer of the two-dimensional data sub-network, and an excitation function of the dual-mode neural network model is a nonlinear function;
inputting the first subset into an input layer of a trained one-dimensional data subnetwork, and inputting each element in the second subset into another input layer of the trained two-dimensional data subnetwork, so that the two-dimensional data subnetwork and the one-dimensional data subnetwork are matched with each other for recognition, and an evaluation result of a driving environment is output;
and determining whether the driving environment of the self-driving vehicle is safe or not based on the evaluation result of the driving environment.
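The steps of the first aspect can be sketched end-to-end in NumPy, with random placeholder weights standing in for the trained sub-networks (all names, sizes, and the three-state output are illustrative assumptions, not the patent's trained model):

```python
import numpy as np

def evaluate_driving_environment(coord_seqs, image_seqs, threshold=0.5):
    """Minimal sketch of the dual-mode pipeline. The two branch networks are
    replaced by flattening plus a random linear map; real use would substitute
    the trained one-dimensional and two-dimensional sub-networks."""
    rng = np.random.default_rng(0)
    # S2: first subset = one-dimensional vectors, second subset = 2-D matrices
    first = [np.asarray(a, dtype=float) for a in coord_seqs]
    second = [np.asarray(d, dtype=float) for d in image_seqs]
    # S3: per-dimension normalization (z-score for vectors, min-max for matrices)
    first = [(a - a.mean()) / (a.std() + 1e-8) for a in first]
    second = [(d - d.min()) / (d.max() - d.min() + 1e-8) for d in second]
    # S4/S5: independent branches whose last layers are fused at the output
    feat_a = np.concatenate(first)                        # 1-D branch features
    feat_d = np.concatenate([d.ravel() for d in second])  # 2-D branch features
    fused = np.concatenate([feat_a, feat_d])
    W = rng.normal(size=(3, fused.size))                  # 3 environment states (assumed)
    scores = 1.0 / (1.0 + np.exp(-(W @ fused)))           # components in [0, 1]
    state = int(np.argmax(scores))
    safe = bool(scores.max() > threshold)
    return scores, state, safe
```

The evaluation result is a vector of per-state scores; thresholding a component selects the driving-environment state, as in the final determination step.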
Optionally, normalizing each vector in the first subset and each two-dimensional matrix in the second subset according to the data dimension so as to map them into a fixed range, and obtaining the normalized first subset and the normalized second subset, includes:
normalizing each vector in the first subset with a first normalization formula, and normalizing each two-dimensional matrix in the second subset with a second normalization formula, to obtain the normalized first subset and the normalized second subset;
wherein the first normalization formula is:

    â_t = (a_t − mean(a_t)) / std(a_t)

wherein a_t represents a vector in the first subset, â_t represents the corresponding vector in the normalized first subset, mean(a_t) denotes the mean of the components of the vector a_t, std(a_t) denotes their standard deviation, and |a_t| represents the dimension of the vector (the number of components over which the mean and standard deviation are taken);
the second normalization formula is:

    d̂_t(i, j) = (d_t(i, j) − min(d_t)) / (max(d_t) − min(d_t))

wherein d_t represents a two-dimensional matrix in the second subset, d̂_t represents the corresponding two-dimensional matrix in the normalized second subset, and d̂_t(i, j) represents each element value in the matrix d̂_t.
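The two normalizations can be sketched in NumPy; the vector formula follows the z-score form given above, while the matrix formula is an assumed element-wise min-max mapping into [0, 1]:

```python
import numpy as np

def normalize_vector(a):
    """Z-score normalization for a vector a_t of the first subset."""
    mean = a.sum() / a.size                       # mean over the |a_t| components
    std = np.sqrt(((a - mean) ** 2).sum() / a.size)
    return (a - mean) / std

def normalize_matrix(d):
    """Element-wise min-max normalization for a matrix d_t of the second
    subset (assumed form), mapping every element into [0, 1]."""
    return (d - d.min()) / (d.max() - d.min())

a_hat = normalize_vector(np.array([1.0, 2.0, 3.0, 4.0]))
d_hat = normalize_matrix(np.array([[0.0, 128.0], [64.0, 255.0]]))
```

After normalization the vector has zero mean, and every matrix element lies in the fixed range [0, 1], which is the "fixed range" property the method relies on.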
The one-dimensional data sub-network comprises an input layer and 8 hidden layers, the two-dimensional data sub-network comprises an input layer and 5 hidden layers, the eighth hidden layer of the one-dimensional data sub-network and the fifth hidden layer of the two-dimensional data sub-network are fully connected with the output layer of the dual-mode neural network model, the output layer outputs a vector, and each dimension of the vector represents a state of the driving environment.
Optionally, the trained one-dimensional data subnetwork and the trained two-dimensional data subnetwork are obtained by training through the following steps:
acquiring a training data set;
the training data set comprises a plurality of samples; one sample comprises a one-dimensional coordinate sequence and a corresponding image sequence, and each sample is labeled with the state of its driving environment;
and taking the one-dimensional coordinate sequence as the input of the one-dimensional data subnetwork in the dual-mode neural network model, taking the image sequence corresponding to the one-dimensional coordinate sequence as the input of the two-dimensional data subnetwork, taking the state of the driving environment labeled in the sample as the learning target of the dual-mode neural network model, and iteratively training the dual-mode neural network model until the number of iterations is reached or the loss function is minimized, to obtain the trained one-dimensional data subnetwork and the trained two-dimensional data subnetwork.
Optionally, the excitation function is:

    σ(x) = x              for x > 0
    σ(x) = α(e^x − 1)     for x ≤ 0, α ∈ R

where x represents the input, α represents a parameter chosen so that the output approximately equals the input when x > 0 and negative inputs produce only a small offset, and R represents the set of real numbers.
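The excitation function described here matches the form of an ELU; a minimal sketch under that assumption:

```python
import numpy as np

def excitation(x, alpha=1.0):
    """ELU-style excitation (assumed form): identity for x > 0, a smooth
    alpha-scaled exponential branch for x <= 0, giving nonlinear
    classification capability while keeping positive inputs unchanged."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```

For positive inputs the output equals the input; for negative inputs the output saturates smoothly at −α, so the offset stays small.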
Wherein the evaluation result comprises a plurality of components, each component corresponds to a state of the driving environment, and the value of each component lies in the interval [0, 1].
Optionally, the determining whether the driving environment of the self-driving vehicle is safe based on the evaluation result of the driving environment includes:
when the value of the component is larger than the threshold value, determining the state of the driving environment corresponding to the component as the state of the driving environment of the self-driving vehicle;
and determining whether the driving environment is safe or not according to the state of the driving environment of the self-driving vehicle.
Optionally, after determining whether the driving environment of the self-driving vehicle is safe based on the evaluation result of the driving environment, the driving environment evaluation method further includes:
when the driving environment of the self-driving vehicle is unsafe, alarm information is sent to the central control unit of the self-driving vehicle, so that the central control unit controls the self-driving vehicle to perform the relevant operations that place the self-driving vehicle in a safe environment state in the shortest time.
In a second aspect, the present invention provides a driving environment assessment apparatus based on a dual-mode neural network model, including:
the acquisition module is used for acquiring a plurality of sets of coordinate sequences and a plurality of sets of image sequences of surrounding driving vehicles;
the image sequence is two-dimensional data, and the coordinate sequence is one-dimensional data;
the composition module is used for composing the plurality of groups of coordinate sequences into a first subset and composing the plurality of groups of image sequences into a second subset;
wherein each element in the first subset is a one-dimensional vector and each element in the second subset is a two-dimensional matrix;
the processing module is used for normalizing each vector in the first subset and each two-dimensional matrix in the second subset according to the data dimension so as to map them into a fixed range, obtaining a normalized first subset and a normalized second subset;
the building module is used for building a dual-mode neural network model;
the dual-mode neural network model comprises a one-dimensional data sub-network and a two-dimensional data sub-network, the last hidden layer of the one-dimensional data sub-network and the last hidden layer of the two-dimensional data sub-network are fully connected with an output layer, an input layer of the one-dimensional data sub-network is independent of an input layer of the two-dimensional data sub-network, a hidden layer of the one-dimensional data sub-network is independent of a hidden layer of the two-dimensional data sub-network, and an excitation function of the dual-mode neural network model is a nonlinear function;
the identification module is used for inputting the first subset into an input layer of a trained one-dimensional data subnetwork, and inputting each element in the second subset into another input layer of the trained two-dimensional data subnetwork, so that the two-dimensional data subnetwork and the one-dimensional data subnetwork are matched with each other for identification, and an evaluation result of the driving environment is output;
and the determining module is used for determining whether the driving environment of the self-driving vehicle is safe or not based on the evaluation result of the driving environment.
Optionally, the apparatus further comprises a training module, configured to:
acquiring a training data set;
the training data set comprises a plurality of samples; one sample comprises a one-dimensional coordinate sequence and a corresponding image sequence, and each sample is labeled with the state of its driving environment;
and taking the one-dimensional coordinate sequence as the input of a one-dimensional data subnetwork in the dual-mode neural network model, taking the image sequence corresponding to the one-dimensional coordinate sequence as the input of a two-dimensional data subnetwork, taking the state of the driving environment marked by the sample as the learning target of the dual-mode neural network model, and iteratively training the dual-mode neural network model until the iteration times or the loss function is minimum, so as to obtain the trained one-dimensional data subnetwork and the trained two-dimensional data subnetwork.
The innovation points of the embodiment of the invention comprise:
1. According to the driving environment evaluation method based on the dual-mode neural network model provided by the invention, multiple sets of coordinate sequences and multiple sets of image sequences of different surrounding driving vehicles are acquired, and data of different dimensions are normalized separately, which prevents the data from being processed wrongly because of outliers and improves the accuracy of subsequent recognition.
2. According to the driving environment evaluation method based on the dual-mode neural network model, the dual-mode neural network model is established to carry out targeted combination identification on data with different dimensions, and the identification accuracy can be improved.
3. According to the driving environment evaluation method based on the dual-mode neural network model, the dual-mode neural network model has the capability of classifying the non-linear data set by designing the non-linear function as the excitation function of the dual-mode neural network model, so that the recognition accuracy is improved, and the recognition efficiency is greatly improved.
4. The driving environment evaluation method based on the dual-mode neural network model provided by the invention predicts the state of the driving environment with the dual-mode neural network, thereby realizing an intelligent driving-assistance function.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions are briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. A person skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a driving environment evaluation method based on a dual-mode neural network model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a fully-connected neural network;
FIG. 3a is a map of convolutional layer convolution;
FIG. 3b is a schematic diagram of a convolutional layer connection;
FIG. 4 is a schematic diagram of a three-dimensional convolution spread out in time dimension;
FIG. 5 is a schematic diagram of a dual-mode neural network model;
fig. 6 is a schematic structural diagram of a driving environment evaluation apparatus based on a dual-mode neural network model according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic flow chart of a driving environment evaluation method based on a dual-mode neural network model according to an embodiment of the present invention. The method is applied to an autonomous vehicle. The method specifically comprises the following steps.
S1, acquiring a plurality of sets of coordinate sequences and a plurality of sets of image sequences of surrounding driving vehicles;
the image sequence is two-dimensional data, and the coordinate sequence is one-dimensional data;
It can be understood that, in this step, raw data can be acquired from devices installed on the vehicle, such as an image acquisition device, a laser ranging device, a positioning device, and other signal acquisition devices. The invention defines raw data as discrete time-series data acquired directly from the device/equipment, where the sampling points in the time dimension are the same for all data. The invention divides the raw data into two types according to dimensionality: one is an image sequence formed of two-dimensional image data, with a plurality of images per unit time; the other is a sequence of one-dimensional data, such as longitude and latitude coordinate data.
S2, forming a plurality of groups of coordinate sequences into a first subset, and forming a plurality of groups of image sequences into a second subset;
wherein each element in the first subset is a one-dimensional vector and each element in the second subset is a two-dimensional matrix;
in this implementation step, it is assumed that the original data sequences collected by different sensors of the driving system form a set S, wherein a subset S of the one-dimensional data sequences forming S is output a Outputting a subset S of a sequence of images forming S d 。|S a I and S d And | respectively represents the sizes of the two subsets, namely the number of the types of the data sources.
At S a In the method, an element a is arbitrarily taken and is a sequence formed by one-dimensional data, and if the length of the element a is t, a can be denoted as { a } 1 ,a 2 ,...,a t Each of which is an element a t Is a one-dimensional vector.
Also at S d Wherein an arbitrary element d is a sequence of two-dimensional image data, and d can be designated as { d, assuming that it is t in length 1 ,d 2 ,...,d t In which each element d t Is a two-dimensional matrix.
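The structure of the two subsets can be illustrated with hypothetical shapes (a single longitude/latitude source and a single 8x8 grayscale image source; all sizes illustrative):

```python
import numpy as np

t = 4  # sequence length (number of time samples)

# S_a: one-dimensional data sequences; each a_t is a one-dimensional vector
a = [np.array([114.3 + 0.001 * i, 36.1]) for i in range(t)]  # e.g. lon/lat per step

# S_d: image sequences; each d_t is a two-dimensional matrix
d = [np.zeros((8, 8)) for _ in range(t)]                     # 8x8 grayscale frames

S_a = [a]  # |S_a| = 1 data source of this kind
S_d = [d]  # |S_d| = 1 image source
```

Here |S_a| and |S_d| are the list lengths, and each element is itself a length-t sequence, matching the notation {a_1, ..., a_t} and {d_1, ..., d_t}.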
S3, normalizing each vector in the first subset and each two-dimensional matrix in the second subset according to the data dimension so as to map the vectors in a fixed range, and obtaining a normalized first subset and a normalized second subset;
the invention can respectively design different standard methods aiming at different data, and provides a cushion for the subsequent design of the neural network model.
S4, constructing a dual-mode neural network model;
the dual-mode neural network model comprises a one-dimensional data sub-network and a two-dimensional data sub-network, the last hidden layer of the one-dimensional data sub-network and the last hidden layer of the two-dimensional data sub-network are fully connected with an output layer, an input layer of the one-dimensional data sub-network is independent of an input layer of the two-dimensional data sub-network, a hidden layer of the one-dimensional data sub-network is independent of a hidden layer of the two-dimensional data sub-network, and an excitation function of the dual-mode neural network model is a nonlinear function; the one-dimensional data sub-network comprises an input layer and 8 hidden layers, the two-dimensional data sub-network comprises an input layer and 5 hidden layers, the 8 th hidden layer of the one-dimensional data sub-network and the 5 th hidden layer of the two-dimensional data sub-network are fully connected with an output layer of the dual-mode neural network model, the output layer outputs a vector, and each dimension of the vector represents the state of a driving environment.
As shown in fig. 2, the three leftmost nodes x_1, x_2, 1 are input layer nodes of the dual-mode neural network model, the right-side node y is the output layer node, h_1, h_2, h_3 are hidden layer nodes, and σ represents the excitation function, whose role is to give the neural network nonlinear classification capability. The relationship between the output and the input of the neural network is defined by the following equations:

    a_1 = w_{1-11} x_1 + w_{1-21} x_2 + b_{1-1}
    a_2 = w_{1-12} x_1 + w_{1-22} x_2 + b_{1-2}
    a_3 = w_{1-13} x_1 + w_{1-23} x_2 + b_{1-3}
    y = σ(w_{2-1} σ(a_1) + w_{2-2} σ(a_2) + w_{2-3} σ(a_3))

wherein a_1 denotes the input-output relation of node h_1, a_2 that of node h_2, and a_3 that of node h_3; the subscripts of w denote the channel weights between nodes; the subscripted b denotes the bias parameters contributed by the constant node 1 before the excitation function; and σ(·) denotes the excitation function applied to the quantity in parentheses.
Fig. 2 shows a fully-connected neural network (the most complete form): each node of a hidden layer is connected to every node of the previous layer (ignoring the excitation function). In practical applications there may be several hidden layers, and the number of nodes per layer and the connections to the previous layer can be defined freely where implementation permits, i.e., connections may be merged or deleted starting from the fully connected case.
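The forward pass defined by the equations above can be sketched directly (tanh stands in for the excitation function σ; the weight values are illustrative, not trained):

```python
import numpy as np

def sigma(x):
    # Any nonlinear excitation works for this sketch; tanh is used here.
    return np.tanh(x)

def forward(x1, x2, W1, b1, W2):
    """Forward pass of the Fig. 2 network: 2 inputs, 3 hidden nodes, 1 output.
    W1 is 3x2 (the w_{1-jk} weights), b1 has length 3 (the b_{1-j} biases),
    W2 has length 3 (the w_{2-j} weights)."""
    a = W1 @ np.array([x1, x2]) + b1   # a_1, a_2, a_3
    return sigma(W2 @ sigma(a))        # y
```

With all weights and biases zero the output is σ(0) = 0, which is a quick sanity check on the wiring.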
For one-dimensional input: assume there are several classes of one-dimensional input data, the number of classes is |S_a|, and the number of time samples is T. The one-dimensional vectors belonging to the same unit time are concatenated into a new vector of length N1, forming an N1 × T matrix M_a. The one-dimensional data subnetwork NW_a connected to it is established as follows.
(11) The first hidden layer Ha1 of the one-dimensional data subnetwork NW_a is defined as follows:

    Ha1 = σ(M_a ∗ k0 + b0)

Ha1 is the result of convolving the input-layer data M_a with the convolution kernel k0 of size p × q. Figs. 3a and 3b show the connections for p = 3, q = 3: each node is connected to the 3x3 nodes at the corresponding position of the layer above it (i.e., the input matrix M_a); the weights of the 3x3 connections are defined in row-column order as the entries of k0, and for every node v of Ha1 connected to 3x3 input points, the weight at each corresponding position is the same (weight sharing). The subscripted b denotes the bias parameter of the layer, with subscripts starting from 0 at the first hidden layer, and pq denotes the size of the convolution kernel.

In particular, the invention defines the window size of this layer as 5x5, i.e., 1 ≤ p, q ≤ 5.
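A weight-shared valid convolution of this kind can be sketched as follows (kernel values illustrative):

```python
import numpy as np

def conv2d_valid(M, k, b=0.0):
    """Valid 2-D convolution with a single shared p x q kernel k, as used for
    hidden layer Ha1: every output node applies the same weights to the
    p x q input patch at its position, plus a shared bias b."""
    p, q = k.shape
    H, W = M.shape
    out = np.empty((H - p + 1, W - q + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(M[i:i + p, j:j + q] * k) + b
    return out

M = np.arange(16, dtype=float).reshape(4, 4)   # toy input matrix M_a
out = conv2d_valid(M, np.ones((3, 3)))         # 3x3 all-ones kernel
```

A 4x4 input with a 3x3 window yields a 2x2 output; the excitation function would then be applied element-wise to this result.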
(12) The second hidden layer Ha2 of network NW_a is defined as follows:

    Ha2 = maxpool_{p×q}(Ha1) / (p·q)

Ha2 is obtained from the output data of hidden layer Ha1 by taking the maximum value within each convolution window p × q and dividing by the window size. Fig. 4 shows a connection case where p = 4, q = 4: the 4x4 corresponding nodes are connected, and the weight is fixed at 1/16.

In particular, the invention defines the convolution window of this layer as 4x4, i.e., 1 ≤ p, q ≤ 4.
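The Ha2 pooling rule, maximum within each window divided by the window size, can be sketched with non-overlapping windows (the stride is an assumption, since the text does not state it):

```python
import numpy as np

def max_over_size_pool(M, p=4, q=4):
    """Ha2-style pooling: the maximum inside each non-overlapping p x q
    window, divided by the window size p*q (a fixed weight of 1/(p*q))."""
    H, W = M.shape
    out = np.empty((H // p, W // q))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = M[i*p:(i+1)*p, j*q:(j+1)*q].max() / (p * q)
    return out

M = np.arange(16, dtype=float).reshape(4, 4)
out = max_over_size_pool(M, 4, 4)   # single 4x4 window, weight 1/16
```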
(13) The third hidden layer Ha3 of network NW_a is defined as follows:

    Ha3 = σ(Ha2 ∗ k1 + b1)

The Ha3 layer is the result of convolving the output data of the Ha2 layer with the convolution kernel k1.

In particular, the invention defines the window of this layer as p = q = 5.
(14) The fourth hidden layer Ha4 of network NW_a is defined as follows:

    Ha4 = σ(Ha3 ∗ k2 + b2)

The Ha4 layer is the result of convolving the output data of the Ha3 layer with the convolution kernel k2, and the convolution kernel k2 is a symmetric matrix.

In particular, the invention defines the window size of this layer as 5x5, i.e., 1 ≤ p, q ≤ 5.
(15) The fifth hidden layer Ha5 of network NW_a is defined as follows:

    Ha5 = σ(Ha4 ∗ k3 + b3),  with k3[p, q] = 0 when p > q

The Ha5 layer is the result of convolving the output data of the Ha4 layer with the convolution kernel k3, and the convolution kernel k3 is an upper (lower) triangular matrix.

In particular, the invention defines the window size of this layer as 5x5, i.e., 1 ≤ p, q ≤ 5.
(16) The sixth hidden layer Ha6 of network NW_a is defined as follows:

    Ha6 = σ(Ha5 ∗ k4 + b4),  with k4[p, q] = 0 when p < q

The Ha6 layer is the result of convolving the output data of the Ha5 layer with the convolution kernel k4, and the convolution kernel k4 is the lower (upper) triangular matrix opposite to the convolution kernel k3.

In particular, the window size of this layer is defined as 5x5, i.e., 1 ≤ p, q ≤ 5.
(17) The seventh hidden layer Ha7 of network NW_a is defined as follows:

    Ha7 = maxpool_{p×q}(Ha6) / (p·q)

Ha7 is obtained from the output data of hidden layer Ha6 by taking the maximum value within each window p × q and dividing by the window size.

In particular, the invention defines the window of this layer as 4x4, i.e., 1 ≤ p, q ≤ 4.
(18) Network NW a The eighth hidden layer Ha8 is a fully connected layer, and there is a connection between each node of Ha8 and each node of Ha7, and the connection weights are independent.
(19) Network NW a After the eighth hidden layer Ha8, the output layer Y is connected in a full-link form.
For two-dimensional image input: assume there are several classes of input images, the number of classes is |S_d|, the number of time samples is T, and the input images have the same size, with width W and height H; then the input data M_d has dimension W × H × T. The two-dimensional data subnetwork NW_d connected to it is established as follows.
(21) The first hidden layer Hd1 of network NW d is defined as follows.
(formula omitted)
Hd1 is the result of convolving the input-layer data M d with the convolution kernel (formula omitted). The convolution kernel is three-dimensional, and p×q×r is the size of the three-dimensional convolution kernel. A schematic diagram of the three-dimensional convolution unrolled along the time dimension is shown in fig. 4: in the two dimensions other than time, the connections are the same as in the two-dimensional convolution diagrams (figs. 3a and 3b); in the time dimension, nodes with equal offsets within the time window share the same weight.
In particular, the window size of this layer is defined as 7×7×7, i.e., 1 ≤ p, q, r ≤ 7.
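The Hd1 step, a three-dimensional convolution over a p×q×r window, can be sketched as a naive "valid" correlation. The implementation below is an illustrative assumption, not the patent's exact definition (which is given only as a formula image).

```python
import numpy as np

def conv3d_valid(x, k):
    """Naive 'valid' 3-D convolution (correlation) of input x of shape
    (W, H, T) with kernel k of shape (p, q, r); a sketch of the Hd1
    operation."""
    W, H, T = x.shape
    p, q, r = k.shape
    out = np.zeros((W - p + 1, H - q + 1, T - r + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for t in range(out.shape[2]):
                # nodes with equal offsets inside the window share k's weights
                out[i, j, t] = np.sum(x[i:i + p, j:j + q, t:t + r] * k)
    return out

x = np.ones((10, 10, 9))
k = np.ones((7, 7, 7))
y = conv3d_valid(x, k)   # shape (4, 4, 3); each entry is 7*7*7 = 343
```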
(22) The second hidden layer Hd2 of network NW d is defined as follows.
(formula omitted)
Hd2 is the result of taking the maximum value within each window p×q×r of the output data of the hidden layer Hd1.
In particular, the window size of this layer is defined as 4×4×4, i.e., 1 ≤ p, q, r ≤ 4.
(23) The third hidden layer Hd3 of network NW d is defined as follows.
(formula omitted)
Hd3 is the result of convolving the output data of the hidden layer Hd2 with the convolution kernel (formula omitted).
In particular, the window size of this layer is defined as 5×5×5, i.e., 1 ≤ p, q, r ≤ 5.
(24) The fourth hidden layer Hd4 of network NW d is defined as follows.
(formula omitted)
Hd4 is the result of taking the maximum value within each window p×q×r of the output data of the hidden layer Hd3.
In particular, the window size of this layer is defined as 4×4×4, i.e., 1 ≤ p, q, r ≤ 4.
(25) The fifth hidden layer Hd5 of network NW d is a fully connected layer: there is a connection between each node of Hd5 and each node of Hd4, and the connection weights are independent.
(26) After the fifth hidden layer Hd5 of network NW d, the output layer Y is connected in fully connected form. This output layer Y is the same layer as the output layer Y in (19).
Networks NW a and NW d combined form the dual-mode neural network model structure of the invention, which can process one-dimensional time-series data and two-dimensional image time-series data; the structural diagram is shown in fig. 5.
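The fusion described above — two independent branches whose last hidden layers both feed one shared, fully connected output layer Y — can be sketched as follows. All sizes, the sigmoid output, and the random features standing in for the Ha8 and Hd5 outputs are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 16 features from the 1-D branch, 32 from the 2-D
# branch, 4 "safe state" components in the output vector Y.
feat_a = rng.standard_normal(16)   # stand-in for the Ha8 output
feat_d = rng.standard_normal(32)   # stand-in for the Hd5 output

# Both last hidden layers are fully connected to the shared output layer Y.
Wa = rng.standard_normal((4, 16))
Wd = rng.standard_normal((4, 32))
b = np.zeros(4)

Y = sigmoid(Wa @ feat_a + Wd @ feat_d + b)
# Y has 4 components, one per driving-environment state, each in (0, 1)
```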
Each dimension of the output layer Y vector represents a "safe state"; a value of 1 indicates the state "normal", and a value of 0 indicates the state "abnormal".
The excitation function is:
(formula omitted)
where x represents the input, α represents a parameter that controls the small bias the function produces, such that the input and output are approximately equal when x > 0, and R represents the set of real numbers.
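The exact excitation formula appears only as an image in this export, so the sketch below is an assumption: a softplus-style smooth nonlinearity scaled by α, which matches the stated behavior (approximately identity for x > 0 with a small α-controlled offset) but is not necessarily the patented function.

```python
import numpy as np

def excitation(x, alpha=0.1):
    """Assumed, illustrative reading of the patent's excitation function:
    alpha-scaled softplus. For x > 0 the output is approximately equal to
    the input, with a small offset controlled by alpha. NOT the verbatim
    patented formula, which is available only as an image."""
    # logaddexp avoids overflow for large x / alpha
    return alpha * np.logaddexp(0.0, x / alpha)

x = np.array([-1.0, 0.0, 5.0])
y = excitation(x)
# y is strictly increasing; for x = 5.0 the output is ~5.0 (tiny offset)
```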
The effect of the present invention is verified by actual data, see table 1.
TABLE 1 Experimental comparison results
(Table 1 is reproduced only as an image in the original publication.)
In the dual-mode neural network model constructed in the invention, tailored to the special input data structure, the features of data of different dimensionalities are extracted separately at the head of the model and fused at the tail, and a specially designed nonlinear processing unit is adopted; compared with classical algorithms, this greatly improves computational efficiency while maintaining decision correctness.
S5, inputting the first subset into an input layer of the trained one-dimensional data subnetwork, and inputting each element in the second subset into another input layer of the trained two-dimensional data subnetwork, so that the two-dimensional data subnetwork and the one-dimensional data subnetwork are matched with each other for recognition, and an evaluation result of the driving environment is output;
and S6, determining whether the driving environment of the self-driving vehicle is safe or not based on the evaluation result of the driving environment.
According to the value of the output vector Y, a determination is made for each of a number of driving-environment states of the self-driving vehicle, i.e., whether the vehicle is in each "safe state". The evaluation result comprises a plurality of components, each component corresponding to the state of one driving environment; the value of each component lies in [0, 1], and the components are independent of each other, so each "safe state" is judged from the value of its component.
"Safe state" refers to a set of flag bits predefined according to the actual application, each flag representing a clearly separable binary real-world state, such as "an obstacle exists (does not exist) within 5 m of the surroundings" or "the vehicle speed exceeds (does not exceed) 100 km/h".
According to the driving environment evaluation method based on the dual-mode neural network model, multiple groups of coordinate sequences and multiple groups of image sequences of different surrounding driving vehicles are acquired, and data of different dimensionalities are normalized separately, which prevents data from being processed incorrectly because values of different scales are lumped together and improves the accuracy of subsequent recognition; the dual-mode neural network model performs targeted joint recognition of data of different dimensionalities, which further improves recognition accuracy; a nonlinear function designed as the excitation function gives the model the ability to classify nonlinear data sets, improving recognition accuracy while greatly improving recognition efficiency; and the state of the driving environment is predicted with the dual-mode neural network, realizing an intelligent driving-assistance function.
As an optional implementation manner of the present invention, normalizing each vector in the first subset and each two-dimensional matrix in the second subset according to a data dimension, so that the vector is mapped in a fixed range, and obtaining the normalized first subset and the normalized second subset includes:
normalizing each vector in the first subset by using a first normalization formula, and normalizing each two-dimensional matrix in the second subset by using a second normalization formula to obtain a normalized first subset and a normalized second subset;
wherein the first normalization formula is:
(formula omitted)
wherein a t represents a vector in the first subset, (formula omitted) represents the corresponding vector in the normalized first subset, mean represents the mean of the vector a t, std is the standard deviation, and | a t | represents the dimension of the vector;
the second normalization formula is:
(formula omitted)
wherein (formula omitted) represents a two-dimensional matrix in the normalized second subset, d t represents the corresponding two-dimensional matrix in the second subset, and (formula omitted) represents each element value of the two-dimensional matrix.
The normalized first subset and the normalized second subset obtained through this embodiment are used as the input of the subsequent dual-mode neural network model.
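The normalization formulas above are published only as images, but the surrounding text names a mean and a standard deviation, so the sketch below assumes z-score standardization for both the coordinate vectors and the image matrices. This is a hedged reconstruction, not the verbatim formulas.

```python
import numpy as np

def normalize_vector(a):
    """Assumed first normalization formula: z-score standardization of a
    coordinate vector (mean and std are named in the text; the published
    formula itself is only an image)."""
    return (a - a.mean()) / a.std()

def normalize_matrix(d):
    """Assumed second normalization formula: per-element standardization
    of a two-dimensional image matrix; likewise a reconstruction."""
    return (d - d.mean()) / d.std()

a_t = np.array([3.0, 5.0, 7.0])
d_t = np.arange(12.0).reshape(3, 4)
a_norm = normalize_vector(a_t)   # zero mean, unit std
d_norm = normalize_matrix(d_t)   # zero mean, unit std
```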
As an alternative embodiment of the present invention, the trained one-dimensional data subnetwork and the trained two-dimensional data subnetwork are trained by the following steps:
the method comprises the following steps: acquiring a training data set;
the training data set comprises a plurality of samples, each sample comprising a one-dimensional coordinate sequence, a corresponding image sequence, and an annotated state of the driving environment;
step two: taking the one-dimensional coordinate sequence as the input of the one-dimensional data sub-network in the dual-mode neural network model, taking the image sequence corresponding to the one-dimensional coordinate sequence as the input of the two-dimensional data sub-network, taking the annotated state of the driving environment as the learning target of the dual-mode neural network model, and iteratively training the dual-mode neural network model until the number of iterations is reached or the loss function is minimized, thereby obtaining the trained one-dimensional data sub-network and the trained two-dimensional data sub-network.
It can be understood that training is the process of solving for the weights w of each node in the network from the training data, i.e., the one-dimensional data sequences and image sequences together with the corresponding output-layer values Y; the goal is to find a set of weights w such that, for an input x, the error between the network's estimate (formula omitted) and y is minimal.
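The weight-search described here can be sketched with a deliberately tiny, single-layer stand-in for the dual-branch model, trained by gradient descent to reduce the error between the estimate ŷ and the labels y. The feature sizes, the logistic model, and the random data are all illustrative assumptions, not the patent's network.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins: 1-D branch features (coordinate sequences) and flattened
# 2-D branch features (image sequences); all sizes are invented.
Xa = rng.standard_normal((64, 8))
Xd = rng.standard_normal((64, 20))
y = (rng.random((64, 3)) > 0.5).astype(float)   # annotated "safe states"

Wa = np.zeros((8, 3)); Wd = np.zeros((20, 3)); b = np.zeros(3)
lr = 0.5
for _ in range(200):                     # stop at an iteration budget
    y_hat = sigmoid(Xa @ Wa + Xd @ Wd + b)
    grad = (y_hat - y) / len(y)          # gradient of cross-entropy loss
    Wa -= lr * Xa.T @ grad
    Wd -= lr * Xd.T @ grad
    b -= lr * grad.sum(axis=0)
```

The two weight matrices mirror the model's structure: each branch keeps its own parameters, and both contribute to the shared output before the nonlinearity.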
As an alternative embodiment of the present invention, the determining whether the driving environment of the self-driving vehicle is safe based on the evaluation result of the driving environment includes:
the method comprises the following steps: when the value of the component is larger than the threshold value, determining the state of the driving environment corresponding to the component as the state of the driving environment of the self-driving vehicle;
step two: and determining whether the driving environment is safe or not according to the state of the driving environment of the self-driving vehicle.
Illustratively, when the threshold value is 0.5, if the value of a component is greater than 0.5, the state represented by the component is considered to be "normal", otherwise it is "abnormal".
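The component-wise thresholding described here (threshold 0.5) can be sketched as follows; the state names are invented placeholders echoing the examples given earlier, not labels defined by the patent.

```python
import numpy as np

THRESHOLD = 0.5  # the example threshold from the text above

def interpret(Y, states):
    """Map each output component to 'normal' / 'abnormal' by comparing
    it with the threshold; state names are hypothetical placeholders."""
    return {s: ("normal" if v > THRESHOLD else "abnormal")
            for s, v in zip(states, Y)}

Y = np.array([0.91, 0.12, 0.55])
states = ["no obstacle within 5 m", "speed below 100 km/h", "lane keeping"]
result = interpret(Y, states)
# 0.91 and 0.55 exceed 0.5 -> "normal"; 0.12 does not -> "abnormal"
```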
As an alternative embodiment of the present invention, after determining whether the driving environment of the self-driving vehicle is safe based on the evaluation result of the driving environment, the driving environment evaluation method further includes:
when the driving environment of the self-driving vehicle is unsafe, alarm information is sent to the central control unit of the self-driving vehicle, so that the central control unit controls the vehicle to perform the relevant operations and returns it to a safe environment state in the shortest possible time.
As shown in fig. 6, the driving environment assessment apparatus based on the dual-mode neural network model provided by the present invention includes:
an obtaining module 61, configured to obtain multiple sets of coordinate sequences and multiple sets of image sequences of surrounding driving vehicles;
the image sequence is two-dimensional data, and the coordinate sequence is one-dimensional data;
a composing module 62, configured to compose the sets of coordinate sequences into a first subset, and compose the sets of image sequences into a second subset;
wherein each element in the first subset is a one-dimensional vector and each element in the second subset is a two-dimensional matrix;
a processing module 63, configured to normalize, according to a data dimension, each vector in the first subset and each two-dimensional matrix in the second subset, so that the vectors are mapped in a fixed range, and obtain a normalized first subset and a normalized second subset;
a construction module 64 for constructing a dual-mode neural network model;
the dual-mode neural network model comprises a one-dimensional data sub-network and a two-dimensional data sub-network, the last hidden layer of the one-dimensional data sub-network and the last hidden layer of the two-dimensional data sub-network are fully connected with an output layer, an input layer of the one-dimensional data sub-network is independent of an input layer of the two-dimensional data sub-network, a hidden layer of the one-dimensional data sub-network is independent of a hidden layer of the two-dimensional data sub-network, and an excitation function of the dual-mode neural network model is a nonlinear function;
a recognition module 65, configured to input the first subset into an input layer of the trained one-dimensional data subnetwork, and input each element in the second subset into another input layer of the trained two-dimensional data subnetwork, so that the two-dimensional data subnetwork and the one-dimensional data subnetwork cooperate with each other to perform recognition, and output an evaluation result of the driving environment;
a determination module 66 is configured to determine whether the driving environment of the self-driving vehicle is safe based on the evaluation of the driving environment.
Optionally, the apparatus further comprises a training module configured to:
acquiring a training data set;
the training data set comprises a plurality of samples, each sample comprising a one-dimensional coordinate sequence, a corresponding image sequence, and an annotated state of the driving environment;
and taking the one-dimensional coordinate sequence as the input of a one-dimensional data sub-network in the dual-mode neural network model, taking the image sequence corresponding to the one-dimensional coordinate sequence as the input of a two-dimensional data sub-network, taking the state of the driving environment marked by the sample as a learning target of the dual-mode neural network model, and iteratively training the dual-mode neural network model until the iteration times or the loss function is minimum, thereby obtaining the trained one-dimensional data sub-network and the trained two-dimensional data sub-network.
This device embodiment corresponds to the method embodiment and has the same technical effects; for a detailed description, refer to the method embodiment section, which is not repeated here.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A driving environment assessment method based on a dual-mode neural network model is characterized by comprising the following steps:
acquiring a plurality of sets of coordinate sequences and a plurality of sets of image sequences of surrounding driving vehicles;
the image sequence is two-dimensional data, and the coordinate sequence is one-dimensional data;
forming a first subset by the multiple groups of coordinate sequences and forming a second subset by the multiple groups of image sequences;
wherein each element in the first subset is a one-dimensional vector and each element in the second subset is a two-dimensional matrix;
normalizing each vector in the first subset and each two-dimensional matrix in the second subset according to a data dimension to map the vectors in a fixed range, so as to obtain a normalized first subset and a normalized second subset;
constructing a dual-mode neural network model;
the dual-mode neural network model comprises a one-dimensional data sub-network and a two-dimensional data sub-network, the last hidden layer of the one-dimensional data sub-network and the last hidden layer of the two-dimensional data sub-network are fully connected with an output layer, an input layer of the one-dimensional data sub-network is independent of an input layer of the two-dimensional data sub-network, a hidden layer of the one-dimensional data sub-network is independent of a hidden layer of the two-dimensional data sub-network, and an excitation function of the dual-mode neural network model is a nonlinear function;
inputting the first subset into an input layer of a trained one-dimensional data subnetwork, and inputting each element in the second subset into another input layer of the trained two-dimensional data subnetwork, so that the two-dimensional data subnetwork and the one-dimensional data subnetwork are matched with each other for recognition, and an evaluation result of the driving environment is output;
determining whether the driving environment of the self-driving vehicle is safe based on the evaluation result of the driving environment;
the trained one-dimensional data sub-network and the trained two-dimensional data sub-network are obtained by training through the following steps:
acquiring a training data set;
the training data set comprises a plurality of samples, each sample comprising a one-dimensional coordinate sequence, a corresponding image sequence, and an annotated state of the driving environment;
and taking the one-dimensional coordinate sequence as the input of a one-dimensional data sub-network in the dual-mode neural network model, taking the image sequence corresponding to the one-dimensional coordinate sequence as the input of a two-dimensional data sub-network, taking the state of the driving environment marked by the sample as a learning target of the dual-mode neural network model, and iteratively training the dual-mode neural network model until the iteration times or the loss function is minimum, so as to obtain the trained one-dimensional data sub-network and the trained two-dimensional data sub-network.
2. The driving environment assessment method according to claim 1, wherein the normalizing each vector in the first subset and each two-dimensional matrix in the second subset by data dimension to map the vector within a fixed range, obtaining the normalized first subset and the normalized second subset comprises:
normalizing each vector in the first subset by using a first normalization formula, and normalizing each two-dimensional matrix in the second subset by using a second normalization formula to obtain a normalized first subset and a normalized second subset;
wherein the first normalization formula is:
(formula omitted)
wherein a t represents a vector in the first subset, (formula omitted) represents the corresponding vector in the normalized first subset, mean represents the mean of the vector a t, std is the standard deviation, and | a t | represents the dimension of the vector;
the second normalization formula is:
(formula omitted)
wherein (formula omitted) represents a two-dimensional matrix in the normalized second subset, d t represents the corresponding two-dimensional matrix in the second subset, and (formula omitted) represents each element value of the two-dimensional matrix.
3. The driving environment assessment method according to claim 1, wherein the one-dimensional data sub-network comprises an input layer and 8 hidden layers, the two-dimensional data sub-network comprises an input layer and 5 hidden layers, the 8 th hidden layer of the one-dimensional data sub-network and the 5 th hidden layer of the two-dimensional data sub-network are fully connected to the output layer of the dual-mode neural network model, the output layer outputs a vector, and each dimension of the vector represents a state of a driving environment.
4. The driving environment evaluation method according to claim 1, wherein the excitation function is:
(formula omitted)
where x represents the input, α represents a parameter that controls the excitation function to produce a bias such that the input and output are approximately equal when x >0, and R represents a real number set.
5. The driving environment assessment method according to claim 1, wherein the assessment result comprises a plurality of components, each component corresponds to a state of a driving environment, and the value of each component lies in [0, 1].
6. The driving environment evaluation method according to claim 5, wherein the determining whether the driving environment of the self-driving vehicle is safe based on the evaluation result of the driving environment includes:
when the value of the component is larger than the threshold value, determining the state of the driving environment corresponding to the component as the state of the driving environment of the self-driving vehicle;
and determining whether the driving environment is safe or not according to the state of the driving environment of the self-driving vehicle.
7. The driving environment evaluation method according to claim 1, wherein after the determination of whether the driving environment of the self-driving vehicle is safe based on the evaluation result of the driving environment, the driving environment evaluation method further comprises:
when the driving environment of the self-driving vehicle is unsafe, alarm information is sent to the self-driving vehicle central control unit, so that the central control unit controls the self-driving vehicle to perform related operations, and the self-driving vehicle is in a safe environment state in the shortest time.
8. A driving environment evaluation apparatus based on a dual-mode neural network model, the apparatus comprising:
the acquisition module is used for acquiring a plurality of sets of coordinate sequences and a plurality of sets of image sequences of surrounding driving vehicles;
the image sequence is two-dimensional data, and the coordinate sequence is one-dimensional data;
the composition module is used for composing the plurality of groups of coordinate sequences into a first subset and composing the plurality of groups of image sequences into a second subset;
wherein each element in the first subset is a one-dimensional vector and each element in the second subset is a two-dimensional matrix;
the processing module is used for normalizing each vector in the first subset and each two-dimensional matrix in the second subset according to data dimensionality so as to map the vectors in a fixed range, and obtaining a normalized first subset and a normalized second subset;
the building module is used for building a dual-mode neural network model;
the dual-mode neural network model comprises a one-dimensional data sub-network and a two-dimensional data sub-network, the last hidden layer of the one-dimensional data sub-network and the last hidden layer of the two-dimensional data sub-network are fully connected with an output layer, an input layer of the one-dimensional data sub-network is independent of an input layer of the two-dimensional data sub-network, a hidden layer of the one-dimensional data sub-network is independent of a hidden layer of the two-dimensional data sub-network, and an excitation function of the dual-mode neural network model is a nonlinear function;
the recognition module is used for inputting the first subset into an input layer of a trained one-dimensional data subnetwork and inputting each element in the second subset into another input layer of the trained two-dimensional data subnetwork, so that the two-dimensional data subnetwork and the one-dimensional data subnetwork are matched with each other for recognition, and an evaluation result of the driving environment is output;
a determination module for determining whether a driving environment of the self-driving vehicle is safe based on an evaluation result of the driving environment;
the apparatus further comprises a training module to:
acquiring a training data set;
the training data set comprises a plurality of samples, each sample comprising a one-dimensional coordinate sequence, a corresponding image sequence, and an annotated state of the driving environment;
and taking the one-dimensional coordinate sequence as the input of a one-dimensional data subnetwork in the dual-mode neural network model, taking the image sequence corresponding to the one-dimensional coordinate sequence as the input of a two-dimensional data subnetwork, taking the state of the driving environment marked by the sample as the learning target of the dual-mode neural network model, and iteratively training the dual-mode neural network model until the iteration times or the loss function is minimum, so as to obtain the trained one-dimensional data subnetwork and the trained two-dimensional data subnetwork.
CN202110747135.4A 2021-07-02 2021-07-02 Driving environment evaluation method and device based on dual-mode neural network model Expired - Fee Related CN113537002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110747135.4A CN113537002B (en) 2021-07-02 2021-07-02 Driving environment evaluation method and device based on dual-mode neural network model


Publications (2)

Publication Number Publication Date
CN113537002A CN113537002A (en) 2021-10-22
CN113537002B true CN113537002B (en) 2023-01-24

Family

ID=78097527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110747135.4A Expired - Fee Related CN113537002B (en) 2021-07-02 2021-07-02 Driving environment evaluation method and device based on dual-mode neural network model

Country Status (1)

Country Link
CN (1) CN113537002B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985269A (en) * 2018-08-16 2018-12-11 东南大学 Converged network driving environment sensor model based on convolution sum cavity convolutional coding structure
WO2020168660A1 (en) * 2019-02-19 2020-08-27 平安科技(深圳)有限公司 Method and apparatus for adjusting traveling direction of vehicle, computer device and storage medium
CN111738037A (en) * 2019-03-25 2020-10-02 广州汽车集团股份有限公司 Automatic driving method and system and vehicle
CN111860269A (en) * 2020-07-13 2020-10-30 南京航空航天大学 Multi-feature fusion tandem RNN structure and pedestrian prediction method
CN112987713A (en) * 2019-12-17 2021-06-18 杭州海康威视数字技术股份有限公司 Control method and device for automatic driving equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046235B (en) * 2015-08-03 2018-09-07 百度在线网络技术(北京)有限公司 The identification modeling method and device of lane line, recognition methods and device
CN108944930B (en) * 2018-07-05 2020-04-21 合肥工业大学 Automatic car following method and system for simulating driver characteristics based on LSTM
CN109572706B (en) * 2018-12-12 2020-12-08 西北工业大学 Driving safety evaluation method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Environment Influences on Uncertainty of Object Detection for Automated Driving Systems";Lei Ren et al.;《2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)》;20191021;第1-5页 *
"基于深度学习的行为检测方法综述";高陈强 等;《重庆邮电大学学报( 自然科学版)》;20201231;第1-12页 *

Also Published As

Publication number Publication date
CN113537002A (en) 2021-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230124