CN115438765A - Moving target identification method, device and equipment based on evolutionary neural network - Google Patents

Moving target identification method, device and equipment based on evolutionary neural network

Info

Publication number
CN115438765A
CN115438765A
Authority
CN
China
Prior art keywords
neural network
training
evolutionary
signal
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211082225.7A
Other languages
Chinese (zh)
Inventor
王楠
邢堃盛
王伟
洪华杰
何科延
黄杰
王建华
庙要要
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202211082225.7A
Publication of CN115438765A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H 17/00 Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/12 Computing arrangements based on biological models using genetic models
    • G06N 3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Abstract

The application relates to a moving target identification method, a device and equipment based on an evolutionary neural network, wherein the method comprises the following steps: collecting sensing signals of vehicles and pedestrians during movement through an unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals; constructing a training set and a testing set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are different; training in a training set by adopting an evolutionary neural network to obtain an optimal feature extraction method and a neural network model; according to the optimal feature extraction method and the neural network model, identifying and classifying the environmental noise, the pedestrians and the moving vehicles in the test set to obtain an identified moving target; the moving objects include pedestrians and moving vehicles. Higher recognition accuracy can be achieved with the minimum feature vectors and the lowest network complexity.

Description

Moving target identification method, device and equipment based on evolutionary neural network
Technical Field
The invention belongs to the technical field of target identification, and relates to a moving target identification method, a device and equipment based on an evolutionary neural network.
Background
The unattended ground sensor network is a wireless network composed of a plurality of stationary unattended sensors randomly scattered over a working area in a self-organizing, multi-hop manner, and is used for detecting abnormal events in the working area. The nodes are homogeneous, low in cost and small in size, and most of them can work for long periods. Unattended sensor networks are widely used for border control, protection of key facilities and intrusion prevention in natural protected areas. The detection means of unattended sensors mainly comprise: visible light, passive infrared, radar, magnetic field, ground vibration and sound. Among these detection means, acoustic sensors and seismic sensors are a hot research direction in the field of unattended sensor systems owing to their long detection distance, low cost, light weight and low power consumption. The targets identified by unattended sensors based on sound and ground vibration signals generally comprise people, various vehicles and ultra-low-altitude aircraft. Considering that the vibration and sound signals generated by ground moving objects in urban areas are easily affected by environmental noise, and that the unattended ground sensor must have low power consumption, current unattended sensor systems still have insufficient capability to recognize ground moving targets.
Disclosure of Invention
In view of the problems in the conventional methods, the invention provides a moving target identification method based on an evolutionary neural network, a moving target identification device based on the evolutionary neural network, a computer device and a computer readable storage medium, which can realize higher identification accuracy with the minimum feature vector and the minimum network complexity, and significantly improve the identification capability of a ground moving target.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in one aspect, a moving target identification method based on an evolutionary neural network is provided, which includes:
collecting sensing signals of vehicles and pedestrians during movement through an unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals;
constructing a training set and a testing set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are different;
training in a training set by adopting an evolutionary neural network to obtain an optimal feature extraction method and a neural network model;
according to the optimal feature extraction method and the neural network model, identifying and classifying the environmental noise, the pedestrians and the moving vehicles in the test set to obtain an identified moving target; the moving objects include pedestrians and moving vehicles.
In one embodiment, the signal pre-processing includes normalizing the signal units and removing dc components based on a fast fourier transform.
In one embodiment, the training set includes sampled signals obtained by sub-sampling sound signals and ground vibration signals acquired at a plurality of different times, and the test set likewise includes sampled signals obtained by sub-sampling sound signals and ground vibration signals acquired at a plurality of different times.
In one embodiment, a process for training in a training set using an evolutionary neural network includes:
initializing an evolutionary neural network and generating an initial population;
calculating the fitness of each individual in the initial population by using a training set according to the fitness function of the evolutionary neural network, and performing optimal solution search by taking the fitness as a criterion;
carrying out evolution operations on the evolutionary neural network until a training termination condition is reached, and outputting an optimal feature extraction method and a neural network model; the evolution operations include selection, crossover and mutation operations.
In one embodiment, the fitness function is:
(fitness function formula, presented as an image in the original publication)
where acc(x_i) represents the classification accuracy of the i-th individual x_i in a cycle, AN represents the maximum number of nodes of the optimized fully-connected network, and SN(x_i) denotes the total number of nodes of the neural network of the i-th individual x_i.
In one embodiment, the crossover operation comprises:
traversing each parent individual and randomly selecting another parent individual from the parent population for pairing to form N cross combinations; n cross combinations are used for generating offspring, and N is the number of individuals in the parent population;
each cross combination randomly generates a mask with the same length as the individual; a mask value of 1 indicates that the child will inherit the gene information of parent 1 in the cross combination, and a mask value of 0 indicates that the child will inherit the gene information of parent 0 in the cross combination.
In one embodiment, the number of training rounds for training the evolutionary neural network using the training set is 1500 epochs, and the number of evolution generations is 200.
In another aspect, an apparatus for identifying a moving target based on an evolutionary neural network is also provided, which includes:
the signal acquisition module is used for acquiring sensing signals of vehicles and pedestrians during movement through the unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals;
the data set construction module is used for constructing a training set and a test set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are different;
the model training module is used for training in a training set by adopting an evolutionary neural network to obtain an optimal feature extraction method and a neural network model;
the recognition and classification module is used for recognizing and classifying the environmental noise, the pedestrians and the mobile vehicles in the test set according to the optimal feature extraction method and the neural network model to obtain recognized mobile targets; the moving objects include pedestrians and moving vehicles.
In still another aspect, a computer device is provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above-mentioned moving object identification method based on the evolutionary neural network when executing the computer program.
In still another aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps of the above-mentioned evolved neural network-based moving object identification method.
One of the above technical solutions has the following advantages and beneficial effects:
according to the moving target identification method, device and equipment based on the evolutionary neural network, after sound signals and ground vibration signals when vehicles and pedestrians move are collected through the unattended sensor system, a training set and a testing set are constructed based on collected signal data, then evolutionary neural network training is conducted through the training set to output an optimal feature extraction method and a neural network model, and finally environmental noise, pedestrians and moving vehicles in the testing set are identified and classified according to the optimal feature extraction method and the neural network model to obtain the identified moving target. Based on the evolutionary neural network, the selection of the optimal feature extraction method and the design of the neural network structure are realized, the obtained optimal feature extraction method and the neural network structure have the minimum feature vector and the lowest network complexity, and the actual measurement shows that the moving target recognition is carried out on the test set by using the obtained minimum feature vector optimal feature extraction method and the neural network, so that the accuracy of the recognition of the target is remarkably improved, and the aim of remarkably improving the recognition capability of the ground moving target is fulfilled.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the conventional technologies of the present application, the drawings used in the descriptions of the embodiments or the conventional technologies will be briefly introduced below, it is obvious that the drawings in the following descriptions are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a moving object identification method based on an evolutionary neural network in one embodiment;
FIG. 2 is a schematic diagram of a training process for an evolved neural network in one embodiment;
FIG. 3 is a diagram illustrating a specific process of training an evolved neural network according to an embodiment;
FIG. 4 is a schematic diagram of the composition of a population in one embodiment;
FIG. 5 is a diagram illustrating relationships between parents and children, in accordance with an embodiment;
FIG. 6 is a schematic representation of genetic information encoding in one embodiment;
FIG. 7 is a diagram showing a scheme of encoding and converting genetic information in one embodiment;
FIG. 8 is a schematic diagram showing an example of a genetic code of length 9 in one embodiment;
FIG. 9 is a diagram illustrating an optimized fully-connected neural network architecture according to an embodiment;
FIG. 10 is a schematic diagram showing the components of a signal acquisition device according to an embodiment;
FIG. 11 is a graph illustrating an accuracy curve and a loss curve for a training set and a test set during a training process, wherein (a) is the accuracy curve and (b) is the loss curve, under an embodiment;
FIG. 12 is a diagram of a confusion matrix for a model on a test set, according to one embodiment;
FIG. 13 is a block diagram illustrating an embodiment of an apparatus for identifying moving objects based on an evolutionary neural network.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should be appreciated that reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
One skilled in the art will appreciate that the embodiments described herein can be combined with other embodiments. The term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The following detailed description of embodiments of the invention will be made with reference to the accompanying drawings.
In one embodiment, as shown in fig. 1, a moving object identification method based on an evolutionary neural network of the present invention includes the following steps S12 to S18:
s12, collecting sensing signals when the vehicle and the pedestrian move through an unattended sensor system and carrying out signal preprocessing; the sensing signals comprise sound signals and ground vibration signals.
It can be understood that, in practical application, the data source may be a sound signal and a ground vibration signal acquired from the unattended sensor system in real time or history when the vehicle and the pedestrian move, and may be acquired directly on line with the unattended sensor system, or may be acquired indirectly through an intermediate device, and the specific acquisition mode may be selected according to the conditions of the practical application scene.
S14, constructing a training set and a testing set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are not the same.
It can be understood that the data set used in the method of the present embodiment is composed of a training set and a test set, and the training set is used for training and iteration of evolution of the evolved neural network to find the required feature extraction method and the neural network structure. The test set is used for identifying the moving target, and the signal data included in the training set and the test set are signal data acquired in batches at different times in the monitoring area.
And S16, training in the training set by adopting the evolutionary neural network to obtain an optimal feature extraction method and a neural network model.
It is understood that Evolutionary Neural Networks (ENNs) are an entirely new network model created by organically fusing both evolutionary computations and neural networks together, and utilize the principles of natural biological evolution to search for neural networks in a feasible domain space that perform well for a given task. In this embodiment, the evolutionary neural network uses a genetic algorithm as an optimization method, uses the selection of a feature extraction method and a fully-connected neural network structure as optimization objects, and uses the highest network classification accuracy and the lowest network complexity as optimization targets. And inputting the obtained training set into an evolutionary neural network for training and evolutionary iteration, and finally outputting an optimal feature extraction method and a neural network model.
S18, according to the optimal feature extraction method and the neural network model, identifying and classifying the environmental noise, the pedestrians and the mobile vehicles in the test set to obtain an identified mobile target; the moving objects include pedestrians and moving vehicles.
It can be understood that after the optimal feature extraction method and the neural network model are obtained, the test set is input into the obtained neural network model and the obtained optimal feature extraction method is used for feature extraction processing, the model extracts image features from the test set, identifies and classifies noise, vehicles and pedestrians in the test set, and finally outputs identification results, such as vehicles and pedestrians in a monitoring area.
According to the moving target identification method based on the evolutionary neural network, after sound signals and ground vibration signals when vehicles and pedestrians move are collected through an unattended sensor system, a training set and a testing set are constructed based on collected signal data, then evolutionary neural network training is carried out through the training set to output an optimal feature extraction method and a neural network model, and finally environmental noise, pedestrians and moving vehicles in the testing set are identified and classified according to the optimal feature extraction method and the neural network model to obtain identified moving targets. Based on the evolutionary neural network, the selection of the optimal feature extraction method and the design of the neural network structure are realized, the obtained optimal feature extraction method and the neural network structure have the minimum feature vector and the lowest network complexity, and the actual measurement shows that the moving target recognition is carried out on the test set by using the obtained minimum feature vector optimal feature extraction method and the neural network, so that the accuracy of the recognition of the target is remarkably improved, and the aim of remarkably improving the recognition capability of the ground moving target is fulfilled.
In one embodiment, the signal pre-processing includes normalizing the signal units and removing the dc component based on a fast fourier transform.
It is understood that the signal preprocessing includes both normalizing the signal units and removing the DC component based on the fast Fourier transform. For the signal-unit normalization part: the ground vibration signal of the geophone in the unattended sensor system (denoted Raw_Signal) is acquired by an AD acquisition card in units of V (volts), and the goal is to convert it into a velocity signal in mm/s (denoted velocity). From the sensitivity of the geophone (e.g., 100 mV/(m/s)), the ground-vibration velocity conversion formula (1) can be obtained. The sound signal collected by the sound sensor through the AD acquisition card (denoted Raw_Signal) is also in units of V, and the goal is to convert it into the voltage signal collected by a standard microphone. From the microphone sensitivity formula (2) (sensitivity denoted Sensitivity) and the amplification factor of the amplifying circuit (e.g., 110 dB), the calculation formula (3) for the standard microphone output signal (denoted Standard_Signal) can be obtained.
velocity_mm/s = Raw_Signal_V / 100    (1)
(formulas (2) and (3), the microphone sensitivity formula and the standard microphone output signal formula, are presented as images in the original publication)
For the DC-component removal part, the DC offset introduced by the acquisition instrument is removed from the signal. First, a fast Fourier transform is applied to the sensing signal whose units have been normalized, and the zero-frequency component is removed; then an inverse fast Fourier transform yields the sound signal and the ground vibration signal with the DC component removed.
Through this preprocessing, standardized sound-signal and ground-vibration-signal data can be used to quickly construct a standardized data set and to improve the efficiency of network training and evolution.
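As an illustration of this preprocessing, a minimal Python/NumPy sketch is given below. Only formula (1) and the FFT-based DC removal are implemented; the sound-channel conversion of formulas (2) and (3) is not reproduced because those formulas appear only as images in the original, and the function names are illustrative rather than part of the patent.

import numpy as np

def remove_dc_fft(signal: np.ndarray) -> np.ndarray:
    """Remove the DC offset by zeroing the zero-frequency FFT bin and
    transforming back, as described in the preprocessing step."""
    spectrum = np.fft.fft(signal)
    spectrum[0] = 0.0                       # zero-frequency (DC) component
    return np.real(np.fft.ifft(spectrum))

def geophone_to_velocity(raw_signal_v: np.ndarray) -> np.ndarray:
    """Formula (1): convert the raw geophone voltage (V) to velocity,
    assuming the example sensitivity quoted above."""
    return raw_signal_v / 100.0

def preprocess(raw_vibration_v: np.ndarray, raw_sound_v: np.ndarray):
    """Unit normalization followed by FFT-based DC removal (assumed order)."""
    vibration = remove_dc_fft(geophone_to_velocity(raw_vibration_v))
    sound = remove_dc_fft(raw_sound_v)      # formulas (2)-(3) omitted here
    return vibration, sound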
In one embodiment, the training set includes sampled signals obtained by sub-sampling sound signals and ground vibration signals acquired at a plurality of different times, and the test set likewise includes sampled signals obtained by sub-sampling sound signals and ground vibration signals acquired at a plurality of different times.
It can be understood that table 1 shows VI (vibration intensity) and SPL (sound pressure level) indexes of environmental noise each time data is collected, temperature and humidity of the environment at the time of collection, and date of collecting data, and table 1 is only one of the data collection examples, and more batches of data can be collected according to identification requirements in practical applications. The data set comprises a training set and a testing set, wherein the training set is a data sample used for model fitting, and the testing set is used for evaluating the generalization ability of the final model and carrying out target recognition. In some embodiments, the optimal feature extraction method and the neural network model obtained by training evolution can be directly utilized, and after the sound signal and the ground vibration signal collected on site are preprocessed, the sound signal and the ground vibration signal are directly input into the neural network model to complete the on-site moving target recognition or the real-time target recognition.
In some embodiments, in order to evaluate the generalization ability of the obtained neural network model more effectively, the training set and the test set are formed from sound and vibration signals acquired at different times. For example, the target acoustic-vibration signals acquired in the first to fourth collection experiments shown in Table 1 form the training set, and the target acoustic-vibration signals acquired in the fifth collection experiment form the test set. The data set may be formed by sub-sampling the acquired sound signal and ground vibration signal with a window length of 4 seconds and a step size of 1 second. Table 2 shows the size of the processed data set in seconds.
TABLE 1
Experiment                     VI/dB   SPL/dB   Temperature   Date
First collection experiment    36.88   25.71    26℃           05.30.2022
Second collection experiment   38.82   26.32    25℃           05.31.2022
Third collection experiment    39.03   24.49    23℃           06.01.2022
Fourth collection experiment   41.99   28.44    23℃           06.11.2022
Fifth collection experiment    40.83   26.69    27℃           06.15.2022
TABLE 2
(Table 2, giving the size of the processed data set in seconds, is presented as an image in the original publication)
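As a sketch of the sub-sampling just described (4-second windows advanced in 1-second steps), the snippet below cuts a long recording into overlapping windows; the 10000 Hz sampling rate is the value given in the experimental setup later in the description, and the function name is illustrative.

import numpy as np

def sub_sample(signal: np.ndarray, fs: int = 10000,
               window_s: float = 4.0, step_s: float = 1.0) -> np.ndarray:
    """Cut a recording into overlapping windows of window_s seconds,
    advanced by step_s seconds, as used to build the training and test sets."""
    win, step = int(window_s * fs), int(step_s * fs)
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# Example: a 10 s recording at 10 kHz yields 7 windows of 4 s each.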
In one embodiment, as shown in fig. 2, the process of training in the training set using the evolutionary neural network includes the following steps S04 to S08:
s04, initializing an evolutionary neural network and generating an initial population;
s06, calculating the fitness of each individual in the initial population by using a training set according to the fitness function of the evolutionary neural network, and performing optimal solution search by taking the fitness as a criterion;
s08, carrying out evolution operation on the evolved neural network until the training termination condition is reached, and outputting an optimal feature extraction method and a neural network model; the evolutionary operations include selection, crossover, and mutation operations.
It can be understood that the target identification method deployed in the unattended sensor system has the characteristics of high identification accuracy and low complexity of a processing flow, which brings great challenges to the structural design of a neural network and the optimization and selection of a feature extraction method. To overcome this problem, the present application proposes the above-mentioned moving target identification based on the evolutionary neural network.
The principle of the method is as follows: the evolutionary neural network comprises four parts, namely an initialization process, an evaluation process, an evolution process and a termination-condition judgment, as shown in fig. 3. The evolutionary neural network involves the concepts of population, individual, genotype, genetic information, parent and offspring. Genetic information refers to the binary coding of the feature extraction method and the neural network structure; a genotype refers to a combination of genetic information codes; an individual refers to a neural network with a genotype; a population refers to a collection of multiple individuals, as shown in FIG. 4; a parent individual refers to a previous-generation individual that passes genetic information to the next generation during evolution; an offspring individual refers to a next-generation individual that receives the genetic information of the previous generation; the parent generation refers to the previous-generation population, and the offspring generation refers to the next-generation population, as shown in FIG. 5.
Specifically, the initialization process: first, the population size, selection probability, crossover probability, mutation probability, number of evolution generations and so on are set according to the performance of the computer and the scale of the data set used in the experiment. The problem to be optimized is then encoded, converting the objects to be optimized into binary genetic codes so that any solution of the optimization problem has a unique corresponding binary code. The genetic information code in this embodiment comprises three parts: the feature selection code, the number of fully-connected network layers, and the number of nodes in each layer, as shown in fig. 6. The genetic information code of the evolutionary neural network is a binary string of length d + M_1 + M_2·M_3, where M_1 and M_2 are parameters to be set that are related to the maximum number of layers of the fully-connected neural network and to the maximum number of nodes per layer, respectively (the corresponding formulas are presented as images in the original publication). The two parameters M_1 and M_2 depend on the complexity of the training data of the fully-connected neural network: the higher the complexity, the larger the maximum number of layers of the evolved neural network and the maximum number of nodes per layer. M_3 is the number of per-layer node-count code segments reserved in the genotype, i.e., the maximum number of fully-connected layers.
The 1st to d-th codes represent the feature selection result, where d is the number of candidate features. Each value in the feature selection code corresponds to one feature: a value of 1 means the feature is selected, and a value of 0 means the feature is discarded. The (d+1)-th to (d+M_1)-th codes represent the number of layers of the fully-connected neural network, decoded as the binary number formed by this segment, i.e., the number of layers equals the sum over i of x_i·2^(M_1−i), where x_i is the i-th value in the segment. The (d + M_1 + (n−1)×M_2 + 1)-th to (d + M_1 + n×M_2)-th codes represent the number of nodes of the n-th fully-connected layer, decoded analogously as the sum over i of x_i·2^(M_2−i), where x_i is the i-th value in the segment.
For example, with the parameters d = 5, M_1 = 4 and M_2 = 4, the total code length is 69. The genetic information coding and its conversion are shown in fig. 7, where the 1st to 5th codes are the feature selection code, the 6th to 9th codes encode the number of fully-connected layers, and the (10 + (n−1)×4)-th to (9 + n×4)-th codes encode the number of nodes of the n-th layer; Feature represents a feature, enable represents selected, and disable represents not selected. The feature selection code 01001 translates to the second and fifth features being selected and the remaining features being discarded. The layer-count code 0011 translates to a 3-layer network. The first-layer node code 1011 translates to 11 nodes in the first layer.
Finally, a series of random binary codes is generated as the initial population; starting from this initial population, the better codes are selected generation by generation, approaching the optimal solution.
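The decoding just described can be sketched as follows, reading each segment as an MSB-first binary number, which matches the worked example above (0011 decodes to 3 layers and 1011 to 11 nodes); the helper names are illustrative, and treating the genotype as a plain list of bits is an assumption.

def bits_to_int(bits):
    """Interpret a bit segment as an unsigned binary number, MSB first."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def decode_genotype(code, d=5, m1=4, m2=4):
    """Split a binary genotype into the feature mask, layer count and
    per-layer node counts, following the layout described above."""
    feature_mask = code[:d]                           # 1 = feature selected
    n_layers = bits_to_int(code[d:d + m1])            # e.g. 0011 -> 3 layers
    nodes = [bits_to_int(code[d + m1 + n * m2 : d + m1 + (n + 1) * m2])
             for n in range(n_layers)]                # e.g. 1011 -> 11 nodes
    return feature_mask, n_layers, nodes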
Evaluation process: the evaluation process calculates the fitness of each individual in the population according to the fitness function and searches for the optimal solution with fitness as the criterion. The fitness function is typically set according to the objective of the optimization. The evolutionary neural network here aims at the highest network classification accuracy and the lowest network complexity, with the classification accuracy weighted more heavily. Therefore, the fitness function comprises two parts, a network classification accuracy score and a network complexity score, and the two parts are assigned different weight coefficients.
Further, the fitness function is as follows:
(fitness function formula, presented as an image in the original publication)
In the formula, acc(x_i) is the classification accuracy achieved by the neural network classifier of the i-th individual x_i in a cycle, AN represents the maximum number of nodes of the optimized fully-connected network, and SN(x_i) denotes the total number of nodes of the neural network of the i-th individual x_i.
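Because the fitness formula itself appears only as an image in the original filing, the exact expression and weights are not reproduced here; the sketch below merely illustrates the stated structure, a classification-accuracy score combined with a complexity score derived from AN and SN(x_i), with the accuracy term weighted more heavily. The weights w_acc and w_size are placeholders, not the patent's values.

def fitness(acc_xi: float, sn_xi: int, an: int,
            w_acc: float = 0.9, w_size: float = 0.1) -> float:
    """Illustrative fitness: higher classification accuracy and a lower total
    node count both increase fitness. The weights are assumptions."""
    complexity_score = 1.0 - sn_xi / an    # fewer nodes -> higher score
    return w_acc * acc_xi + w_size * complexity_score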
The evolution process mainly comprises three parts: selection, crossover and mutation, as shown in figure 3. The selection step eliminates individuals with poor fitness from the population and retains individuals with good fitness. During selection, the population is randomly and repeatedly sampled N times to obtain the genotypes that can be inherited by the next generation (the same genotype may be drawn multiple times), where N is the number of individuals in the population. The probability of an individual being selected is determined by the proportion of its fitness value in the total fitness of the population. Let the population size be M and the fitness of an individual i be f_i; then the probability of its being selected is expressed as:
P_i = f_i / (f_1 + f_2 + ... + f_M)
obviously, the higher the fitness of an individual in a population, the greater the probability of inheritance of a gene to the next generation. Individuals with the selected genotype as parents are subjected to the next crossover and mutation operation.
Further, the crossover operation may specifically include the following processing:
traversing each parent individual and randomly selecting another parent individual from the parent population for pairing to form N cross combinations; n cross combinations are used for generating offspring, and N is the number of individuals in the parent population;
each cross combination randomly generates a mask with the same length as the individual; a mask value of 1 indicates that the child will inherit the gene information of parent 1 in the cross combination, and a mask value of 0 indicates that the child will inherit the gene information of parent 0 in the cross combination.
Specifically, in the crossing process, each parent individual is traversed and one individual is randomly selected from the parent population and paired with the parent individual to form N crossing combinations for generating offspring, wherein N is the number of individuals in the parent population. Each cross-combination randomly generates a 0-1 mask of equal length to the individual. If the mask is 1, the filial generation will inherit the gene information of the parent 1 in the cross combination; if the mask is 0, the offspring will inherit the gene information of the parent 0 in the cross-over combination.
During mutation, a code position of an offspring individual's genetic information is selected with a set small probability and its value is flipped, i.e., 0 becomes 1 and 1 becomes 0. The offspring individuals produced by these three genetic operations form the next-generation population, and the genetic evolution process is repeated in a loop until the set optimization criteria are met. As shown in fig. 8, taking a genetic information code of length 9 as an example and assuming the mask is 000101010, the 4th, 6th and 8th bits are taken from parent 1 and the remaining bits (the 1st, 2nd, 3rd, 5th, 7th and 9th) are taken from parent 2 to constitute the gene of the next generation; Mask Code represents the mask, Parent Generation represents the parents, Child Generation represents the child, Mutate represents mutation, and Crossover represents crossover. Assuming that position 5 is then chosen as the mutation site, the gene code of the final offspring is 111011110. In FIG. 8, the 4th, 6th and 8th positions of the offspring code are inherited from parent individual 1, the 1st, 2nd, 3rd, 7th and 9th positions are inherited from parent individual 2, and the 5th position is the mutated bit.
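The mask-based crossover and bit-flip mutation described above can be sketched as follows; the 0.03 mutation probability is the value used later in the implementation example, and the function names are illustrative.

import random

def crossover(parent1, parent0):
    """Uniform crossover: a random 0/1 mask picks, per position, whether the
    child takes the gene of parent 1 (mask = 1) or parent 0 (mask = 0)."""
    mask = [random.randint(0, 1) for _ in parent1]
    return [g1 if m == 1 else g0 for m, g1, g0 in zip(mask, parent1, parent0)]

def mutate(child, p_mut=0.03):
    """Flip each bit independently with a small probability."""
    return [1 - bit if random.random() < p_mut else bit for bit in child]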
Optimization result: the optimization result of the evolutionary neural network comprises the selected feature extraction method and the structure of the fully-connected neural network. Through the above processing, the training and evolution of the evolutionary neural network can be realized effectively.
In one embodiment, the number of training rounds for training the evolutionary neural network using the training set is 1500 epochs, and the number of evolution generations is 200.
Specifically, the above-mentioned data set is used for the fitness calculation in the evaluation process of the evolutionary neural network, and the implementation process is as follows (a code sketch of this loop is given after step 5):
1) 120 features of the training set and test set data are calculated separately and formed into feature vectors.
2) The parameters of the evolutionary neural network are set: the population size is 50, the crossover probability is 0.8, the mutation probability is 0.03, and the number of evolution generations is 200. An initial population is randomly generated.
3) For the genetic information in the population, the feature vector and the neural network corresponding to each individual are generated, and the fitness of each individual is calculated. During the fitness calculation, the training set is used to train the neural network for 1500 epochs, and the classification score of the trained neural network on the test set is used as the classification-performance score in the fitness calculation. One epoch means that all the data are fed through the network once, completing one forward calculation and one backward propagation pass.
4) Selection, crossover and mutation are applied to the parent population to generate the offspring population.
5) It is judged whether the requirement of 200 evolution generations has been met. If it is met, the individual with the highest fitness is output; if not, the process returns to step 3).
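Putting the previous sketches together, the outer loop of steps 1) to 5) could look like the following; train_and_score is a placeholder assumed to train an individual's network for 1500 epochs and return its accuracy together with its node counts, and the loop reuses the illustrative fitness, select, crossover, mutate and decode_genotype helpers sketched above.

import random

def train_and_score(decoded, train_set, test_set, epochs=1500):
    """Placeholder: build the decoded network, train it for `epochs` on the
    training set and return (accuracy, total_nodes, max_nodes)."""
    raise NotImplementedError

def evolve(train_set, test_set, pop_size=50, generations=200,
           p_cross=0.8, p_mut=0.03, code_len=69):
    # Step 2): random initial population of binary genotypes.
    population = [[random.randint(0, 1) for _ in range(code_len)]
                  for _ in range(pop_size)]
    best, best_score = None, float("-inf")
    for _ in range(generations):                      # step 5): generation limit
        # Step 3): decode each genotype, train its network and compute fitness.
        scores = [fitness(*train_and_score(decode_genotype(g), train_set, test_set))
                  for g in population]
        for g, s in zip(population, scores):
            if s > best_score:
                best, best_score = g, s
        # Step 4): selection, crossover and mutation produce the offspring.
        parents = select(population, scores)
        children = []
        for p1 in parents:
            p0 = random.choice(parents)
            child = crossover(p1, p0) if random.random() < p_cross else list(p1)
            children.append(mutate(child, p_mut))
        population = children
    return decode_genotype(best)                      # fittest individual found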
After 200 generations of evolution, the individual with the best fitness uses 1 ground-vibration-signal feature and 1 sound-signal feature, and its neural network has 2 layers. The selected features are the kurtosis (denoted kurtosis, formula (6)) and the vibration energy (denoted Vibration energy, formula (7)); the optimized fully-connected neural network structure is shown in fig. 9.
(formulas (6) and (7), defining the kurtosis and the vibration energy, are presented as images in the original publication)
where x_i is the i-th data point of x and n is the length of signal x. The fully-connected neural network uses the cross-entropy loss as its objective function, so the classifier loss function of the fully-connected neural network is:
(cross-entropy loss formula, presented as an image in the original publication)
where N is the number of samples, C is the number of target classes, t_ij represents the probability that the i-th sample belongs to class j, and Θ represents the set of all network parameters.
Through the training evolution, the applicable optimal feature extraction method and the neural network structure can be obtained with the minimum calculation cost, and the calculation resources and the time cost are further saved.
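Since formulas (6) and (7) appear only as images in the original, the sketch below uses the standard definitions of kurtosis and signal energy as an assumed reading of the two selected features; which feature is applied to the vibration channel and which to the sound channel is likewise an assumption.

import numpy as np

def kurtosis(x: np.ndarray) -> float:
    """Standard kurtosis of signal x (assumed reading of formula (6))."""
    x = np.asarray(x, dtype=float)
    centered = x - x.mean()
    return float(np.mean(centered ** 4) / (np.mean(centered ** 2) ** 2))

def vibration_energy(x: np.ndarray) -> float:
    """Sum of squared samples (assumed reading of formula (7))."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def feature_vector(vibration_window: np.ndarray, sound_window: np.ndarray) -> np.ndarray:
    """Two-element feature vector fed to the optimized 2-layer network;
    the channel-to-feature pairing here is an assumption for illustration."""
    return np.array([vibration_energy(vibration_window), kurtosis(sound_window)])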
In one example, one of the experimental test examples is provided for clarity and understanding of the above methods of the present application. It should be noted that the experimental test examples in the present application are only used for assisting the description and understanding, and are not intended to limit the implementation of the above-mentioned method in the present application.
Evaluation indexes: in an unattended sensor system, the quality of a model is generally evaluated using three indexes: accuracy, false alarm rate and missed-report (false negative) rate. The accuracy is the proportion of all correctly predicted samples in the total number of samples, calculated as ACC = (TP + TN)/(TP + TN + FP + FN). Here, correctly classifying an intrusion event is called a true positive (TP), correctly classifying a noise sample is called a true negative (TN), a missed intrusion event is called a false negative (FN), and misjudged noise is called a false positive (FP). The false alarm rate is the proportion of samples in which noise is misjudged as an intrusion event, calculated as FAR = FP/(TP + TN). The missed-report rate is the proportion of samples in which an intrusion event is misjudged as noise, calculated as UR = FN/(TP + TN).
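The three indexes can be computed directly from the confusion-matrix counts; the sketch below follows the formulas exactly as given in the text above (note that FAR and UR are normalized by TP + TN here, as written, rather than by the more common FP + TN and TP + FN denominators).

def evaluation_indexes(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, false alarm rate and missed-report rate, using the
    definitions given in the description above."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    far = fp / (tp + tn)   # false alarm rate, as defined in the text
    ur = fn / (tp + tn)    # missed-report (false negative) rate, as defined in the text
    return acc, far, ur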
Data set: the data set used in this example was collected in a single sound and vibration signal target identification experiment. The experimental sites were located on a campus and various experiments were conducted over a two week period. The signal acquisition device is placed on one side of a certain road in the campus, and the pedestrian and the vehicle move along the road, and the pedestrian moves in the scope apart from unmanned sensor 15m, and the vehicle moves in the scope apart from unmanned sensor 30 m. As shown in fig. 10, the signal acquisition device is composed of a sensor module, an AD signal acquisition module, and a notebook computer. The sensor module includes a geophone and an acoustic sensor. The natural frequency of the geophone is 4.5Hz, and the sensitivity is 100mv/m/s. The acoustic transducer includes an analog MEMS microphone with a sensitivity of-42 dbv and an amplifier circuit with a amplification of 110 db. The AD signal acquisition module is a four-way AD acquisition card with the acquisition precision of 24 bits, and the sampling frequency is set to 10000Hz. The types of data collected include pedestrians, small vehicles and background noise (no target).
The detection distance of the unattended sensor is closely related to the noise intensity of the distributed area, the warning distance of the unattended sensor is shorter in a noisy urban area, and the warning distance of the unattended sensor is longer in a quiet rural area. For a vibration signal, its noise level is usually described in terms of vibration intensity. For the Sound signal, sound Pressure Level (SPL) is adopted as a measure unit of the noise level. The vibration intensity VI formula and the sound pressure level SPL are defined as follows:
VI = 20·lg(S_RMS / S_ref)
SPL = 20·lg(P_RMS / P_ref)
where S_RMS is the root mean square of the vibration signal and S_ref is the relative reference value of the vibration intensity, taken as 10e-5; P_RMS is the root mean square of the sound signal and P_ref is the relative reference value of the sound pressure level, also taken as 10e-5.
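A sketch of the two noise-level measures is given below, assuming the 20·lg(RMS/reference) decibel form shown above; the reference value of 1e-5 is one reading of the "10e-5" quoted in the text and is an assumption.

import numpy as np

def level_db(signal: np.ndarray, reference: float = 1e-5) -> float:
    """20*lg(RMS/reference): applied to a vibration signal this gives VI,
    applied to a sound signal it gives SPL."""
    rms = np.sqrt(np.mean(np.square(np.asarray(signal, dtype=float))))
    return float(20.0 * np.log10(rms / reference))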
Experimental results: the training set of the acquired data set is used as the input of the optimized model, and the model is trained for 500 epochs. The model essentially reaches a steady state after 100 epochs. The classification ability of the trained model is then evaluated on the data of the test set. The median result of 11 repeated runs is selected for discussion. FIG. 11 shows the accuracy curves (a) and loss curves (b) of the model on the training set and the test set during training. FIG. 12 is the confusion matrix of the model on the test set.
Statistics show that the overall recognition accuracy over the three categories is 98.04%. For individual target types, the recognition and classification accuracies of small vehicles and pedestrians are high, exceeding 99%. The recognition and classification accuracy of noise is 94.79%; the false alarm rate of the model is 1.71% and the missed-report rate is 0.24%.
Table 3 compares the recognition and classification performance of advanced machine learning algorithms proposed by other scholars with that of the method proposed in the present application, in the field of ground moving target recognition based on vibration signals. These algorithms include a genetic-algorithm-optimized support vector machine (GA-SVM), an improved BP neural network, and Vib-CNN. The GA-SVM is a traditional machine learning method, and the Vib-CNN is an end-to-end deep learning method. The GA-SVM uses the wavelet-decomposition energy ratio, zero-crossing rate, mean value, peak value and waveform factor as its feature extraction method; the improved BP neural network uses the peak value, zero-crossing rate and the ratio of high-frequency to low-frequency energy as its feature extraction method; and the Vib-CNN uses LFCC (linear frequency cepstral coefficients) as its feature extraction method. To evaluate the performance of these methods fairly, each model was run in 11 experiments, each trained for 200 epochs. The median test-set classification accuracy of the 11 experiments is taken as the classification effect of the model for discussion, and the running time of each algorithm on the test set is recorded.
TABLE 3
Model                                   Accuracy   False alarm rate   Missed-report rate   Operation time
GA-SVM                                  98.56%     0.96%              0.48%                20.27 s
Improved BP neural network              97.47%     0.49%              2.04%                18.37 s
Vib-CNN                                 98.16%     0.43%              1.41%                20083 ms
The method of the present application   98.21%     1.31%              0.49%                4.54 s
The classification effect of each model is shown in Table 3. The classification effects of the models on the test set are similar, but they differ considerably in operation time. The operation time of the Vib-CNN is far longer than that of the other models, while its classification accuracy is not noticeably different from theirs. The classification performance and operation time of the GA-SVM and the improved BP neural network on the test set are similar, with the GA-SVM slightly higher in both classification accuracy and operation time. Compared with traditional machine learning methods and other deep learning methods, the moving target identification method based on the evolutionary neural network has lower algorithm complexity and higher identification accuracy, and can be deployed on hardware with lower cost and lower power consumption.
It should be understood that although the various steps in the flow diagrams of fig. 1-3 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps of fig. 1-3 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 13, a moving object recognition apparatus 100 based on an evolutionary neural network is provided, which includes a signal acquisition module 11, a data set construction module 13, a model training module 15 and a recognition classification module 17. The signal acquisition module 11 is used for acquiring sensing signals of vehicles and pedestrians during movement through an unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals. The data set construction module 13 is used for constructing a training set and a test set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are not the same. The model training module 15 is configured to train in a training set by using an evolved neural network to obtain an optimal feature extraction method and a neural network model. The recognition and classification module 17 is used for recognizing and classifying environmental noise, pedestrians and moving vehicles in the test set according to the optimal feature extraction method and the neural network model to obtain recognized moving targets; the moving objects include pedestrians and moving vehicles.
According to the moving target recognition device 100 based on the evolutionary neural network, after sound signals and ground vibration signals generated when vehicles and pedestrians move are collected through an unattended sensor system, a training set and a testing set are constructed based on collected signal data, then evolutionary neural network training is carried out through the training set to output an optimal feature extraction method and a neural network model, and finally environmental noise, pedestrians and moving vehicles in the testing set are recognized and classified according to the optimal feature extraction method and the neural network model to obtain a recognized moving target. Based on the evolutionary neural network, selection of an optimal feature extraction method and design of a neural network structure are achieved, the optimal feature extraction method and the neural network structure have the minimum feature vector and the network complexity is the lowest, and actual measurement shows that moving target identification is carried out on a test set by using the minimum feature vector optimal feature extraction method and the neural network, the accuracy of target identification is remarkably improved, and therefore the purpose of remarkably improving the identification capability of ground moving targets is achieved.
In one embodiment, the signal pre-processing includes normalizing the signal units and removing the dc component based on a fast fourier transform.
In one embodiment, the training set includes sampled signals obtained by sub-sampling sound signals and ground vibration signals acquired at a plurality of different times, and the test set likewise includes sampled signals obtained by sub-sampling sound signals and ground vibration signals acquired at a plurality of different times.
In one embodiment, when training in the training set with the evolutionary neural network, the model training module 15 is specifically configured to initialize the evolutionary neural network and generate an initial population; calculate the fitness of each individual in the initial population using the training set according to the fitness function of the evolutionary neural network, and perform the optimal-solution search with fitness as the criterion; and carry out the evolution operations on the evolutionary neural network until the training termination condition is reached, outputting the optimal feature extraction method and the neural network model; the evolution operations include selection, crossover and mutation operations.
In one embodiment, the fitness function is:
(fitness function formula, presented as an image in the original publication)
where acc(x_i) represents the classification accuracy of the i-th individual x_i in a cycle, AN represents the maximum number of nodes of the optimized fully-connected network, and SN(x_i) denotes the total number of nodes of the neural network of the i-th individual x_i.
In one embodiment, the crossover operation comprises:
traversing each parent individual and randomly selecting another parent individual from the parent population for pairing to form N cross combinations; n cross combinations are used for generating offspring, and N is the number of individuals in the parent population;
each cross combination randomly generates a mask with the same length as the individual; a mask value of 1 indicates that the child will inherit the gene information of parent 1 in the cross combination, and a mask value of 0 indicates that the child will inherit the gene information of parent 0 in the cross combination.
In one embodiment, the number of training rounds for training the evolutionary neural network using the training set is 1500 epochs, and the number of evolution generations is 200.
For specific limitations of the moving object recognition apparatus 100 based on the evolved neural network, reference may be made to the corresponding limitations of the moving object recognition method based on the evolved neural network, and details thereof are not repeated here. The modules in the above-mentioned moving object identifying device 100 based on the evolutionary neural network may be wholly or partially implemented by software, hardware and their combination. The modules may be embedded in a hardware form or a device independent of a specific data processing function, or may be stored in a memory of the device in a software form, so that a processor can call and execute operations corresponding to the modules, where the device may be, but is not limited to, various computer devices existing in the art.
In one embodiment, there is also provided a computer device comprising a memory and a processor, the memory storing a computer program, the processor when executing the computer program implementing the steps of: collecting sensing signals of vehicles and pedestrians during movement through an unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals; constructing a training set and a testing set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are different; training in a training set by adopting an evolutionary neural network to obtain an optimal feature extraction method and a neural network model; according to the optimal feature extraction method and the neural network model, identifying and classifying the environmental noise, the pedestrians and the moving vehicles in the test set to obtain an identified moving target; the moving objects include pedestrians and moving vehicles.
It is to be understood that the computer device may include, in addition to the aforementioned memory and processor, other software and hardware components not listed in this specification, which may be determined according to the model of the specific computer device in different application scenarios, and the detailed description is not repeated in this specification.
In one embodiment, the processor when executing the computer program may further implement the additional steps or sub-steps in the embodiments of the above-mentioned evolved neural network-based moving object recognition method.
In one embodiment, there is also provided a computer readable storage medium having a computer program stored thereon, the computer program when executed by a processor implementing the steps of: collecting sensing signals of vehicles and pedestrians during movement through an unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals; constructing a training set and a testing set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are different; training in a training set by adopting an evolutionary neural network to obtain an optimal feature extraction method and a neural network model; according to the optimal feature extraction method and the neural network model, identifying and classifying the environmental noise, the pedestrians and the mobile vehicles in the test set to obtain identified mobile targets; the moving objects include pedestrians and moving vehicles.
In one embodiment, the computer program, when executed by the processor, may further implement the additional steps or sub-steps in the embodiments of the evolved neural network-based moving object recognition method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus DRAM (RDRAM), and interface DRAM (DRDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features involves no contradiction, it should be considered within the scope of this specification.
The above examples express only several embodiments of the present application, and although their description is relatively specific and detailed, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, and all such variations and modifications fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A moving target identification method based on an evolutionary neural network is characterized by comprising the following steps:
collecting sensing signals of vehicles and pedestrians during movement through an unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals;
constructing a training set and a testing set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are different;
training in the training set by adopting an evolutionary neural network to obtain an optimal feature extraction method and a neural network model;
according to the optimal feature extraction method and the neural network model, identifying and classifying the environmental noise, the pedestrians and the moving vehicles in the test set to obtain identified moving targets; the moving targets include pedestrians and moving vehicles.
2. The evolutionary neural network-based moving target identification method of claim 1, wherein the signal preprocessing comprises normalizing signal units and removing DC components based on the fast Fourier transform.
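For illustration only and not as part of the claimed subject matter, a minimal sketch of such preprocessing is given below; peak normalization is used here as a stand-in for "normalizing signal units", which is an assumption, while the DC component is removed by zeroing the zero-frequency bin of the FFT:

```python
import numpy as np

def preprocess(signal: np.ndarray) -> np.ndarray:
    """Illustrative preprocessing: scale the trace to unit amplitude and
    suppress its DC component in the frequency domain."""
    # Normalize signal units (peak normalization is an assumption; the claim
    # does not fix a particular normalization).
    peak = np.max(np.abs(signal))
    if peak > 0:
        signal = signal / peak

    # Remove the DC component by zeroing the zero-frequency (0 Hz) FFT bin.
    spectrum = np.fft.rfft(signal)
    spectrum[0] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))
```

Zeroing the 0 Hz bin is equivalent to subtracting the mean of the trace, so the sketch keeps the rest of the spectrum untouched.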
3. The evolutionary neural network-based moving target identification method according to claim 1 or 2, wherein the training set comprises sampled signals obtained by sub-sampling the sound signal and the ground vibration signal acquired at different times, and the test set comprises sampled signals obtained by sub-sampling the sound signal and the ground vibration signal acquired at different times.
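As one possible, non-authoritative reading of this claim, data-set construction might be sketched as follows; the split rule (holding out particular acquisition times for the test set) and the sub-sampling factor are assumptions, not taken from the filing:

```python
import numpy as np

def build_datasets(recordings, times, test_times, factor=2):
    """Illustrative split: recordings acquired at the times listed in
    `test_times` form the test set, the remaining recordings form the
    training set, and every recording is sub-sampled by keeping one out
    of every `factor` samples."""
    train, test = [], []
    for rec, t in zip(recordings, times):
        sub = np.asarray(rec)[::factor]          # sub-sampled signal
        (test if t in test_times else train).append(sub)
    return train, test
```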
4. The method for identifying moving objects based on the evolutionary neural network as claimed in claim 3, wherein the process of training in the training set by using the evolutionary neural network comprises:
initializing an evolutionary neural network and generating an initial population;
calculating the fitness of each individual in the initial population by using the training set according to a fitness function of an evolutionary neural network, and performing optimal solution search by taking the fitness as a criterion;
carrying out evolutionary operations on the evolutionary neural network until a training termination condition is reached, and outputting the optimal feature extraction method and the neural network model; the evolutionary operations include selection, crossover, and mutation operations.
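For illustration only and not as part of the claimed subject matter, a minimal genetic-algorithm skeleton following the above steps might look as follows, assuming binary-coded individuals; the population size, mutation rate, fitness-proportional selection, and the fixed generation budget used as the termination condition (set to 200 to echo claim 7) are assumptions, and the actual encoding of the feature extraction method and network structure is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness_fn, genome_len, pop_size=20, generations=200, p_mut=0.05):
    """Skeleton of the claimed loop: initialize a population, score it with
    the fitness function, then apply selection, crossover and mutation until
    the generation budget is exhausted. Assumes non-negative fitness values."""
    pop = rng.integers(0, 2, size=(pop_size, genome_len))   # initial population
    for _ in range(generations):
        fit = np.array([fitness_fn(ind) for ind in pop], dtype=float)
        # Selection: fitness-proportional choice of parents.
        probs = fit / fit.sum() if fit.sum() > 0 else None
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        # Crossover: uniform crossover between neighbouring parents via a random mask.
        mask = rng.integers(0, 2, size=parents.shape, dtype=bool)
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Mutation: flip each gene with probability p_mut.
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    fit = np.array([fitness_fn(ind) for ind in pop], dtype=float)
    return pop[int(np.argmax(fit))]                          # best individual found
```

The returned genome would then be decoded into the selected feature extraction method and the fully-connected network structure, which is outside the scope of this sketch.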
5. The evolutionary neural network-based moving target identification method of claim 4, wherein the fitness function is:
[The fitness function is given as a formula image (FDA0003833762250000021) in the original filing.]
wherein Acc(x_i) represents the recognition accuracy of the ith individual x_i in one cycle, AN represents the maximum number of nodes of the optimized fully-connected layer network, and SN(x_i) denotes the total number of nodes of the neural network of the ith individual x_i.
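Since the exact expression is only available as an image in the filing, the following is a hedged sketch of a fitness of that general shape, rewarding the accuracy Acc(x_i) and penalizing the node count SN(x_i) relative to the maximum AN; the specific combination shown is an assumption, not the claimed formula:

```python
def fitness(accuracy: float, total_nodes: int, max_nodes: int) -> float:
    """Hedged illustration of the general shape described in claim 5: higher
    classification accuracy Acc(x_i) increases fitness, while a larger total
    node count SN(x_i), relative to the allowed maximum AN, decreases it.
    This particular combination is an assumption."""
    return accuracy + (1.0 - total_nodes / max_nodes)
```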
6. The evolutionary neural network-based moving object recognition method of claim 4, wherein the interleaving operation comprises:
traversing each parent individual and randomly selecting another parent individual from the parent population for pairing to form N cross combinations; the N cross combinations are used for generating offspring, and N is the number of individuals in the parent population;
each said crossover combination randomly generates a mask of equal length to an individual; a mask value of 1 indicates that the offspring inherits the gene information of parent 1 at that position, and a mask value of 0 indicates that the offspring inherits the gene information of parent 0 at that position.
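A minimal sketch of this mask-driven crossover, assuming binary-coded individuals stored as NumPy arrays (the rule for choosing the second parent is simplified to a uniform random pick), could be:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_crossover(parents: np.ndarray) -> np.ndarray:
    """Illustration of claim 6: each parent is paired with another randomly
    chosen parent (N pairs for N parents); each pair draws a random binary
    mask of the same length as an individual, and the offspring inherits the
    gene from the first parent where the mask is 1 and from the second
    parent where the mask is 0."""
    n, genome_len = parents.shape
    children = np.empty_like(parents)
    for i in range(n):
        # Pair parent i with another parent chosen at random (requires n >= 2).
        j = rng.choice([k for k in range(n) if k != i])
        mask = rng.integers(0, 2, size=genome_len).astype(bool)
        children[i] = np.where(mask, parents[i], parents[j])
    return children
```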
7. The evolutionary neural network-based moving target identification method of claim 4, wherein the number of training rounds for training the evolutionary neural network by using the training set is 1500 epochs, and the number of evolutionary generations is 200.
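As an illustrative configuration only, these values might be expressed as:

```python
# Hypothetical training configuration reflecting the values recited in claim 7;
# the variable names are illustrative and not taken from the filing.
TRAINING_EPOCHS = 1500           # back-propagation epochs per candidate network
EVOLUTION_GENERATIONS = 200      # generations of the evolutionary search
```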
8. An apparatus for identifying a moving object based on an evolutionary neural network, comprising:
the signal acquisition module is used for acquiring sensing signals of vehicles and pedestrians during movement through the unattended sensor system and performing signal preprocessing; the sensing signals comprise sound signals and ground vibration signals;
the data set construction module is used for constructing a training set and a test set according to the sound signal and the ground vibration signal; the signal data in the training set and the test set are different;
the model training module is used for training in the training set by adopting an evolutionary neural network to obtain an optimal feature extraction method and a neural network model;
the recognition and classification module is used for recognizing and classifying the environmental noise, the pedestrians and the moving vehicles in the test set according to the optimal feature extraction method and the neural network model to obtain identified moving targets; the moving targets include pedestrians and moving vehicles.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202211082225.7A 2022-09-06 2022-09-06 Moving target identification method, device and equipment based on evolutionary neural network Pending CN115438765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211082225.7A CN115438765A (en) 2022-09-06 2022-09-06 Moving target identification method, device and equipment based on evolutionary neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211082225.7A CN115438765A (en) 2022-09-06 2022-09-06 Moving target identification method, device and equipment based on evolutionary neural network

Publications (1)

Publication Number Publication Date
CN115438765A true CN115438765A (en) 2022-12-06

Family

ID=84247685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211082225.7A Pending CN115438765A (en) 2022-09-06 2022-09-06 Moving target identification method, device and equipment based on evolutionary neural network

Country Status (1)

Country Link
CN (1) CN115438765A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116299705A (en) * 2023-03-21 2023-06-23 北京航天极峰科技有限公司 Vibration detection method and system
CN116738376A (en) * 2023-07-06 2023-09-12 广东筠诚建筑科技有限公司 Signal acquisition and recognition method and system based on vibration or magnetic field awakening
CN116738376B (en) * 2023-07-06 2024-01-05 广东筠诚建筑科技有限公司 Signal acquisition and recognition method and system based on vibration or magnetic field awakening

Similar Documents

Publication Publication Date Title
CN115438765A (en) Moving target identification method, device and equipment based on evolutionary neural network
CN110570613A (en) Fence vibration intrusion positioning and mode identification method based on distributed optical fiber system
Scarpiniti et al. Deep Belief Network based audio classification for construction sites monitoring
CN109473119B (en) Acoustic target event monitoring method
CN104751580A (en) Distributed optical fiber sensing signal mode identifying method and system
KR102276964B1 (en) Apparatus and Method for Classifying Animal Species Noise Robust
CN111859010B (en) Semi-supervised audio event identification method based on depth mutual information maximization
CN110751108A (en) Subway distributed vibration signal similarity determination method
CN114169374B (en) Cable-stayed bridge stay cable damage identification method and electronic equipment
Küçükbay et al. Use of acoustic and vibration sensor data to detect objects in surveillance wireless sensor networks
CN111209853B (en) Optical fiber sensing vibration signal mode identification method based on AdaBoost-ESN algorithm
CN116129314A (en) Perimeter intrusion recognition method based on multi-source information fusion
Bin et al. Moving target recognition with seismic sensing: A review
Ye et al. A deep learning-based method for automatic abnormal data detection: Case study for bridge structural health monitoring
CN112329974A (en) LSTM-RNN-based civil aviation security event behavior subject identification and prediction method and system
Wang et al. A novel underground pipeline surveillance system based on hybrid acoustic features
Noumida et al. Deep learning-based automatic bird species identification from isolated recordings
Smailov et al. A Novel Deep CNN-RNN Approach for Real-time Impulsive Sound Detection to Detect Dangerous Events
CN114266271A (en) Distributed optical fiber vibration signal mode classification method and system based on neural network
JP2019139651A (en) Program, device and method for classifying unknown multi-dimensional vector data groups into classes
Tang et al. Deep CNN framework for environmental sound classification using weighting filters
Khan et al. Prior recognition of flash floods: Concrete optimal neural network configuration analysis for multi-resolution sensing
Sun et al. Vehicle acoustic and seismic synchronization signal classification using long-term features
Albaji et al. Investigation on Machine Learning Approaches for Environmental Noise Classifications
RU59842U1 (en) DEVICE FOR SEISMOACOUSTIC DETECTION AND CLASSIFICATION OF MOVING OBJECTS (OPTIONS)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination