CN112380959A - Univariate time series classification method based on graph neural network - Google Patents


Info

Publication number
CN112380959A
Authority
CN
China
Prior art keywords
time series
graph
data
network
data set
Prior art date
Legal status
Withdrawn
Application number
CN202011250933.8A
Other languages
Chinese (zh)
Inventor
宣琦
裘坤锋
周锦超
项靖阳
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202011250933.8A
Publication of CN112380959A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a univariate time series classification method based on a graph neural network. Univariate time series data are first processed by several one-dimensional convolutional layers with different convolution kernel sizes, the resulting feature vectors are sparsified with the ReLU activation function, a weighted undirected network graph is then constructed from the obtained feature vectors according to a fixed rule, and finally the resulting network graph is classified with a network graph classification model from the graph neural network field, thereby accomplishing the classification task on a univariate time series data set. The classification accuracy is improved: on the Bonn epilepsy electroencephalogram univariate time series data set, the classification accuracy achieved by the method is better than that obtained with the visibility graph networking method, and the processing speed is markedly higher.

Description

Univariate time series classification method based on graph neural network
Technical Field
The invention relates to network science, data mining and data analysis technologies, in particular to a univariate time series classification method based on a graph neural network.
Background
A time series is a sequence of data points in the time domain that carries important information; adjacent data points usually lie at the same time interval. In general, time series data can be divided into univariate and multivariate time series: the elements of a univariate time series are measurements of a single variable, so each time point corresponds to exactly one value, while in a multivariate time series each time point corresponds to several values. Time series are ubiquitous and widely used, for example in weather forecasting, financial forecasting, electrocardiogram classification, electroencephalogram classification and radio modulation signal classification, and applications on time series can be broadly divided into prediction tasks and classification tasks. The present invention mainly studies the classification problem on univariate time series.
For the classification of time series, time series data differ fundamentally from traditional spatial point data: the value at each time point corresponds to one dimension of a Euclidean space, and the correlations between time points carry important information, so traditional Euclidean-distance-based classification algorithms cannot be applied well to time series. Researchers therefore proposed a time series distance measure called dynamic time warping and combined it with traditional machine learning classifiers such as random forests and support vector machines; this was an early time series classification approach. Another approach extracts statistics of the time series as features and then classifies them with a classification model. However, both traditional methods require manual design, their implementation is relatively complex, they place high demands on the domain knowledge of the researchers involved, and their classification accuracy is not very good.
With the development of deep learning, and in view of the successful application of convolutional neural networks in the image field, researchers have applied convolutional neural networks to time series data and achieved better results. Others have classified time series data with recurrent neural networks, with accuracy better than the traditional methods. In addition, with the development of the complex network field, some researchers map time series into network graphs using visibility graph networking methods, classify the network graphs with methods from the network graph classification field, and thereby classify the time series.
Methods that map time series into network graphs and then classify the graphs are no longer rare, but most of these mapping methods are inflexible and the overall classification process is complex. One example is the technical scheme disclosed in patent application No. CN201610889168.1, "An electroencephalogram signal analysis method based on a complex network and application thereof". That method classifies an electroencephalogram sequence data set by first preprocessing the raw EEG signals with normalization and filtering, constructing a limited penetrable visibility graph complex network for each multi-scale EEG signal, then computing and extracting characteristic indices of each network graph, and finally classifying the EEG sequences with a support vector machine classifier from machine learning.
Disclosure of Invention
The invention aims to provide a univariate time series classification method based on a graph neural network. With this method, end-to-end signal classification is achieved simply by feeding the univariate time series to the constructed graph neural network model, without requiring expert knowledge of the relevant time series field or of characteristic indices of network graphs. Preliminary features of the time series are extracted automatically by several one-dimensional convolutions and a nonlinear activation function; a feature matrix is then constructed from the obtained features to yield a network graph; and finally the network graph is classified by a classical network graph classification model. This reduces the computational complexity of existing methods that map a time series into a network graph and then classify the graph, and improves the classification accuracy.
In order to achieve the purpose, the invention provides the following scheme:
the invention discloses a univariate time series classification method based on a graph neural network, which comprises the following steps of:
s1, collecting a sample data set, and carrying out normalization processing to obtain time sequence data;
s2, constructing k-1 one-dimensional convolutional layers, training the one-dimensional convolutional layers to obtain trained one-dimensional convolutional layers, and processing the time sequence data through the trained one-dimensional convolutional layers and a nonlinear activation function to obtain k-1 characteristic sequences, wherein k is more than or equal to 2;
the one-dimensional convolutional layer comprises a plurality of convolutional kernels, the length of each convolutional kernel is m, m is an integer, and m is more than or equal to 2 and less than or equal to k;
s3, arranging and combining the characteristic sequences to form a characteristic matrix, and mapping the sample data set into a network graph based on the characteristic matrix;
and S4, training the network graph classification model to obtain the trained network graph classification model, and classifying the network graph based on the trained network graph classification model.
Preferably, the classification is carried out by taking the Bonn epilepsy electroencephalogram data set as a sample data set.
Preferably, the normalization processing method is as follows:
s1.1, establishing a sample data set, wherein the sample data set comprises the number of time points and numerical values corresponding to the time points;
and S1.2, performing normalization processing based on the maximum sample data value and the minimum sample data value of the sample data set.
Preferably, the step size of the one-dimensional convolutional layer is 1.
Preferably, the network graph has N nodes, and the number of the nodes is equal to the length of the time series;
the size of the feature matrix is N × N.
Preferably, the feature matrix is composed of feature elements; the i-th feature element obtained by processing the time series with a one-dimensional convolutional layer of convolution kernel length m is placed in the i-th row, (i+m-1)-th column and in the (i+m-1)-th row, i-th column of the feature matrix.
Preferably, the network graph classification model is DiffPool.
Preferably, the one-dimensional convolutional layer and the network graph classification model are capable of being trained together.
The invention discloses the following technical effects:
the method makes full use of the self-learning ability of the neural network, enables the time sequence to be mapped into a suitable network map in a learning way, fully retains the characteristic information contained in the original time sequence in the mapping process, reduces the operation complexity, accelerates the processing speed and improves the classification precision compared with a visual map networking method for fixedly converting the time sequence into the network map, and on the single variable time sequence data set of the Bonn epilepsy electroencephalogram, the classification precision achieved by the method is superior to that obtained by the visual map networking method, and the processing speed is obviously improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a diagram of the method steps of the present invention;
FIG. 2 illustrates the one-dimensional convolution and DiffPool classification of time series data according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a univariate time series classification method based on a graph neural network, explained here using a Bonn epilepsy electroencephalogram sequence classification task as an example. The Bonn epilepsy electroencephalogram time series data set is first processed by several one-dimensional convolutional layers with different convolution kernel sizes, the obtained feature vectors are sparsified with the ReLU activation function, a weighted undirected network graph is constructed from the feature vectors according to a fixed rule, and finally the obtained network graph is classified with a network graph classification model from the graph neural network field, thereby accomplishing the classification task on the Bonn epilepsy electroencephalogram data set.
As shown in FIGS. 1-2, the univariate time series classification method based on a graph neural network of the present invention, taking a Bonn epilepsy electroencephalogram sequence classification task as an example, comprises the following specific steps:
s1: preprocessing a Bonn epilepsy electroencephalogram data set, performing the same normalization processing on each sample in the time sequence data set to obtain normalized time sequence data, and randomly dividing to obtain training set data;
s2: processing the normalized training set time sequence data by using a plurality of one-dimensional convolutional layers and a nonlinear activation function to obtain a plurality of corresponding characteristic sequences, wherein the sizes of convolutional kernels of different one-dimensional convolutional layers are different;
s3: arranging and combining the characteristic sequences after the one-dimensional convolution processing to form a characteristic matrix capable of representing a network diagram, and mapping each time sequence sample into a corresponding network diagram;
s4: classifying the obtained network diagram by using a network diagram classification model in the field of a graph neural network, wherein all one-dimensional convolution layers can be trained together with the network diagram classification model, then randomly taking 50 samples from training set samples as a verification set, and storing the model with the best classification precision of the verification set;
s5: and (4) removing the training set samples in the step S1 from the normalized and preprocessed Bonn epilepsy electroencephalogram data set, taking the remaining time sequence samples as a test set, and processing the test set sequence samples by using the stored model to obtain the final classification precision.
Further, the step S1 specifically includes:
s1.1: acquiring univariate time series data through a Bonn epilepsy electroencephalogram data set, and performing a classification experiment, wherein the normalization processing is as follows:
let each sample in the raw time series data set be represented as:
T_{1×N} = [T_1, T_2, …, T_N]    (1)
wherein N represents the length of the time series, i.e. the number of time points, T_n represents the value corresponding to the n-th time point, and T_{1×N} represents a time series sample of length N. The min-max normalization method is used to preprocess the time series sample T_{1×N} and obtain the normalized time series data t_{1×N} = [t_1, t_2, …, t_N], with the calculation formula:
t_n = (T_n − T_min) / (T_max − T_min)    (2)
wherein T_max and T_min respectively represent the maximum and minimum values of the time series sample T_{1×N}.
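For illustration only (not part of the claimed method), the min-max normalization of formula (2) can be written in a few lines of Python; the function name and the NumPy array layout are assumptions:

```python
import numpy as np

def min_max_normalize(sample: np.ndarray) -> np.ndarray:
    """Min-max normalize one time series sample T of length N to [0, 1]."""
    t_min, t_max = sample.min(), sample.max()
    # Guard against constant sequences, where T_max == T_min.
    if t_max == t_min:
        return np.zeros_like(sample, dtype=float)
    return (sample - t_min) / (t_max - t_min)

# Example: normalize a toy EEG-like sequence of length 8.
raw = np.array([12.0, 15.0, 9.0, 20.0, 18.0, 11.0, 14.0, 10.0])
print(min_max_normalize(raw))
```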
S1.2: randomly dividing to obtain training set samples:
The Bonn epilepsy electroencephalogram data set used in this embodiment was collected at the epilepsy research laboratory in Bonn, Germany. It contains 5 classes of electroencephalogram signals with class labels Z, O, N, F and S; each class has 100 single-channel signal samples, and each signal sample has length 4097. Sub-data set Z records the electroencephalogram data of 5 healthy volunteers with eyes open in the waking state, sub-data set O records the electroencephalogram data of the same 5 healthy volunteers with eyes closed in the waking state, sub-data set N records the electroencephalogram data of 5 epileptic patients during seizure-free intervals, sub-data set F records the electroencephalogram data of 5 epileptic patients during periods without epileptic seizures, and sub-data set S records the electroencephalogram data of 5 epileptic patients during epileptic seizures. Each class of signal is randomly divided into a training set and a test set at a ratio of 9:1, preliminarily yielding 450 sequence samples as the training set; the random seed used for the random division is set to 2020.
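For illustration only, a per-class 9:1 split with random seed 2020 could be reproduced with a sketch such as the following; the data layout (a dict from class label to a list of normalized samples) and the helper name are assumptions, not the authors' code:

```python
import random

def split_per_class(samples_by_class, train_ratio=0.9, seed=2020):
    """Randomly split each class 9:1 into training and test sets.

    samples_by_class: dict mapping class label (e.g. 'Z', 'O', 'N', 'F', 'S')
    to a list of normalized time series samples.
    """
    rng = random.Random(seed)
    train, test = [], []
    for label, samples in samples_by_class.items():
        indices = list(range(len(samples)))
        rng.shuffle(indices)
        cut = int(len(samples) * train_ratio)  # 90 of 100 samples per class
        train += [(samples[i], label) for i in indices[:cut]]
        test += [(samples[i], label) for i in indices[cut:]]
    return train, test
```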
Further, the step S2 specifically includes:
s2.1: given a plurality of one-dimensional convolution layers with different convolution kernel sizes:
setting k-1 one-dimensional convolution layers with different convolution kernel sizes, which are respectively marked as Conv1DmWherein m is not less than 2 and not more than k, and m represents the corresponding one-dimensional convolutional layer Conv1DmThe convolution kernel size of (2) is m, and the step size of the convolution kernels of all the one-dimensional convolution layers is set to 1.
S2.2: conv1D using the above one-dimensional convolution layermProcessing normalized training set time series data t1×N=[t1,t2,…,tN]Respectively obtaining corresponding time sequence data after convolution
Figure BDA0002771558480000052
Figure BDA0002771558480000053
The treatment process comprises the following steps:
Figure BDA0002771558480000054
processing the convolved time series data phi by using the ReLU activation function1×(N+1-m)Obtaining corresponding characteristic sequences
Figure BDA0002771558480000061
The treatment process comprises the following steps:
Figure BDA0002771558480000062
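For illustration only, the multi-scale convolution of step S2 can be sketched in PyTorch as below; using a single output channel per layer is an assumption made here so that each kernel size yields exactly one feature sequence of length N+1−m:

```python
import torch
import torch.nn as nn

class MultiScaleConv1D(nn.Module):
    """k-1 one-dimensional conv layers with kernel sizes 2..k and stride 1.

    Each layer maps a length-N series to a sequence of length N+1-m,
    followed by ReLU. One output channel per layer is assumed for
    illustration; the patent only requires several kernels per layer.
    """

    def __init__(self, k: int):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_channels=1, out_channels=1, kernel_size=m, stride=1)
             for m in range(2, k + 1)]
        )

    def forward(self, t: torch.Tensor):
        # t: (batch, 1, N) normalized time series
        return [torch.relu(conv(t)) for conv in self.convs]

# Example: N = 10, k = 4 gives feature sequences of lengths 9, 8 and 7.
x = torch.rand(1, 1, 10)
feats = MultiScaleConv1D(k=4)(x)
print([f.shape[-1] for f in feats])   # [9, 8, 7]
```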
further, the step S3 specifically includes:
s3.1: the characteristic sequences are arranged and combined to obtain a characteristic matrix M with the size of NxN, and the arrangement rule is that the characteristic sequences are arranged
Figure BDA0002771558480000063
Characteristic element of
Figure BDA0002771558480000064
The characteristic matrix M is arranged in the ith row, the (i + M-1) th column and the (i + M-1) th row and the ith column in the characteristic matrix M, the rest positions of the matrix are 0, and the obtained characteristic matrix M is as follows:
Figure BDA0002771558480000065
s3.2: and forming a network graph G ═ V, E > according to the obtained feature matrix M, wherein V is a node set of the network graph, N nodes are shared, and E is a continuous edge set.
Further, the step S4 specifically includes:
s4.1: and (3) dividing a verification set from the training set:
and randomly taking 50 samples from the training set samples as a verification set, and simultaneously training the one-dimensional convolutional layer and network diagram classification models by using the remaining 400 brain electrical sequence samples.
S4.2: and classifying 50 Bonn epilepsy electroencephalogram sequence verification samples by using the model, and storing the model with the highest classification precision of the verification set.
Further, the step S5 specifically includes:
s5.1: acquiring a Bonn epilepsy electroencephalogram sequence test set sample:
and removing 450 training set samples in the step S1 from the normalized and preprocessed Bonn epilepsy electroencephalogram data set, and taking the remaining 50 time sequence samples as a test set.
S5.2: and classifying the Bonn epilepsy electroencephalogram sequence test set samples by using the model stored in the step S4.
When the method is used for classification experiments on the public Bonn epilepsy electroencephalogram data set, the classification accuracy is improved compared with the limited penetrable visibility graph networking method proposed in the paper "Time series network model based on limited penetrable visibility graph", and the time consumed by the method is far less than that required by the limited penetrable visibility graph networking method.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A univariate time series classification method based on a graph neural network comprises the following steps:
S1, obtaining each sample datum T_{1×N} of a univariate time series data set and performing the same normalization processing to obtain normalized time series data t_{1×N};
Let each sample in the univariate time series data set be represented as:
T_{1×N} = [T_1, T_2, …, T_N]    (1)
wherein N represents the length of the time series, i.e. the number of time points, T_n represents the value corresponding to the n-th time point, and T_{1×N} represents a time series sample of length N; the time series sample T_{1×N} is preprocessed with the min-max normalization method to obtain the normalized time series data t_{1×N} = [t_1, t_2, …, t_N], with the normalization formula:
t_n = (T_n − T_min) / (T_max − T_min)    (2)
wherein T_max and T_min respectively represent the maximum and minimum values of the time series sample T_{1×N};
S2, simultaneously using a plurality of one-dimensional convolutional layers with different convolution kernel sizes m and the nonlinear activation function ReLU to process the normalized time series data t_{1×N} = [t_1, t_2, …, t_N], obtaining a plurality of corresponding feature sequences f^m_{1×(N+1−m)};
k−1 one-dimensional convolutional layers with different convolution kernel sizes are set, respectively denoted Conv1D_m, wherein 2 ≤ m ≤ k and m indicates that the convolution kernel size of the corresponding layer Conv1D_m is m; the stride of the convolution kernels of all the one-dimensional convolutional layers is set to 1;
the one-dimensional convolutional layers Conv1D_m are used to process the normalized time series data t_{1×N} = [t_1, t_2, …, t_N], and the corresponding convolved time series data φ^m_{1×(N+1−m)} = [φ^m_1, φ^m_2, …, φ^m_{N+1−m}] are respectively obtained; the processing is:
φ^m_{1×(N+1−m)} = Conv1D_m(t_{1×N})
the convolved time series data φ^m_{1×(N+1−m)} are processed with the ReLU activation function to obtain the corresponding feature sequences f^m_{1×(N+1−m)} = [f^m_1, f^m_2, …, f^m_{N+1−m}]; the processing is:
f^m_{1×(N+1−m)} = ReLU(φ^m_{1×(N+1−m)})
S3, arranging and combining the feature sequences f^m_{1×(N+1−m)} processed by the above one-dimensional convolutional layers and the ReLU activation function to form a feature matrix M capable of representing the network graph G, each time series sample being mapped into a corresponding network graph;
the feature sequences f^m_{1×(N+1−m)} are arranged and combined into a feature matrix M of size N×N, the arrangement rule being that the i-th feature element f^m_i of the feature sequence f^m_{1×(N+1−m)} is placed in the i-th row, (i+m−1)-th column and in the (i+m−1)-th row, i-th column of the feature matrix M, i.e.
M[i, i+m−1] = M[i+m−1, i] = f^m_i, with 2 ≤ m ≤ k and 1 ≤ i ≤ N+1−m,
and the remaining positions of the matrix are 0;
a network graph G = <V, E> is formed according to the obtained feature matrix M, wherein V is the node set of the network graph, containing N nodes in total, and E is the edge set;
and S4, taking the network graphs G = <V, E> obtained by mapping all the original time series data as input, and classifying the obtained network graphs with the network graph classification model DiffPool from the graph neural network field, so as to classify the time series data, wherein all the one-dimensional convolutional layers can be trained together with the network graph classification model.
2. The univariate time series classification method based on the graph neural network as claimed in claim 1, wherein: the specific way of acquiring the univariate time-series data in step S1 is:
acquiring univariate time series data through the Bonn epilepsy electroencephalogram data set, wherein the data set was collected at the epilepsy research laboratory in Bonn, Germany and contains 5 classes of electroencephalogram signals with class labels Z, O, N, F and S; each class has 100 single-channel signal samples, and each signal sample has length 4097; sub-data set Z records the electroencephalogram data of 5 healthy volunteers with eyes open in the waking state, sub-data set O records the electroencephalogram data of the same 5 healthy volunteers with eyes closed in the waking state, sub-data set N records the electroencephalogram data of 5 epileptic patients during seizure-free intervals, sub-data set F records the electroencephalogram data of 5 epileptic patients during periods without epileptic seizures, and sub-data set S records the electroencephalogram data of 5 epileptic patients during epileptic seizures; each class of signal is randomly divided into a training set and a test set at a ratio of 9:1, and the random seed used for the random division is set to 2020.
3. The univariate time series classification method based on the graph neural network according to claim 1, characterized in that:
the normalization processing method in step S1 includes:
s1.1, establishing a sample data set, wherein the sample data set comprises the number of time points and numerical values corresponding to the time points;
and S1.2, performing normalization processing based on the maximum sample data value and the minimum sample data value of the sample data set.
4. The univariate time series classification method based on the graph neural network according to claim 1, characterized in that:
the step size of all the one-dimensional convolution layers described in step S2 is 1.
5. The univariate time series classification method based on the graph neural network according to claim 1, characterized in that:
the number of nodes of the network graph in the step S3 is equal to the time series length, and the network graph has N nodes;
the size of the feature matrix M corresponding to the network map is N multiplied by N.
6. The univariate time series classification method based on the graph neural network according to claim 1, characterized in that:
in step S3, the feature matrix is composed of feature elements; the i-th feature element, obtained by processing the time series with a one-dimensional convolutional layer having a convolution kernel length of m, is placed in the i-th row, (i+m−1)-th column and in the (i+m−1)-th row, i-th column of the feature matrix.
7. The univariate time series classification method based on the graph neural network according to claim 1, characterized in that:
in step S4, the network graph classification model is DiffPool.
8. The univariate time series classification method based on the graph neural network according to claim 1, characterized in that:
the one-dimensional convolutional layer and the network graph classification model can be trained together.
CN202011250933.8A 2020-11-11 2020-11-11 Univariate time series classification method based on graph neural network Withdrawn CN112380959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011250933.8A CN112380959A (en) 2020-11-11 2020-11-11 Univariate time series classification method based on graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011250933.8A CN112380959A (en) 2020-11-11 2020-11-11 Univariate time series classification method based on graph neural network

Publications (1)

Publication Number Publication Date
CN112380959A true CN112380959A (en) 2021-02-19

Family

ID=74578571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011250933.8A Withdrawn CN112380959A (en) 2020-11-11 2020-11-11 Univariate time series classification method based on graph neural network

Country Status (1)

Country Link
CN (1) CN112380959A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011495A (en) * 2021-03-18 2021-06-22 郑州大学 GTN-based multivariate time series classification model and construction method thereof
CN116019461A (en) * 2023-03-01 2023-04-28 厦门大学 Epileptic type detection method combining eye movement and electroencephalogram
CN116977708A (en) * 2023-06-14 2023-10-31 北京建筑大学 Bearing intelligent diagnosis method and system based on self-adaptive aggregation visual view
CN116977708B (en) * 2023-06-14 2024-04-12 北京建筑大学 Bearing intelligent diagnosis method and system based on self-adaptive aggregation visual view
CN117076994A (en) * 2023-10-18 2023-11-17 清华大学深圳国际研究生院 Multi-channel physiological time sequence classification method
CN117076994B (en) * 2023-10-18 2024-01-26 清华大学深圳国际研究生院 Multi-channel physiological time sequence classification method

Similar Documents

Publication Publication Date Title
CN112380959A (en) Univariate time series classification method based on graph neural network
Burrello et al. One-shot learning for iEEG seizure detection using end-to-end binary operations: Local binary patterns with hyperdimensional computing
CN111134666B (en) Emotion recognition method of multi-channel electroencephalogram data and electronic device
Wang et al. Best basis-based wavelet packet entropy feature extraction and hierarchical EEG classification for epileptic detection
CN110070105B (en) Electroencephalogram emotion recognition method and system based on meta-learning example rapid screening
CN112699960B (en) Semi-supervised classification method, equipment and storage medium based on deep learning
Taqi et al. Classification and discrimination of focal and non-focal EEG signals based on deep neural network
CN112766355B (en) Electroencephalogram signal emotion recognition method under label noise
CN112200016A (en) Electroencephalogram signal emotion recognition based on ensemble learning method AdaBoost
Zangeneh Soroush et al. A novel EEG-based approach to classify emotions through phase space dynamics
CN113554110B (en) Brain electricity emotion recognition method based on binary capsule network
CN109325410B (en) Electroencephalogram EEG (electroencephalogram) feature extraction method based on convolutional neural network
CN115804602A (en) Electroencephalogram emotion signal detection method, equipment and medium based on attention mechanism and with multi-channel feature fusion
CN112932501A (en) Method for automatically identifying insomnia based on one-dimensional convolutional neural network
CN115414051A (en) Emotion classification and recognition method of electroencephalogram signal self-adaptive window
Tripathi et al. Multivariate time series classification with an attention-based multivariate convolutional neural network
Maharaj et al. Discrimination of locally stationary time series using wavelets
CN113011330B (en) Electroencephalogram signal classification method based on multi-scale neural network and cavity convolution
CN112259228B (en) Depression screening method by dynamic attention network non-negative matrix factorization
KR102298709B1 (en) Device and method for learning connectivity
Jia et al. Deep learning with convolutional neural networks for sleep arousal detection
CN115316955A (en) Light-weight and quick decoding method for motor imagery electroencephalogram signals
CN114997230A (en) Signal-oriented characteristic positioning display and quantitative analysis method and device
Wei et al. A study on the universal method of EEG and ECG prediction
Melek et al. Roza: a new and comprehensive metric for evaluating classification systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210219