CN115969329A - Sleep staging method, system, device and medium

Sleep staging method, system, device and medium

Info

Publication number
CN115969329A
Authority
CN
China
Prior art keywords
result
convolution
feature
neural network
attention
Prior art date
Legal status
Pending
Application number
CN202310094153.6A
Other languages
Chinese (zh)
Inventor
宫玉琳
李天星
陈晓娟
胡命嘉
景治新
张福君
王慧杰
Current Assignee
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changchun University of Science and Technology
Priority to CN202310094153.6A
Publication of CN115969329A

Classifications

    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The application discloses a sleep staging method, system, device and medium, applied to the field of sleep staging. The method comprises the following steps: acquiring physiological signals related to sleep; acquiring a first output feature obtained by feature extraction of the physiological signals through a graph convolution neural network; acquiring a second output feature obtained by feature extraction of the physiological signals through an attention mechanism convolution neural network; performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result; and acquiring a sleep staging result according to the feature fusion result. By adding the graph convolution neural network for feature extraction, the method solves the problem that existing approaches can extract features only in Euclidean space; at the same time, the attention mechanism convolution neural network is added, global features are extracted by using the attention mechanism, and the influence of local optima caused by individual differences is minimized. Sleep staging is thus not only realized, but realized with higher accuracy.

Description

Sleep staging method, system, device and medium
Technical Field
The present application relates to the field of sleep staging, and in particular, to a sleep staging method, system, apparatus, and medium.
Background
With the improvement of living standards, more and more people pay attention to sleep quality, and the analysis of sleep quality has become increasingly digital. Because electroencephalogram signals are essentially sequence signals, researchers currently use Recurrent Neural Networks (RNN) to stage sleep and to extract time domain and frequency domain features, but RNNs have the defects of complex computation and lack of parallelism. Therefore, some researchers use a Convolutional Neural Network (CNN) as the structure of a self-encoder to perform sleep staging, greatly reducing model parameters and improving performance.
At present, RNNs and CNNs extract electroencephalogram features in the time domain and the frequency domain in Euclidean space, but because the brain regions lie in a non-Euclidean space, sleep staging performed in this way is not accurate. In addition, sleep staging with RNNs and CNNs is strongly affected by individual differences, and both networks depend on a feature space constructed from features extracted by sleep experts. Because the sleep features required by different sleep classification systems differ, and even the same classification system performs very differently on different data sets and different channels, the generalization ability is poor and the accuracy of the sleep staging algorithm is low.
How to overcome the poor generalization ability and low accuracy of current RNN- and CNN-based sleep staging, and the limitation that time domain and frequency domain electroencephalogram features can only be extracted in Euclidean space, is a problem to be urgently solved by those skilled in the art.
Disclosure of Invention
The purpose of the application is to provide a sleep staging method, system, device and medium that solve the problem that feature extraction can currently be performed only in Euclidean space by adding a graph convolution neural network for feature extraction, while an attention mechanism convolution neural network is added to extract global features with the attention mechanism, minimizing the influence of local optima caused by individual differences.
In order to solve the above technical problem, the present application provides a sleep staging method, including:
acquiring physiological signals related to sleep;
calling the graph convolutional neural network and the attention mechanism convolutional neural network to obtain a first output characteristic corresponding to the graph convolutional neural network and a second output characteristic corresponding to the attention mechanism convolutional neural network;
performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result;
and determining a sleep staging result according to the feature fusion result.
Preferably, after acquiring the physiological signal related to sleep, the method further comprises:
and carrying out sample enhancement processing on the physiological signal.
Preferably, the sample enhancement processing of the physiological signal comprises:
synthesizing new samples into a sample set by using $d_{ni} = d_i + \mathrm{rand}(0,1) \times (d_{mi} - d_i)$;
wherein $d_{ni}$ is the ith of the n synthesized samples in the set, $d_i$ is each original sample, and $d_{mi}$ is the ith of the m nearest neighbor samples.
Preferably, calling the graph convolution neural network to obtain the first output characteristic corresponding to the graph convolution neural network comprises:
calling an adjacency matrix and a characteristic matrix;
extracting a first graph convolution result through the first graph convolution layer by using the adjacency matrix and the characteristic matrix;
pooling the first graph convolution result through a graph pooling layer to obtain a pooling result;
calling a second graph convolution layer to extract the pooling result to obtain a second graph convolution result;
acquiring a readout result obtained by reading out the second graph convolution result by the readout layer;
perceiving the readout result through a multilayer perceptron to obtain a perception result;
and outputting the first output feature according to the perception result.
Preferably, invoking the adjacency matrix comprises:
constructing the adjacency matrix by using
$$\tilde{A} = A + \lambda I$$
wherein $\tilde{A}$ is the constructed adjacency matrix, $A$ is the original adjacency matrix, $I$ is a unit matrix of the same dimension, and λ is a constant coefficient;
carrying out symmetric normalization processing on the constructed adjacency matrix by using
$$\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$$
wherein $\hat{A}$ is the symmetrically normalized adjacency matrix and $\tilde{D}$ is the degree matrix of the constructed adjacency matrix $\tilde{A}$.
Preferably, invoking the attention-based convolutional neural network to obtain a second output characteristic corresponding to the attention-based convolutional neural network comprises:
acquiring a first sample set;
calling the first convolution layer to extract the first sample set to obtain a first convolution result;
pooling the first convolution result by using a first pooling layer to obtain a first pooling result;
acquiring a processing result obtained by processing the first pooling result by the discarding layer;
calling the second convolution layer to extract the processing result to obtain a second convolution result;
convolving the second convolution result through a convolution attention module to obtain an attention convolution result;
pooling the attention convolution result by using a second pooling layer to obtain a second pooling result;
and outputting a second output characteristic according to the second pooling result.
Preferably, the convolution attention module comprises a channel attention function and a space attention function, and convolving the second convolution result through the convolution attention module to obtain the attention convolution result comprises the following steps:
performing one-dimensional convolution on the second convolution result by using the channel attention function to obtain a channel convolution result;
performing two-dimensional convolution on the channel convolution result through the space attention function to obtain a space convolution result;
and outputting the attention convolution result according to the space convolution result.
Preferably, performing feature fusion on the first output feature and the second output feature to obtain the feature fusion result comprises:
performing feature fusion on the first output feature and the second output feature by using
$$y_i = \mathrm{Softmax}(x_i) = \frac{e^{x_i}}{\sum_{c=1}^{C} e^{x_c}}$$
wherein $y_i$ is the final fusion feature, C is the number of classes of all training samples, and Softmax is the classification function.
In order to solve the above technical problem, the present application further provides a sleep staging system, including:
an acquisition module for acquiring physiological signals related to sleep;
the calling module is used for calling the graph convolution neural network and the attention mechanism convolution neural network to obtain a first output characteristic corresponding to the graph convolution neural network and a second output characteristic corresponding to the attention mechanism convolution neural network;
the fusion module is used for carrying out feature fusion on the first output feature and the second output feature to obtain a feature fusion result;
and the determining module is used for determining the sleep staging result according to the feature fusion result.
In order to solve the above technical problem, the present application further provides a sleep staging apparatus, including a memory for storing a computer program;
a processor for implementing the steps of the sleep staging method as described above when executing the computer program.
To solve the above technical problem, the present application further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the sleep staging method as described above.
The sleep staging method provided by the application comprises the following steps: acquiring physiological signals related to sleep; acquiring a first output feature obtained by feature extraction of the physiological signals through a graph convolution neural network; acquiring a second output feature obtained by feature extraction of the physiological signals through an attention mechanism convolution neural network; performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result; and acquiring a sleep staging result according to the feature fusion result. By adding the graph convolution neural network for feature extraction, the method solves the problem that features can currently be extracted only in Euclidean space; at the same time, the attention mechanism convolution neural network is added, global features are extracted with the attention mechanism, and the influence of local optima caused by individual differences is minimized. Sleep staging is thus not only realized, but realized with higher accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application, the drawings needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of a sleep staging method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating the operation of a graph convolution neural network provided in an embodiment of the present application;
FIG. 3 is a block diagram of a graph convolution neural network provided in an embodiment of the present application;
FIG. 4 is a flowchart illustrating operation of an attention-driven convolutional neural network provided by an embodiment of the present application;
FIG. 5 is a block diagram of an attention-driven convolutional neural network provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of the operation of a convolution attention module according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the operation of the channel attention function provided in the embodiments of the present application;
FIG. 8 is a schematic diagram illustrating the spatial attention function provided by an embodiment of the present application;
fig. 9 is a block diagram of a sleep staging system provided in an embodiment of the present application;
fig. 10 is a structural diagram of a sleep staging device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the present application.
The core of the application is to provide a sleep staging method that solves the problem that feature extraction can currently be performed only in Euclidean space by adding a graph convolution neural network for feature extraction, while an attention mechanism convolution neural network is added to extract global features with the attention mechanism, minimizing the influence of local optima caused by individual differences.
The operations of obtaining, calling, utilizing, determining and the like in the sleep staging method provided by the application can be realized by a controller in a host computer. For example, the controller can be a Micro Controller Unit (MCU); of course, controllers other than the MCU can also be used, which is not limited in the application.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings.
Fig. 1 is a flowchart of a sleep staging method according to an embodiment of the present application, and as shown in fig. 1, the method includes:
s10: physiological signals related to sleep are acquired.
Specifically, in this embodiment, the MCU collects and monitors electroencephalogram (EEG) and electrooculogram (EOG) signals recorded during sleep. Both the EEG signal and the EOG signal are sampled at 100 Hz. The whole-night sleep data is then divided into data segments of 30 s per frame, and each segment is classified, according to the sleep standard of the American Academy of Sleep Medicine (AASM), into: awake (W), rapid eye movement (REM) and non-rapid eye movement (N). Within the non-rapid eye movement period, as sleep changes from shallow to deep, three further sleep stages are distinguished: sleep stage I (N1), sleep stage II (N2) and sleep stage III (N3). It should be noted that the sampling frequency and the segmentation length can be set according to the user's requirements, and this embodiment is not particularly limited.
As can be seen, the MCU collects the sleep-related EEG and EOG signals and divides the collected physiological signals according to the AASM standard, so that the sleep process is accurately partitioned and the physiological signals of different periods can be analyzed specifically.
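As an illustration of the acquisition and segmentation step, the following is a minimal sketch assuming the whole-night recording is a plain NumPy array sampled at 100 Hz; the integer coding of the AASM stages is an assumption made here for illustration, not taken from the patent:

```python
import numpy as np

FS = 100          # sampling rate in Hz, as stated above
EPOCH_SEC = 30    # AASM scoring epoch length in seconds

# AASM stages referenced above; this integer coding is an assumption.
STAGES = {"W": 0, "N1": 1, "N2": 2, "N3": 3, "REM": 4}

def slice_epochs(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D whole-night recording into 30 s epochs of 3000 samples."""
    samples_per_epoch = FS * EPOCH_SEC
    n_epochs = len(signal) // samples_per_epoch   # drop any trailing partial epoch
    return signal[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)

# Example: 8 hours of one EEG channel -> (960, 3000)
eeg = np.random.randn(8 * 3600 * FS)
print(slice_epochs(eeg).shape)
```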
S11: calling the graph convolutional neural network and the attention mechanism convolutional neural network to obtain a first output characteristic corresponding to the graph convolutional neural network and a second output characteristic corresponding to the attention mechanism convolutional neural network.
Specifically, the currently adopted RNNs and CNNs extract electroencephalogram features in the time domain and the frequency domain in Euclidean space, but because the brain regions lie in a non-Euclidean space, sleep staging performed in this way is not accurate; in addition, sleep staging with RNNs and CNNs is strongly affected by individual differences, and both networks depend on a feature space constructed from expert-extracted features. Because the sleep features required by different sleep classification systems differ, and even the same classification system performs very differently on different data sets and channels, the generalization ability is poor and the accuracy of the sleep staging algorithm is low. In this embodiment, the acquired physiological signal is therefore converted into the first output feature by the graph convolution neural network and into the second output feature by the attention mechanism convolution neural network. The graph convolution neural network structure is constructed with an undirected graph described by the formula G = (V, E), where G represents the undirected graph, V represents the node set, each node in the network representing one electrode channel, and E represents the set of edges used to represent the connection relationships between different nodes. Here |V| = N, N being the number of nodes of the graph convolution neural network; each node has a corresponding feature vector, the features of the N nodes form an N × D dimensional feature matrix X, and D denotes the number of features. The adjacency matrix of the graph convolution neural network is represented by A, an N × N matrix formed according to the interrelation between the nodes. X and A are the initial inputs of the model; for each sample i, the feature matrix $X_i$ determines the sleep stage $y_i$, where $y_i$ represents the sleep stage class label of $X_i$. Feature extraction is performed on the acquired physiological signal through this graph convolution neural network structure, and the extracted result is taken as the first output feature. For the attention mechanism convolution neural network, a convolutional neural network model based on the Convolutional Block Attention Module (CBAM) is constructed; CBAM is a lightweight attention algorithm for feedforward CNNs. The module includes two subfunctions, a channel attention function and a space attention function, which generate attention maps in turn along the two different dimensions of channel and space; each attention map is multiplied bitwise with the input feature map for adaptive feature refinement, and the refined result is taken as the second output feature.
Therefore, because the different regions of the brain do not lie in Euclidean space, the graph convolution neural network is very suitable for describing the data structure of electroencephalogram features and is used to extract deep information from the physiological signal data. The attention mechanism convolution neural network mitigates the influence of individual differences by adding a channel attention function and a space attention function, and outputs the extracted features after refinement.
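The graph construction just described (nodes as electrode channels, an N × D feature matrix X and an N × N adjacency matrix A as the initial model inputs) can be sketched as follows; the correlation-threshold rule for deriving the edge set E is an illustrative assumption, since the patent text does not fix how the edges are obtained:

```python
import numpy as np

def build_graph(channel_feats: np.ndarray, thresh: float = 0.3):
    """channel_feats: (N, D) array, one feature vector per electrode channel.

    Returns the feature matrix X and a binary adjacency matrix A whose edges
    connect channels with absolute feature correlation above `thresh` (an
    illustrative choice of edge set E, not specified by the patent)."""
    X = channel_feats
    corr = np.corrcoef(X)                      # (N, N) channel-to-channel correlation
    A = (np.abs(corr) > thresh).astype(float)
    np.fill_diagonal(A, 0.0)                   # diagonal of A is 0; self-loops come later
    return X, A

X, A = build_graph(np.random.randn(4, 128))    # e.g. N = 4 channels, D = 128 features
print(A)
```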
S12: and performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result.
Specifically, in this embodiment, the first output feature and the second output feature are concatenated, and the final fused sleep feature is then used as the input of a Softmax classifier for sleep stage classification.
S13: and determining a sleep staging result according to the feature fusion result.
Specifically, in this embodiment, the obtained feature fusion result is automatically interpreted into sleep stages by the MCU, and indicators such as accuracy, precision and recall are calculated until the model converges and the accuracy on the test set is stable.
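The evaluation step can be sketched as follows; the use of scikit-learn and macro averaging over the five stages is an assumption made for illustration:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 2, 4, 2, 1, 3]   # ground-truth stage labels (W, N1, N2, N3, REM coded 0..4)
y_pred = [0, 2, 4, 1, 1, 3]   # stages interpreted from the fused features

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
```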
As can be seen, the sleep staging method provided by this embodiment comprises: acquiring physiological signals related to sleep; acquiring a first output feature obtained by feature extraction of the physiological signals through a graph convolution neural network; acquiring a second output feature obtained by feature extraction of the physiological signals through an attention mechanism convolution neural network; performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result; and acquiring a sleep staging result according to the feature fusion result. By adding the graph convolution neural network for feature extraction, the method solves the problem that features can currently be extracted only in Euclidean space; at the same time, the attention mechanism convolution neural network is added, global features are extracted with the attention mechanism, and the influence of local optima caused by individual differences is minimized. Sleep staging is thus not only realized, but realized with higher accuracy.
On the basis of the above embodiment, as a preferred embodiment, after acquiring the physiological signal related to sleep, the method further includes:
and carrying out sample enhancement processing on the physiological signal.
Specifically, in this embodiment, because the proportions of data duration differ greatly between sleep stages, and the durations of contemporaneous data also differ between individuals, processing the physiological signals directly easily leads to sample imbalance, insufficient learning of the signal features, and consequently poor classification in the sleep stages with fewer samples. Data enhancement therefore needs to be performed on the physiological signal data; the borderline synthetic minority oversampling technique (Borderline-SMOTE) is adopted for data enhancement to overcome the imbalance of the sample data. The boundary oversampling algorithm synthesizes new minority-class samples so that the minority class forms a larger sample set, making subsequent learning of the signal features more sufficient and effective.
Therefore, the enhancement processing of the physiological signals expands the minority-class sample sets, ensures the balance of the samples, and makes the learning of the signal features more sufficient and effective.
On the basis of the above embodiment, as a preferred embodiment, the sample enhancement processing of the physiological signal comprises:
synthesizing new samples into a sample set by using $d_{ni} = d_i + \mathrm{rand}(0,1) \times (d_{mi} - d_i)$;
wherein $d_{ni}$ is the ith of the n synthesized samples in the set, $d_i$ is each original sample, and $d_{mi}$ is the ith of the m nearest neighbor samples.
Specifically, in this embodiment, for each minority-class sample $M_i$, the k nearest neighbor samples of $M_i$ in the entire data set are computed with the K-nearest-neighbor algorithm. The distance calculation formula is:
$$d(X, Y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$
where d(X, Y) is the Euclidean distance between two sample points, and $x_i$ and $y_i$ are the coordinates of the two sample points in n-dimensional space. The samples are divided into safety samples, danger samples and noise samples: more than 1/2 of the k nearest neighbors around a safety sample are minority-class samples; more than 1/2 of the k nearest neighbors around a danger sample are majority-class samples; and the k nearest neighbors around a noise sample are all majority-class samples. Safety samples lie within the minority-class decision region and can be left out of the synthesis; danger samples lie near the decision boundary and are retained; noise samples most likely lie within the majority-class decision region and need to be removed. For each danger sample $d_i$, m nearest neighbors are randomly selected from its k nearest neighbors, n new samples are randomly synthesized between the m nearest neighbor samples and the original danger sample, and the newly synthesized samples are merged into the original samples to form a new sample set. The synthesis uses $d_{ni} = d_i + \mathrm{rand}(0,1) \times (d_{mi} - d_i)$, where $d_{ni}$ is the ith of the n synthesized samples, $d_i$ is each original sample, and $d_{mi}$ is the ith of the m nearest neighbors. Through this processing, the minority-class samples are enhanced, providing better samples for the graph convolution neural network and the attention mechanism convolution neural network.
Therefore, enhancing the samples solves the problem of insufficient signal feature learning caused by sample imbalance, such as the large differences in data duration between sleep stages, and provides better samples for the graph convolution neural network and the attention mechanism convolution neural network.
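A minimal sketch of the danger-sample synthesis step, following $d_{ni} = d_i + \mathrm{rand}(0,1) \times (d_{mi} - d_i)$; the identification of safety, danger and noise samples is simplified away here, and all names and parameter values are illustrative assumptions:

```python
import numpy as np

def knn_indices(data: np.ndarray, point: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest neighbors of `point` by Euclidean distance."""
    dists = np.sqrt(((data - point) ** 2).sum(axis=1))
    return np.argsort(dists)[1 : k + 1]           # skip the point itself

def synthesize(danger: np.ndarray, minority: np.ndarray, k: int = 5, m: int = 3,
               n_new: int = 2, rng=np.random.default_rng(0)) -> np.ndarray:
    """For each danger sample d_i, pick m of its k minority-class nearest
    neighbors and interpolate n_new new samples d_ni = d_i + rand(0,1)*(d_mi - d_i)."""
    new_samples = []
    for d_i in danger:
        nn = knn_indices(minority, d_i, k)
        chosen = rng.choice(nn, size=min(m, len(nn)), replace=False)
        for _ in range(n_new):
            d_mi = minority[rng.choice(chosen)]
            new_samples.append(d_i + rng.random() * (d_mi - d_i))
    return np.vstack(new_samples)

minority = np.random.randn(20, 8)         # minority-class samples, 8 features each
danger = minority[:4]                     # pretend the first 4 are danger samples
augmented = np.vstack([minority, synthesize(danger, minority)])
print(augmented.shape)                    # (28, 8): 8 synthetic samples merged in
```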
On the basis of the foregoing embodiment, as a preferred embodiment, fig. 2 is a flowchart illustrating the operation of a graph convolution neural network provided in an embodiment of the present application. As shown in fig. 2, the graph convolution neural network is composed of 2 graph convolution layers, 1 graph pooling layer, 1 readout layer and 1 multilayer perceptron, and obtaining the first output feature corresponding to the graph convolution neural network by calling the graph convolution neural network includes:
s14: the adjacency matrix and the feature matrix are invoked.
Specifically, in the present embodiment, the feature matrix and the adjacency matrix are input as input layers, and the input layer of the graph convolutional neural network is constructed by the feature matrix and the adjacency matrix.
S15: a first graph convolution result is extracted through the first graph convolution layer using the adjacency matrix and the feature matrix.
Specifically, the graph convolution layer in this embodiment sequentially performs 3 operations: convolution, batch normalization and function activation. The ReLU function and the Softmax function are respectively adopted as activation functions, and the overall forward propagation formula is:
$$Z = f(X, A) = \mathrm{Softmax}\left(\hat{A}\,\mathrm{ReLU}\left(\hat{A} X W^{(0)}\right) W^{(1)}\right)$$
where Z is the forward propagation output, X is the feature matrix, A is the adjacency matrix, $\hat{A}$ is the constructed (symmetrically normalized) adjacency matrix, $W^{(0)}$ and $W^{(1)}$ are the weight matrices of the first and second layers, and ReLU is the ReLU function. The loss function is expressed with cross entropy; it computes the difference between the forward calculation result of each iteration of the neural network and the true value, so as to guide the next round of training in the correct direction. The loss function is calculated over all labeled nodes, with the expression:
$$L = -\sum_{l \in y_L} \sum_{f=1}^{F} Y_{lf} \ln Z_{lf}$$
where L represents the loss function, $y_L$ is the index set of labeled nodes, l denotes the lth node, F denotes the number of output features, $Y_{lf}$ represents the true state value of the labeled node, and $Z_{lf}$ represents the predicted state value of the labeled node.
S16: and pooling the first graph convolution result through the graph pooling layer to obtain a pooling result.
Specifically, in this embodiment, the first graph convolution result is pooled by the graph pooling layer, which serves as down-sampling to generate a deeper sub-structure; the result of the pooling operation is smaller than its input. The pooling layer is introduced to perform dimension reduction and abstraction on the input, imitating the human visual system, which also prevents overfitting to a certain degree.
S17: and calling the second graph convolution layer to extract the pooling result to obtain a second graph convolution result.
Specifically, in this embodiment, the second graph convolution layer is adopted to convolve the pooling result to obtain a second graph convolution result.
S18: and acquiring a read-out result obtained by reading out the second image convolution result by the read-out layer.
Specifically, the readout layer in this embodiment reads out the second graph convolution result to obtain a readout result, and the readout layer characterizes the final graph representation by reading the sum of the hidden representations of the sub-structures.
S19: and sensing the read result by a Multilayer Perceptron (multilayered Perceptron) to obtain a sensing result.
Specifically, in this embodiment, the multilayer perceptron perceives the read result to obtain a perception result, and a series of processed image data is converted into a feature vector through the multilayer perceptron, and the feature vector is used as a final output feature of the image convolution neural network feature extraction module.
S20: and outputting the first output characteristic according to the sensing result.
Specifically, in the present embodiment, the first output feature is output according to the sensing result as an output result through the graph convolution neural network.
As shown in fig. 3, the graph convolution neural network comprises an input layer formed by the feature matrix X and the adjacency matrix A, a first graph convolution layer, a graph pooling layer, a second graph convolution layer, a readout layer and a multilayer perceptron; the output of each stage enters the next stage for processing, and the perception result obtained by the multilayer perceptron is output as the first output feature, the output result of the graph convolution neural network.
It can be seen that the graph convolution neural network, composed of 2 graph convolution layers, 1 graph pooling layer, 1 readout layer and 1 multilayer perceptron, takes the feature matrix and adjacency matrix as the input layer, extracts a high-level representation of the features through the graph convolution layers, and performs a down-sampling operation through the graph pooling layer to generate a deeper sub-structure. The readout layer reads the sum of the hidden representations of the sub-structures to characterize the final graph representation. Finally, the processed graph data is converted into a feature vector through the multilayer perceptron, and this feature vector is the final output feature of the graph convolution neural network feature extraction module.
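This pipeline can be sketched as a PyTorch module; the pair-averaging graph pooling and the sum readout used here are simplifying stand-ins for operators the patent leaves unspecified, and all layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), as in the formula above."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out, bias=False)
    def forward(self, A_hat, H):
        return torch.relu(A_hat @ self.lin(H))

class GCNBranch(nn.Module):
    """2 graph conv layers + graph pooling + readout + MLP, mirroring Fig. 3."""
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.gc1 = GraphConv(d_in, d_hidden)
        self.gc2 = GraphConv(d_hidden, d_hidden)
        self.mlp = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_out))
    def forward(self, A_hat, X):
        H = self.gc1(A_hat, X)
        # graph pooling (down-sampling): average adjacent node pairs (a stand-in)
        H = H.reshape(H.shape[0] // 2, 2, -1).mean(dim=1)
        A_small = torch.eye(H.shape[0])          # stand-in coarsened adjacency
        H = self.gc2(A_small, H)
        readout = H.sum(dim=0)                   # readout: sum of hidden representations
        return self.mlp(readout)                 # the first output feature

branch = GCNBranch(d_in=16, d_hidden=32, d_out=64)
f1 = branch(torch.eye(4), torch.randn(4, 16))    # N = 4 nodes, D = 16 features
print(f1.shape)                                  # torch.Size([64])
```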
On the basis of the above embodiment, as a preferred embodiment, invoking the adjacency matrix includes:
constructing the adjacency matrix by using
$$\tilde{A} = A + \lambda I$$
wherein $\tilde{A}$ is the constructed adjacency matrix, A is the original adjacency matrix, I is a unit matrix of the same dimension, and λ is a constant coefficient;
carrying out symmetric normalization processing on the constructed adjacency matrix by using
$$\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$$
wherein $\hat{A}$ is the symmetrically normalized adjacency matrix and $\tilde{D}$ is the degree matrix of the constructed adjacency matrix $\tilde{A}$.
Specifically, in this embodiment, since the diagonal elements of the adjacency matrix are all 0, multiplying the adjacency matrix with the feature matrix computes only the weighted sum of the features of each node's neighboring nodes, while the features of the node itself cannot participate in the calculation. To overcome this defect, the reparameterization technique is adopted: a unit matrix of the same dimension is added, constructing the adjacency matrix
$$\tilde{A} = A + \lambda I$$
Taking λ as 1, a node's own features are by default as important as those of its neighbors. However, multiplying the non-normalized matrix $\tilde{A}$ with the feature matrix changes the original distribution of the features, so the constructed adjacency matrix $\tilde{A}$ needs to be normalized. Weighted averaging is used as the normalization strategy, giving more weight to nodes with lower degrees: weighted averaging lets low-degree nodes have a larger effect on their neighbors, while higher-degree nodes have a smaller effect because their influence is spread out over many neighbors. First, the degree matrix $\tilde{D}$ of the constructed adjacency matrix $\tilde{A}$ is computed. The degree matrix is a diagonal matrix whose diagonal elements are the degrees of the vertices and whose other elements are zero; its formula is:
$$\tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}$$
where $\tilde{D}$ represents the constructed degree matrix, i represents the ith row, j represents the jth column, and $\tilde{A}_{ij}$ denotes the elements of the adjacency matrix $\tilde{A}$. By using the Chebyshev approximation, information of the first-order neighbors is extracted for each node, yielding the symmetric and normalized matrix
$$\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$$
With H representing the features of each layer in the network structure, the expression for propagation between the feature layers of the graph convolution neural network is obtained as:
$$H^{(l+1)} = \sigma\left(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\right)$$
where $H^{(l+1)}$ is the feature of layer l+1, σ is a nonlinear activation function, $H^{(l)}$ is the feature of the lth layer, and $W^{(l)}$ is the weight coefficient matrix of the lth layer. For the input layer, H is the feature matrix X.
It can be seen that this embodiment, by constructing the adjacency matrix $\tilde{A}$, overcomes the defect that a node's own features cannot participate in the operation. Since multiplying the non-normalized matrix $\tilde{A}$ with the feature matrix would change the original distribution of the features, the constructed adjacency matrix $\tilde{A}$ is normalized and symmetrically normalized to $\hat{A}$, and finally a well-conditioned adjacency matrix and feature matrix are output as the input items of the input layer.
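A NumPy sketch of the construction and symmetric normalization derived above, taking λ = 1 as stated; the small example graph is illustrative:

```python
import numpy as np

def normalize_adjacency(A: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Return A_hat = D^{-1/2} (A + lam*I) D^{-1/2}, as derived above."""
    A_tilde = A + lam * np.eye(A.shape[0])    # reparameterization: add self-loops
    deg = A_tilde.sum(axis=1)                 # degree of each vertex
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}; degrees > 0 by construction
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
print(normalize_adjacency(A).round(3))
```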
On the basis of the foregoing embodiment, as a preferred embodiment, fig. 4 is a flowchart illustrating a working process of an attention-based convolutional neural network provided in an embodiment of the present application, and as shown in fig. 4, invoking the attention-based convolutional neural network to obtain a second output characteristic corresponding to the attention-based convolutional neural network includes:
s21: a first set of samples is obtained.
Specifically, in the present embodiment, a first sample set subjected to sample enhancement processing is acquired.
S22: and calling the first convolution layer to extract the first sample set to obtain a first convolution result.
Specifically, in this embodiment, the first convolution layer extracts the first sample set to obtain the first convolution result. Each convolution layer sequentially performs 3 operations: convolution, batch normalization and function activation. The nonlinear ReLU function is used for function activation to increase the nonlinear fitting capability of the network. The expression of the ReLU activation function is ReLU(x) = max(0, x), where x is the input value.
S23: and pooling the first convolution result by using the first pooling layer to obtain a first pooling result.
Specifically, in this embodiment, the first pooling layer is used to pool the first convolution result to obtain the first pooling result. The pooling layer plays the role of down-sampling to generate a deeper sub-structure; the result of the pooling operation is smaller than its input. The pooling layer is introduced to perform dimension reduction and abstraction on the input, imitating the human visual system, which also prevents overfitting to a certain extent.
S24: and acquiring a processing result obtained by processing the first pooling result by the discarding layer.
Specifically, in order to reduce overfitting of the network and the time consumption of the network model, a discarding layer is arranged after the first max pooling layer. A Dropout function is set that zeroes a portion of the information input to the discarding layer; the parameter p represents the zeroing probability and is taken as p = 0.5.
S25: and calling the second convolution layer to extract the processing result to obtain a second convolution result.
Specifically, in this embodiment, the second convolution layer comprises a plurality of sub-convolution layers with the same function, which perform feature extraction on the result of the previous stage to obtain a convolution result. Provided the network structure does not overfit, within a certain range more sub-convolution layers process the data better; as shown in fig. 5, the second convolution layer here comprises two sub-convolution layers.
S26: and convolving the second convolution result by the convolution attention module to obtain an attention convolution result.
Specifically, the convolution attention module in this embodiment comprises a channel attention function and a space attention function, and these functions convolve the second convolution result to obtain the attention convolution result.
S27: and pooling the attention convolution result by using a second pooling layer to obtain a second pooling result.
Specifically, in this embodiment, the second pooling layer is used to pool the attention convolution result to obtain the second pooling result. The pooling layer plays the role of down-sampling, generating a deeper sub-structure; the result of the pooling operation is smaller than its input. The pooling layer is introduced to perform dimension reduction and abstraction on the input, imitating the human visual system, which also prevents overfitting to a certain degree.
S28: and outputting a second output characteristic according to the second pooling result.
Specifically, the second pooled result is used as the output second output characteristic in this embodiment.
As shown in fig. 5, the attention mechanism convolutional neural network comprises a first convolution layer, a first pooling layer, a discarding layer, a first sub-convolution layer, a second sub-convolution layer, a convolution attention module and a second pooling layer. The first sample set subjected to sample enhancement processing is used as input data, the output of each stage enters the next stage for processing, and the second pooling result output by the second pooling layer is output as the second output feature.
Therefore, by extracting global features with the attention mechanism, the influence of local optima caused by individual differences is minimized. Sleep staging is thus not only realized, but realized with higher accuracy.
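The branch of fig. 5 can be sketched as a PyTorch module; the channel counts and kernel sizes are illustrative assumptions, the input is assumed to be a 2-D (e.g. time-frequency) representation of an epoch, and the convolution attention module passed in as `cbam` is sketched after the next subsection:

```python
import torch
import torch.nn as nn

class AttentionCNN(nn.Module):
    """Skeleton of the attention mechanism CNN of Fig. 5: first convolution
    layer, first pooling layer, discarding (dropout) layer, two sub-convolution
    layers, convolution attention module, second pooling layer."""
    def __init__(self, cbam: nn.Module, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),  # first convolution layer
            nn.BatchNorm2d(16), nn.ReLU(),                   # batch norm + activation
            nn.MaxPool2d(2),                                 # first (max) pooling layer
            nn.Dropout(p=0.5),                               # discarding layer, p = 0.5
            nn.Conv2d(16, 32, kernel_size=3, padding=1),     # first sub-convolution layer
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),     # second sub-convolution layer
            nn.ReLU(),
            cbam,                                            # convolution attention module
            nn.AdaptiveAvgPool2d(1),                         # second pooling layer
            nn.Flatten(),                                    # -> second output feature
        )
    def forward(self, x):
        return self.net(x)

# Smoke test with the attention module stubbed out:
feats = AttentionCNN(cbam=nn.Identity())(torch.randn(8, 1, 64, 64))
print(feats.shape)    # torch.Size([8, 32])
```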
On the basis of the above embodiment, as a preferred embodiment, fig. 6 is a schematic diagram of the operation of the convolution attention module provided in an embodiment of the present application. As shown in fig. 6, the convolution attention module comprises a channel attention function and a space attention function, and convolving the second convolution result through the convolution attention module to obtain the attention convolution result comprises the following steps:
performing one-dimensional convolution on the second convolution result by using a channel attention function to obtain a channel convolution result;
performing two-dimensional convolution on the channel convolution result through a space attention function to obtain a space convolution result;
and outputting an attention convolution result according to the space convolution result.
Specifically, in this embodiment, the input feature F first undergoes one-dimensional convolution through the channel attention function to obtain the channel attention map $M_C$; the channel attention map $M_C$ is multiplied bitwise with the original input feature F to obtain the channel attention output feature F′. The output feature F′ of the channel attention function is then taken as the input of the space attention function and undergoes the two-dimensional convolution operation of the space attention function, generating the spatial attention map $M_S$, which is multiplied bitwise with the space attention input F′ to obtain the output feature F″ of the space attention function, i.e. the final output feature. The channel attention output feature F′ and the spatial attention output feature F″ are expressed as:
$$F' = M_C(F) \otimes F, \qquad F'' = M_S(F') \otimes F'$$
where F′ is the output of the channel attention function, $M_C$ is the channel attention map, F is the input feature, F″ is the output of the space attention function, $M_S$ is the spatial attention map, and $\otimes$ denotes bitwise multiplication.
Fig. 7 is a schematic diagram of the operation of the channel attention function provided in the embodiment of the present application. As shown in fig. 7, the channel attention function is established, generated by using the inter-channel relationship of the features. First, the spatial information of the feature map is aggregated in parallel using both average pooling and maximum pooling to generate two different spatial descriptors, $F^c_{avg}$ for average pooling and $F^c_{max}$ for maximum pooling. Average pooling takes the average value of an image region as the pooled value of that region; maximum pooling selects the maximum value of the region. Both descriptors are then sent to a shared network, which consists of a multilayer perceptron with one hidden layer. In order to reduce the parameter overhead, the number of channels is compressed to 1/r of the original, r being the compression coefficient, and then expanded back to the original number of channels. The two results activated by the ReLU activation function are added bitwise, a feature vector is output, and the feature vector is normalized by a sigmoid function to obtain the output result of the channel attention module. The channel attention map $M_C$ is expressed as:
$$M_C(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big) = \sigma\big(W_1(W_0(F^c_{avg})) + W_1(W_0(F^c_{max}))\big)$$
where $M_C(F)$ is the channel attention map when the feature F is input, σ is the nonlinear activation function, $W_1$ is the weight coefficient of layer 1, and $W_0$ is the weight coefficient of layer 0.
Fig. 8 is a schematic diagram of the operation of the spatial attention function provided in an embodiment of the present application. As shown in fig. 8, the spatial attention function is established, generating a spatial attention map by using the spatial relationship between features. To compute the spatial attention map, two pooling operations along the channel axis first aggregate the channel information of the feature map, generating two two-dimensional maps, $F^s_{avg}$ corresponding to average pooling and $F^s_{max}$ corresponding to maximum pooling, which represent the average-pooled and maximum-pooled features across the channels respectively. The two maps are concatenated in series and convolved by a 7 × 7 convolution layer to generate a two-dimensional spatial attention map, which is finally normalized by a sigmoid function to obtain the final attention map $M_S$, expressed as:
$$M_S(F) = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)])\big) = \sigma\big(f^{7\times 7}([F^s_{avg}; F^s_{max}])\big)$$
where $M_S(F)$ is the spatial attention map when the feature F is input and $f^{7\times 7}$ denotes a 7 × 7 convolution operation.
It can be seen that the two attention functions are laid out serially, the channel attention function in front and the spatial attention function behind, forming the attention convolution neural network structure. This addresses the influence of local optima caused by individual differences; sleep staging is not only realized, but realized with higher accuracy.
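A sketch of the convolution attention module following the formulas above: channel attention with a shared two-layer perceptron and compression coefficient r, spatial attention with a 7 × 7 convolution, laid out serially with the channel attention function in front. The compression coefficient and tensor sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """M_C(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))), as above."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(                  # shared network W_1(ReLU(W_0(.)))
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels))
    def forward(self, F):
        b, c, _, _ = F.shape
        avg = self.mlp(F.mean(dim=(2, 3)))         # descriptor from average pooling
        mx = self.mlp(F.amax(dim=(2, 3)))          # descriptor from maximum pooling
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """M_S(F) = sigmoid(f7x7([AvgPool(F); MaxPool(F)])), pooled along channels."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, F):
        avg = F.mean(dim=1, keepdim=True)          # average-pooled map across channels
        mx = F.amax(dim=1, keepdim=True)           # max-pooled map across channels
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Serial layout: channel attention first, then spatial attention."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.ca = ChannelAttention(channels, r)
        self.sa = SpatialAttention()
    def forward(self, F):
        F_prime = self.ca(F) * F              # F'  = M_C(F) (x) F, bitwise multiply
        return self.sa(F_prime) * F_prime     # F'' = M_S(F') (x) F'

out = CBAM(32)(torch.randn(8, 32, 16, 16))
print(out.shape)                              # torch.Size([8, 32, 16, 16])
```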
On the basis of the foregoing embodiment, as a preferred embodiment, performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result includes:
performing feature fusion on the first output feature and the second output feature by using
$$y_i = \mathrm{Softmax}(x_i) = \frac{e^{x_i}}{\sum_{c=1}^{C} e^{x_c}}$$
wherein $y_i$ is the final fusion feature, C is the number of classes of all training samples, and Softmax is the classification function.
Specifically, in this embodiment, the acquired first output feature and second output feature are fused by concatenation, and the final fused sleep feature is then taken as input to a Softmax classifier for sleep stage classification. The automatic interpretation result of the sleep stages is output, and indicators such as accuracy, precision and recall are calculated until the model converges and the accuracy on the test set is stable.
Therefore, the features obtained by the graph convolution neural network and the attention mechanism convolution neural network are fused, realizing the final sleep staging function.
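A minimal sketch of the fusion and classification step; the linear layer ahead of the Softmax is an assumption made here, since the patent names only the concatenation and the Softmax classifier:

```python
import torch
import torch.nn as nn

C = 5                                    # number of classes: W, N1, N2, N3, REM
f1 = torch.randn(8, 64)                  # first output feature (graph conv branch)
f2 = torch.randn(8, 32)                  # second output feature (attention CNN branch)

fused = torch.cat([f1, f2], dim=1)       # merge and connect (concatenation)
classifier = nn.Linear(fused.shape[1], C)        # assumed linear head before Softmax
y = torch.softmax(classifier(fused), dim=1)      # y_i: per-class probabilities
print(y.shape, y.sum(dim=1))             # torch.Size([8, 5]); each row sums to 1
```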
In the above embodiments, the sleep staging method is described in detail, and the present application also provides embodiments corresponding to the sleep staging system. It should be noted that the present application describes embodiments of the system part from two perspectives, one from the perspective of the function module and the other from the perspective of the hardware.
Based on the angle of the functional module, fig. 9 is a structural diagram of a sleep staging system provided in the embodiment of the present application, and as shown in fig. 9, the sleep staging system includes:
an acquisition module 10 for acquiring physiological signals related to sleep;
the calling module 11 is configured to call the graph convolutional neural network and the attention mechanism convolutional neural network to obtain a first output feature corresponding to the graph convolutional neural network and a second output feature corresponding to the attention mechanism convolutional neural network;
the fusion module 12 is configured to perform feature fusion on the first output feature and the second output feature to obtain a feature fusion result;
and the determining module 13 is configured to determine a sleep staging result according to the feature fusion result.
Since the embodiments of the apparatus portion and the method portion correspond to each other, please refer to the description of the embodiments of the method portion for the embodiments of the apparatus portion, which is not repeated here.
The sleep staging system provided by the embodiment corresponds to the method, so the sleep staging system has the same beneficial effects as the method.
Fig. 10 is a structural diagram of a sleep staging device according to an embodiment of the present application, and as shown in fig. 10, the sleep staging device includes: a memory 20 for storing a computer program;
a processor 21 for implementing the steps of the sleep staging method as mentioned in the above embodiments when executing the computer program.
The sleep staging apparatus provided by the present embodiment may include, but is not limited to, an apparatus capable of implementing a sleep staging method, and the like.
The processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The Processor 21 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 21 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 21 may be integrated with a Graphics Processing Unit (GPU) which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 21 may further include an Artificial Intelligence (AI) processor for processing computational operations related to machine learning.
The memory 20 may include one or more computer-readable storage media, which may be non-transitory. The memory 20 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 20 is at least used for storing the following computer program 201, which, after being loaded and executed by the processor 21, can implement the relevant steps of the sleep staging method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 20 may also include an operating system 202, data 203 and the like, and the storage manner may be transient or permanent. The operating system 202 may include Windows, Unix, Linux, etc. The data 203 may include, but is not limited to, data related to the sleep staging method, and the like.
In some embodiments, the sleep staging device may also include a display screen 22, an input-output interface 23, a communication interface 24, a power supply 25, and a communication bus 26.
Those skilled in the art will appreciate that the configuration shown in fig. 10 is not intended to be limiting of sleep staging devices and may include more or fewer components than those shown.
The sleep staging device provided by the embodiment of the application comprises a memory and a processor, wherein the processor can realize the method when executing the program stored in the memory.
Finally, the application also provides a corresponding embodiment of the computer readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps as set forth in the above-mentioned method embodiments.
It is to be understood that if the method in the above embodiments is implemented in the form of software functional units and sold or used as a stand-alone product, it can be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application may be substantially or partially implemented in the form of a software product, which is stored in a storage medium and executes all or part of the steps of the methods of the embodiments of the present application, or all or part of the technical solutions. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The sleep staging methods, systems, devices and media provided herein are described in detail above. The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.

Claims (11)

1. A sleep staging method, comprising:
acquiring physiological signals related to sleep;
calling a graph convolutional neural network and an attention mechanism convolutional neural network to obtain a first output feature corresponding to the graph convolutional neural network and a second output feature corresponding to the attention mechanism convolutional neural network;
performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result;
and determining a sleep staging result according to the feature fusion result.
2. The sleep staging method as claimed in claim 1, further comprising, after acquiring the physiological signals related to sleep:
and carrying out sample enhancement processing on the physiological signal.
3. The sleep staging method of claim 2, wherein the sample enhancement processing of the physiological signal comprises:
synthesizing the physiological signals into a sample set by using $d_{ni} = d_i + \mathrm{rand}(0,1) \times (d_{mi} - d_i)$;
wherein $d_{ni}$ is the $i$-th of the $n$ synthesized samples in the set, $d_i$ is each original sample, and $d_{mi}$ is the $i$-th sample among the $m$ nearest neighbors.
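For illustration only (not part of the claims): the interpolation in claim 3 is the familiar SMOTE-style synthesis, and a minimal NumPy sketch might read as follows. The function name, the Euclidean nearest-neighbor search, and all defaults are assumptions, not details taken from the patent.

```python
import numpy as np

def synthesize_samples(samples, n_new, m_neighbors=5, rng=None):
    """Claim 3 sketch: d_ni = d_i + rand(0,1) * (d_mi - d_i),
    interpolating each picked sample toward one of its m nearest
    neighbors (SMOTE-style sample enhancement)."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.asarray(samples, dtype=float)      # shape: (N, n_features)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(samples))
        d_i = samples[i]
        # m nearest neighbors of d_i by Euclidean distance, excluding itself
        dists = np.linalg.norm(samples - d_i, axis=1)
        neighbors = np.argsort(dists)[1:m_neighbors + 1]
        d_mi = samples[rng.choice(neighbors)]
        new.append(d_i + rng.random() * (d_mi - d_i))  # rand(0,1) interpolation
    return np.vstack(new)
```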
4. The sleep staging method of claim 1, wherein invoking the graph convolutional neural network to obtain the first output feature corresponding to the graph convolutional neural network comprises:
calling an adjacency matrix and a feature matrix;
passing the adjacency matrix and the feature matrix through a first graph convolution layer to extract a first graph convolution result;
pooling the first graph convolution result through a graph pooling layer to obtain a pooling result;
calling a second graph convolution layer to extract the pooling result to obtain a second graph convolution result;
acquiring a read-out result obtained by a read-out layer reading out the second graph convolution result;
processing the read-out result through a multilayer perceptron to obtain a perception result;
and outputting the first output feature according to the perception result.
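To make the graph branch of claim 4 concrete, here is a minimal PyTorch-style sketch (illustration only). The layer widths, the norm-based top-k graph pooling, and the mean read-out are assumptions rather than the patented configuration.

```python
import torch
import torch.nn as nn

class GraphBranch(nn.Module):
    """Claim 4 sketch: graph conv -> graph pool -> graph conv
    -> read-out -> multilayer perceptron."""
    def __init__(self, in_dim=64, hid_dim=128, out_dim=64):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)    # first graph convolution layer
        self.w2 = nn.Linear(hid_dim, hid_dim)   # second graph convolution layer
        self.mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, out_dim))

    def forward(self, a_hat, x):
        # a_hat: (N, N) symmetrically normalized adjacency; x: (N, in_dim) features
        h = torch.relu(self.w1(a_hat @ x))           # first graph convolution result
        # graph pooling: keep the half of the nodes with the largest norm (assumed)
        idx = h.norm(dim=1).topk(max(1, h.size(0) // 2)).indices
        h, a_hat = h[idx], a_hat[idx][:, idx]        # pooling result
        h = torch.relu(self.w2(a_hat @ h))           # second graph convolution result
        readout = h.mean(dim=0)                      # read-out result (mean read-out assumed)
        return self.mlp(readout)                     # perception result -> first output feature
```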
5. The sleep staging method of claim 4, wherein the calling the adjacency matrix comprises:
constructing the adjacency matrix by using $\tilde{A} = A + \lambda I$;
wherein $\tilde{A}$ is the constructed adjacency matrix, $A$ is the original adjacency matrix, $I$ is an identity matrix of the same dimension, and $\lambda$ is a constant coefficient;
performing symmetric normalization on the constructed adjacency matrix by using $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$;
wherein $\hat{A}$ is the symmetrically normalized adjacency matrix and $\tilde{D}$ is the degree matrix of $\tilde{A}$.
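Claim 5 is the standard self-loop-plus-symmetric-normalization step used with graph convolutional networks, so it can be shown directly; λ = 1.0 below is an assumed default, since the claim only calls it a constant coefficient.

```python
import numpy as np

def normalized_adjacency(a, lam=1.0):
    """Claim 5 sketch: A_tilde = A + lam * I, followed by
    D_tilde^{-1/2} @ A_tilde @ D_tilde^{-1/2}."""
    a_tilde = a + lam * np.eye(a.shape[0])       # constructed adjacency matrix
    deg = a_tilde.sum(axis=1)                    # node degrees (positive thanks to self-loops)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))     # D_tilde^{-1/2}
    return d_inv_sqrt @ a_tilde @ d_inv_sqrt     # symmetrically normalized adjacency
```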
6. The sleep staging method of claim 3, wherein invoking the attention mechanism convolutional neural network to obtain the second output feature corresponding to the attention mechanism convolutional neural network comprises:
obtaining the sample set;
calling a first convolution layer to extract the sample set to obtain a first convolution result;
pooling the first convolution result by using a first pooling layer to obtain a first pooling result;
acquiring a processing result obtained by processing the first pooling result through a dropout layer;
calling a second convolution layer to extract the processing result to obtain a second convolution result;
convolving the second convolution result through a convolutional attention module to obtain an attention convolution result;
pooling the attention convolution result by using a second pooling layer to obtain a second pooling result;
and outputting the second output feature according to the second pooling result.
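A minimal PyTorch sketch of the attention branch in claim 6 follows (illustration only); the channel counts, kernel sizes, and 2-D input layout are assumptions, and the attention module here is a stand-in for the CBAM-style block sketched after claim 7.

```python
import torch
import torch.nn as nn

class AttentionCNNBranch(nn.Module):
    """Claim 6 sketch: conv -> pool -> dropout -> conv ->
    convolutional attention module -> pool."""
    def __init__(self, in_ch=1, out_dim=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, 32, kernel_size=3, padding=1)
        self.pool1 = nn.MaxPool2d(2)
        self.drop = nn.Dropout(0.5)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.attn = nn.Identity()        # stand-in; see the CBAM sketch after claim 7
        self.pool2 = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):                # x: (batch, in_ch, H, W)
        h = self.drop(self.pool1(torch.relu(self.conv1(x))))  # first conv/pool/dropout
        h = torch.relu(self.conv2(h))    # second convolution result
        h = self.attn(h)                 # attention convolution result
        h = self.pool2(h).flatten(1)     # second pooling result
        return self.fc(h)                # second output feature
```

With the CBAM class from the next sketch, replacing `nn.Identity()` by `CBAM(channels=64)` realizes the convolutional attention module of claim 7.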
7. The sleep staging method of claim 6, wherein the convolutional attention module comprises a channel attention function and a spatial attention function, and the convolving the second convolution result through the convolutional attention module to obtain the attention convolution result comprises:
performing one-dimensional convolution on the second convolution result by using the channel attention function to obtain a channel convolution result;
performing two-dimensional convolution on the channel convolution result through the spatial attention function to obtain a spatial convolution result;
and outputting the attention convolution result according to the spatial convolution result.
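Claim 7 matches the convolutional block attention module (CBAM): channel attention first, spatial attention second. A sketch under that reading, with the reduction ratio 16 and the 7×7 spatial kernel taken from the original CBAM paper as assumed defaults:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Claim 7 sketch: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                    # x: (batch, C, H, W)
        # channel attention: shared MLP over avg- and max-pooled channel descriptors
        gate = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) +
                             self.mlp(x.amax(dim=(2, 3))))
        x = x * gate[:, :, None, None]                       # channel convolution result
        # spatial attention: 2-D convolution over channel-wise avg/max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))            # attention convolution result
```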
8. The sleep staging method of claim 1, wherein the feature fusing the first output feature and the second output feature to obtain a feature fusion result comprises:
performing feature fusion on the first output feature and the second output feature by using $y_i = \frac{e^{x_i}}{\sum_{c=1}^{C} e^{x_c}}$;
wherein $y_i$ is the final fusion feature, $x_i$ is the corresponding element of the fused first and second output features, $C$ is the number of classes of all training samples, and the formula is the Softmax classification function.
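Read together with claim 1, the fusion step plausibly concatenates the two branch features and applies Softmax over the C classes; in the sketch below the linear projection and the five sleep stages are assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Claim 8 sketch: fuse the two branch features, then Softmax over C classes."""
    def __init__(self, feat_dim=64, num_classes=5):
        super().__init__()
        self.proj = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, first_feat, second_feat):
        x = torch.cat([first_feat, second_feat], dim=-1)   # feature fusion result
        return torch.softmax(self.proj(x), dim=-1)         # y_i = e^{x_i} / sum_c e^{x_c}
```

The sleep staging result of claim 1 is then read off as the class with the largest fused probability, e.g. `probs.argmax(dim=-1)`.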
9. A sleep staging system, comprising:
an acquisition module for acquiring physiological signals related to sleep;
a calling module for calling the graph convolutional neural network and the attention mechanism convolutional neural network to obtain a first output feature corresponding to the graph convolutional neural network and a second output feature corresponding to the attention mechanism convolutional neural network;
a fusion module for performing feature fusion on the first output feature and the second output feature to obtain a feature fusion result;
and a determining module for determining a sleep staging result according to the feature fusion result.
10. A sleep staging apparatus, comprising: a memory for storing a computer program;
and a processor for implementing the steps of the sleep staging method according to any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the sleep staging method according to any one of claims 1 to 8.
CN202310094153.6A 2023-02-08 2023-02-08 Sleep staging method, system, device and medium Pending CN115969329A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310094153.6A CN115969329A (en) 2023-02-08 2023-02-08 Sleep staging method, system, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310094153.6A CN115969329A (en) 2023-02-08 2023-02-08 Sleep staging method, system, device and medium

Publications (1)

Publication Number Publication Date
CN115969329A true CN115969329A (en) 2023-04-18

Family

ID=85963277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310094153.6A Pending CN115969329A (en) 2023-02-08 2023-02-08 Sleep staging method, system, device and medium

Country Status (1)

Country Link
CN (1) CN115969329A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110811558A (en) * 2019-11-18 2020-02-21 郑州大学 Sleep arousal analysis method based on deep learning
CN115349821A (en) * 2022-06-15 2022-11-18 深圳技术大学 Sleep staging method and system based on multi-modal physiological signal fusion
CN115054270A (en) * 2022-06-17 2022-09-16 上海大学绍兴研究院 Sleep staging method and system for extracting sleep spectrogram features based on GCN
CN115530847A (en) * 2022-09-30 2022-12-30 哈尔滨理工大学 Electroencephalogram signal automatic sleep staging method based on multi-scale attention
CN115620484A (en) * 2022-09-30 2023-01-17 重庆长安汽车股份有限公司 Fatigue driving alarm method and device, vehicle-mounted unit system, vehicle and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI, MENGLEI; CHEN, HONGBO; CHENG, ZIXUE: "An Attention-Guided Spatiotemporal Graph Convolutional Network for Sleep Stage Classification", LIFE-BASEL, vol. 12, no. 5, 21 April 2022 (2022-04-21), pages 2 - 10 *
李青香: "Research on Sleep Staging Methods Based on Deep Learning", China Master's Theses Full-text Database, Medicine and Health Sciences, no. 1, 15 January 2022 (2022-01-15) *
郑和裕, 林美娜: "Sleep Apnea Detection Method Based on Dilated Convolution and Attention Mechanism", Automation & Information Engineering, vol. 43, no. 2, 28 April 2022 (2022-04-28) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116421152A (en) * 2023-06-13 2023-07-14 长春理工大学 Sleep stage result determining method, device, equipment and medium
CN116421152B (en) * 2023-06-13 2023-08-22 长春理工大学 Sleep stage result determining method, device, equipment and medium
CN116720545A (en) * 2023-08-10 2023-09-08 中国医学科学院药用植物研究所 Information flow control method, device, equipment and medium of neural network
CN116720545B (en) * 2023-08-10 2023-10-27 中国医学科学院药用植物研究所 Information flow control method, device, equipment and medium of neural network

Similar Documents

Publication Publication Date Title
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
CN115969329A (en) Sleep staging method, system, device and medium
CN111144329B (en) Multi-label-based lightweight rapid crowd counting method
US20230334632A1 (en) Image recognition method and device, and computer-readable storage medium
CN111291604A (en) Face attribute identification method, device, storage medium and processor
CN106874921A (en) Image classification method and device
CN110321805B (en) Dynamic expression recognition method based on time sequence relation reasoning
CN112990008B (en) Emotion recognition method and system based on three-dimensional characteristic diagram and convolutional neural network
CN112465069A (en) Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN
CN110674774A (en) Improved deep learning facial expression recognition method and system
CN114677730A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113554084A (en) Vehicle re-identification model compression method and system based on pruning and light-weight convolution
Asghar et al. Semi-skipping layered gated unit and efficient network: hybrid deep feature selection method for edge computing in EEG-based emotion classification
Asyhar et al. Implementation LSTM Algorithm for Cervical Cancer using Colposcopy Data
CN113420651B (en) Light weight method, system and target detection method for deep convolutional neural network
CN117574059A (en) High-resolution brain-electrical-signal deep neural network compression method and brain-computer interface system
CN113158970A (en) Action identification method and system based on fast and slow dual-flow graph convolutional neural network
Zhang et al. A novel DenseNet Generative Adversarial network for Heterogenous low-Light image enhancement
Subbarao et al. Detection of Retinal Degeneration via High-Resolution Fundus Images using Deep Neural Networks
CN115330759B (en) Method and device for calculating distance loss based on Hausdorff distance
CN116898451A (en) Method for realizing atrial fibrillation prediction by using neural network with multi-scale attention mechanism
CN110569790B (en) Residential area element extraction method based on texture enhancement convolutional network
CN112926502B (en) Micro expression identification method and system based on coring double-group sparse learning
CN114997230A (en) Signal-oriented characteristic positioning display and quantitative analysis method and device
CN114626408A (en) Electroencephalogram signal classification method and device, electronic equipment, medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination