CN114550907A - Epilepsy detection system - Google Patents

Epilepsy detection system

Info

Publication number
CN114550907A
Authority
CN
China
Prior art keywords
epilepsy
data
monitoring data
dynamic
static
Prior art date
Legal status
Pending
Application number
CN202210054441.4A
Other languages
Chinese (zh)
Inventor
刘思行
朱李晨
江佩芳
龚健强
Current Assignee
Hangzhou Xingmai Technology Co ltd
Childrens Hospital of Zhejiang University School of Medicine
Original Assignee
Hangzhou Xingmai Technology Co ltd
Childrens Hospital of Zhejiang University School of Medicine
Priority date
Filing date
Publication date
Application filed by Hangzhou Xingmai Technology Co ltd and Childrens Hospital of Zhejiang University School of Medicine
Priority to CN202210054441.4A
Publication of CN114550907A
Legal status: Pending

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B 5/4076 Diagnosing or monitoring particular conditions of the nervous system
    • A61B 5/4094 Diagnosing or monitoring seizure diseases, e.g. epilepsy
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/74 Details of notification to user or communication with user or patient; user input means
    • A61B 5/746 Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Mathematical Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Neurology (AREA)
  • Physiology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Neurosurgery (AREA)
  • Databases & Information Systems (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Fuzzy Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Signal Processing (AREA)
  • Psychology (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present application relates to an epilepsy detection system comprising a data acquisition system and a cloud server connected to it. The data acquisition system collects multi-modal data of a user, uploads the data to the cloud server, and issues an epilepsy early warning according to the epilepsy prediction result returned by the cloud server; the multi-modal data comprise dynamic monitoring data and static monitoring data. The cloud server inputs the multi-modal data into an epilepsy detection model to obtain the epilepsy prediction result; the epilepsy detection model is a neural network model trained on multi-modal data samples of the user and the corresponding epileptic seizure events. The system improves both the pathological analysis capability of the model and the accuracy of epilepsy detection.

Description

Epilepsy detection system
Technical Field
The application relates to the technical field of auxiliary medical treatment, in particular to an epilepsy detection system.
Background
Epilepsy is a neurological disorder whose symptoms may include transient loss of consciousness, severe convulsions, and the like. Because seizures are unpredictable and monitoring conditions are limited, the quality of life of epileptic patients is significantly reduced. Existing clinical equipment is bulky and inconvenient to carry, which limits remote, real-time monitoring. In recent years, with the development of sensor technology, automatic monitoring devices can provide continuous dynamic monitoring and accurately capture physiological signals. By monitoring the patient's health information in real time with such devices and deploying an epilepsy prediction algorithm in the cloud for remote diagnosis, personalized epilepsy detection becomes possible.
However, most existing epilepsy detection algorithms for automatic monitoring devices analyze a single signal, such as an acceleration signal or an electrocardiosignal. They remain limited in both the dimensionality of the acquired data and the processing capability of the algorithm, so the detection accuracy for clinical epileptic events is not high.
Disclosure of Invention
The embodiments of the present application provide an epilepsy detection system that at least solves the problem in the related art of low detection accuracy when epilepsy detection is based on a single signal.
In a first aspect, an embodiment of the present application provides an epilepsy detection system, including: a data acquisition system and a cloud server connected to the data acquisition system, wherein,
the data acquisition system is used for acquiring multi-modal data of a user, uploading the multi-modal data to the cloud server, and carrying out epilepsy early warning according to an epilepsy prediction result returned by the cloud server; the multimodal data comprises dynamic monitoring data and static monitoring data;
the cloud server is used for inputting the multi-modal data into an epilepsy detection model to obtain an epilepsy prediction result; the epilepsy detection model is a neural network model trained on multi-modal data samples of a user and the corresponding epileptic seizure events.
In some of these embodiments, the data acquisition system comprises a wearable device and a user terminal, wherein,
the wearable device is used for acquiring dynamic monitoring data of a user in real time and uploading the dynamic monitoring data to the cloud server;
the user terminal is used for acquiring static monitoring data of a user, uploading the static monitoring data to the cloud server, and carrying out epilepsy early warning according to an epilepsy prediction result returned by the cloud server.
In some embodiments, the cloud server is specifically configured to:
acquiring dynamic monitoring data of a user, and determining first characteristic information according to the dynamic monitoring data and a first sub-model; the first sub-model is a model trained using a recurrent neural network;
acquiring the static monitoring data, and determining second characteristic information according to the static monitoring data, the first characteristic information and a second sub-model; the second sub-model is a model trained using an attention mechanism;
and generating an epilepsy prediction result according to the second characteristic information.
In some embodiments, the obtaining dynamic monitoring data of the user and determining the first feature information according to the dynamic monitoring data and the first sub-model includes:
acquiring multi-dimensional dynamic monitoring data; wherein the dynamic monitoring data comprises at least two dynamic vectors;
inputting the dynamic monitoring data into a trained graph recurrent neural network to obtain a dynamic feature matrix; wherein the dimension of the dynamic feature matrix is the same as the dimension of the dynamic monitoring data.
In some embodiments, the inputting of the dynamic monitoring data into a trained graph recurrent neural network to obtain a dynamic feature matrix includes:
acquiring a data relation graph among the dynamic vectors corresponding to the dynamic monitoring data, and inputting the data relation graph and the dynamic monitoring data into the graph recurrent neural network to output the dynamic feature matrix.
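As an illustration, one plausible construction of such a data relation graph builds an adjacency matrix from pairwise correlations between monitoring channels. The correlation-threshold rule, channel count, and window length below are assumptions for the sketch; this application does not fix a particular construction.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative assumption: the data relation graph among the dynamic vectors
# is built from pairwise correlations between monitoring channels.
dynamic = rng.standard_normal((5, 200))          # 5 channels, 200 time steps
corr = np.corrcoef(dynamic)                      # pairwise channel correlation
adjacency = (np.abs(corr) > 0.1).astype(float)   # keep only strong relations
np.fill_diagonal(adjacency, 0.0)                 # drop self-loops
print(adjacency.shape)  # (5, 5)
```

Both `adjacency` and `dynamic` would then be fed to the graph recurrent neural network to produce the dynamic feature matrix.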
In some embodiments, the obtaining the static monitoring data, and determining second feature information according to the static monitoring data, the first feature information, and the second submodel includes:
acquiring static monitoring data;
determining a corresponding static feature vector according to the static monitoring data;
inputting the dynamic feature matrix and the static feature vector into a multi-attention mechanism model to output second feature information.
In some embodiments, the inputting the dynamic feature matrix and the static feature vector into a multi-attention mechanism model to output second feature information comprises:
determining a plurality of feature sub-vectors according to the dynamic feature matrix and the static feature vector;
respectively projecting the plurality of feature sub-vectors to a query space, a key space and a value space to obtain a query vector, a key vector and a value vector for each feature sub-vector;
and calculating to obtain second characteristic information according to the query vectors, the key vectors and the value vectors of the plurality of characteristic sub-vectors.
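A minimal numerical sketch of this projection-and-weighting scheme follows. The dimensions, the random projection matrices, and the single-head form are illustrative assumptions standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(sub_vectors, d_model):
    # Random matrices stand in for the learned query/key/value projections.
    Wq = rng.standard_normal((d_model, d_model))
    Wk = rng.standard_normal((d_model, d_model))
    Wv = rng.standard_normal((d_model, d_model))
    Q, K, V = sub_vectors @ Wq, sub_vectors @ Wk, sub_vectors @ Wv
    weights = softmax(Q @ K.T / np.sqrt(d_model))  # attention over sub-vectors
    return weights @ V                             # weighted second feature info

subs = rng.standard_normal((4, 8))  # 4 feature sub-vectors of dimension 8
second_features = attend(subs, d_model=8)
print(second_features.shape)        # (4, 8)
```

A multi-head variant would repeat this with several independent projection triples and concatenate the per-head outputs.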
In some of these embodiments, the generating an epilepsy prediction from the second characteristic information comprises:
determining a plurality of pieces of hidden representation information according to the second characteristic information;
calculating corresponding attention weights for the plurality of pieces of hidden representation information respectively;
generating a prediction probability vector according to the plurality of hidden representation information and the corresponding attention weight;
and determining an epilepsy prediction result according to the prediction probability vector and the prediction target.
In some embodiments, the user terminal is further configured to:
acquiring feedback information given by the user in response to the epilepsy early warning and uploading it to the cloud server, so that the cloud server updates the user's epilepsy detection model based on the feedback information.
In some embodiments, the cloud server is further configured to:
determining a loss value based on the epilepsy prediction result of the user and the corresponding feedback information;
updating model parameters in the epilepsy detection model until the loss value is less than a desired threshold.
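As an illustration, the feedback-driven update loop can be sketched with a linear model and squared loss standing in for the patent's neural network; all names, shapes, and the synthetic feedback targets below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: X holds the features behind past predictions and y the
# targets implied by user feedback.
X = rng.standard_normal((32, 6))
true_w = rng.standard_normal(6)
y = X @ true_w                      # feedback-consistent targets (assumed)
w = np.zeros(6)                     # model parameters to be updated

def loss_and_grad(w):
    err = X @ w - y
    return np.mean(err ** 2), 2 * X.T @ err / len(y)

threshold = 0.01                    # desired loss threshold
loss, grad = loss_and_grad(w)
for _ in range(2000):               # update until the loss falls below it
    if loss < threshold:
        break
    w -= 0.1 * grad                 # gradient-descent parameter update
    loss, grad = loss_and_grad(w)
print(loss < threshold)             # True once the update loop has converged
```

The real system would backpropagate through the epilepsy detection model instead of updating a single weight vector, but the stopping criterion is the same.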
Compared with the related art, the epilepsy detection system provided by the embodiments of the present application comprises a data acquisition system and a cloud server connected to it. The data acquisition system collects multi-modal data of a user, uploads the data to the cloud server, and issues an epilepsy early warning according to the epilepsy prediction result returned by the cloud server; the cloud server inputs the multi-modal data into an epilepsy detection model to obtain the epilepsy prediction result. The epilepsy detection model is a neural network model trained on multi-modal data samples of the user and the corresponding epileptic seizure events. Because the model is trained on multi-modal samples, it characterizes multiple physiological parameters using both dynamic and static monitoring data. As a joint multi-data analysis model, it better matches practical application requirements and improves both the pathological analysis capability of the model and the accuracy of epilepsy detection.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of the structure of an epilepsy detection system in one embodiment of the present application;
FIG. 2 is a schematic diagram of a training process of an epilepsy detection model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an update process of an epilepsy detection model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the structure of an epilepsy detection model in one embodiment of the present application;
FIG. 5 is a network structure diagram of the first sub-model in an embodiment of the present application.
Description of reference numerals: 102. data acquisition system; 1021. wearable device; 1022. user terminal; 104. cloud server.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application do not limit quantity and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like are not limited to physical or mechanical connections and can include electrical connections, whether direct or indirect. The term "plurality" means two or more. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean A alone, A and B together, or B alone. The character "/" generally indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering.
Epilepsy detection uses a computer or other tools to distinguish epileptic seizures from non-epileptic activity with a detection algorithm. With the continuous development of digital signal processing and computer technology, epilepsy detection techniques keep deepening and innovating; learning algorithms proposed over time include dictionary learning, artificial neural networks, support vector machines, collaborative representation, Bayesian linear discriminant analysis, and the like.
The epilepsy detection system of the present application can be applied to the epilepsy detection process. As shown in fig. 1, the data acquisition system 102 communicates with the cloud server 104 over a network. The data acquisition system 102 acquires multi-modal data of the user and uploads them to the cloud server 104. The cloud server 104, on which an epilepsy detection algorithm is deployed, receives the multi-modal data, inputs them into an epilepsy detection model, obtains an epilepsy prediction result, and returns it to the data acquisition system 102 for epilepsy early warning, thereby achieving remote, real-time monitoring of epileptic events.
The following describes embodiments of the present application by taking the epilepsy detection system as an example.
This embodiment provides an epilepsy detection system comprising a data acquisition system 102 and a cloud server 104 connected to the data acquisition system 102.
The data acquisition system 102 is configured to acquire multi-modal data of a user, upload the multi-modal data to the cloud server 104, and issue an epilepsy early warning according to the epilepsy prediction result returned by the cloud server 104. The data acquisition system 102 can acquire multi-modal data of the user continuously and in real time, and includes, but is not limited to, any one or a combination of a user terminal 1022, a tablet, a wearable device 1021, and sensors. Compared with single-signal analysis, the multi-modal data in this embodiment include dynamic monitoring data and static monitoring data, and joint analysis based on the multi-modal data can significantly improve the detection accuracy of clinical events. The dynamic monitoring data comprise time-series data acquired by different data-receiving channels of different monitoring instruments within a preset time period, such as the user's photoplethysmography (PPG) signal, triaxial acceleration signal, heart rate, blood oxygen saturation, blood pressure, and body temperature. The static monitoring data represent a static baseline of individual differences and include, but are not limited to, the user's basic personal information such as name, gender, age, and past medical history.
The cloud server 104 is deployed with an epilepsy detection algorithm and is configured to input the multi-modal data into an epilepsy detection model to obtain an epilepsy prediction result. As shown in fig. 2, the epilepsy detection model is a neural network model trained on multi-modal data samples of the user and the corresponding epileptic seizure events, and can be trained at system initialization. Specifically, in the initialization phase the data acquisition system 102 acquires multi-modal data of the user to obtain multi-modal data samples, which include dynamic monitoring data samples (PPG signals, acceleration signals, physiological parameters, and the like) and static monitoring data samples; meanwhile, clinicians report the onset time and clinical manifestations of epileptic events that have occurred to the user, yielding the corresponding epileptic seizure events. The neural network model is then trained with the multi-modal data samples as input and the corresponding epileptic seizure events as output, producing the epilepsy detection model.
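A toy sketch of this initialization-time training setup follows. The channel counts, window length, synthetic data, and the nearest-centroid classifier are illustrative stand-ins for the patent's neural network and real monitoring signals.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical shapes: 5 dynamic channels (e.g. PPG, acceleration, heart
# rate, SpO2, temperature) sampled 50 times per window, plus a 3-dim static
# vector (e.g. age, gender, history flag). Labels correspond to clinician-
# reported seizure events.
def make_sample(seizure):
    dynamic = rng.standard_normal((5, 50)) + (2.0 if seizure else 0.0)
    static = rng.standard_normal(3)
    return np.concatenate([dynamic.ravel(), static]), int(seizure)

pairs = [make_sample(s) for s in [True] * 20 + [False] * 20]
X = np.stack([x for x, _ in pairs])
y = np.array([label for _, label in pairs])

# A nearest-centroid rule stands in for the patent's neural network model.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

accuracy = np.mean([predict(x) == label for x, label in pairs])
print(accuracy)  # 1.0 on this well-separated toy data
```

The point of the sketch is the data shape: each training pair fuses a flattened dynamic window with a static vector and is labeled by a reported seizure event.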
In summary, the epilepsy detection system provided in the embodiments of the present application comprises a data acquisition system and a cloud server connected to it. The data acquisition system collects multi-modal data of a user, uploads the data to the cloud server, and issues an epilepsy early warning according to the epilepsy prediction result returned by the cloud server; the cloud server inputs the multi-modal data into an epilepsy detection model to obtain the epilepsy prediction result. The epilepsy detection model is a neural network model trained on multi-modal data samples of the user and the corresponding epileptic seizure events. Because the model is trained on multi-modal samples, it characterizes multiple physiological parameters using both dynamic and static monitoring data; as a joint multi-data analysis model, it better matches practical application requirements and improves both the pathological analysis capability of the model and the accuracy of epilepsy detection.
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
On the basis of the above embodiments, in some of them the data acquisition system 102 includes a wearable device 1021 and a user terminal 1022. The wearable device 1021 and the user terminal 1022 may be two independent devices or may be integrated into one device; the present application is not limited in this respect.
The wearable device 1021 is used for collecting dynamic monitoring data of the user in real time and uploading them to the cloud server 104. The wearable device 1021 may be a wearable accessory such as a smart watch or a wearable garment. In some embodiments, sensors may be mounted on the wearable device 1021 to acquire the dynamic monitoring data, for example low-cost continuous cardiovascular monitoring based on PPG technology.
The user terminal 1022 may be a mobile device such as a mobile phone, and the user terminal 1022 may acquire static monitoring data of a user and upload the static monitoring data to the cloud server 104, and perform epilepsy early warning according to an epilepsy prediction result returned by the cloud server 104, thereby implementing real-time automatic detection of epilepsy.
On the basis of the foregoing embodiments, in some embodiments, the cloud server 104 is specifically configured to perform the following steps:
step S201, acquiring dynamic monitoring data of a user, and determining first characteristic information according to the dynamic monitoring data and a first sub-model.
Specifically, after the dynamic monitoring data of the user are acquired, a trained first sub-model extracts the correlation features among the internal components of the dynamic monitoring data to obtain the first feature information. The first sub-model is trained using a recurrent neural network, which may be an original recurrent neural network (RNN) or an improved variant such as a long short-term memory (LSTM) network or a graph recurrent neural network.
Step S202, acquiring the static monitoring data, and determining second characteristic information according to the static monitoring data, the first characteristic information and the second submodel; the second sub-model is a model obtained by training with an attention mechanism.
And S203, generating an epilepsy prediction result according to the second characteristic information.
In this embodiment, the attention mechanism is used to calculate the contribution of the input data to the output data, i.e., the respective weight of each input. The attention mechanism may be a self-attention mechanism or a multi-head attention mechanism. The static monitoring data may represent, in vector form, static baseline information on individual differences, including age, gender, and the like.
Specifically, after the static monitoring data are acquired, they are input together with the first feature information into a trained second sub-model, which computes the weights between the static monitoring data and the first feature information and outputs the weighted second feature information. The second feature information represents a predictive characterization of the associations among the monitoring data in the multi-modal data, and a corresponding epilepsy prediction result can be determined from it.
Through the above steps, the first sub-model obtained by recurrent-neural-network training can extract the association features of the dynamic monitoring data, breaking through single linear feature computation and providing nonlinear association modeling. Meanwhile, the method extracts features from the dynamic monitoring data and inputs them together with the static monitoring data into the second sub-model for weight assignment, obtaining second feature information that represents the weighted result over the monitoring data, thereby achieving a joint representation of the static and dynamic monitoring data in the multi-modal data.
On the basis of the foregoing embodiments, in some embodiments, the acquiring dynamic monitoring data of the user, and determining first feature information according to the dynamic monitoring data and the first sub-model includes:
step S2011, multi-dimensional dynamic monitoring data is obtained; wherein the dynamic monitoring data comprises at least two dynamic vectors.
In this embodiment, the dynamic monitoring data may be obtained as a time-series data matrix from different data-receiving channels within a preset time period, covering different indices of the body's state such as heart rate, blood oxygen saturation (SpO2), and the photoplethysmography (PPG) signal. The dynamic monitoring data comprise at least two dynamic vectors, each representing the time-series data received by one data-receiving channel within the preset time period.
Step S2012, inputting the dynamic monitoring data into a trained graph recurrent neural network to obtain a dynamic feature matrix.
In this embodiment, the first sub-model is a trained graph recurrent neural network whose input is the matrix formed by the dynamic monitoring data and whose output is a dynamic feature matrix with the same dimensions as the input dynamic monitoring data.
Illustratively, the dynamic monitoring data may include multi-channel time series data such as heart rate, blood oxygen saturation, blood pressure, body temperature and the photoplethysmography (PPG) signal, which may be recorded as $X_{dynamic}$ and represented as follows:

$$X_{dynamic}=\begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}$$

where m denotes the number of signals and n denotes the length of the sequence sampled within a time slice t.
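As an illustrative sketch (NumPy assumed; the channel names and readings below are hypothetical, not from the patent's dataset), stacking per-channel time series into the m × n matrix described above might look like:

```python
import numpy as np

# Hypothetical per-channel time series sampled within one time slice t.
channels = {
    "heart_rate": [72, 74, 73, 75],
    "spo2":       [98, 97, 98, 98],
    "ppg":        [0.12, 0.15, 0.11, 0.14],
}

# Stack the m channels into an m x n dynamic monitoring matrix X_dynamic:
# one row per signal, one column per sample in the time slice.
X_dynamic = np.array([channels[name] for name in channels], dtype=float)

m, n = X_dynamic.shape  # m = number of signals, n = sequence length
```

Each row of `X_dynamic` is one dynamic vector, i.e. the time series of one data receiving channel.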
In this embodiment, the graph recurrent neural network may be the graph long short-term memory unit (Graph LSTM). The graph recurrent neural network structure comprises a forgetting gate, an input gate, an output gate and an intermediate state. The forgetting gate controls, with a certain probability, whether the hidden state of the previous layer is forgotten, combines it with the current input data, and passes the result to a sigmoid function. The input gate receives the previous hidden-state output and the current input information and controls, with a certain probability, whether the intermediate state is passed into the hidden-state output of the current time period; the value of the output gate and the value of the intermediate state at the current time are multiplied element-wise to produce the final output value of the Graph LSTM unit.
Of course, in other embodiments, the dynamic feature matrix may also be obtained by a gated recurrent unit (GRU) or other improved variants of the recurrent neural network, which are not described herein again.
Through the above steps, the dynamic monitoring data is input into the first sub-model obtained by training the graph recurrent neural network, the association features of the dynamic monitoring data can be extracted, and the single linear feature calculation mode is broken through, so that nonlinear association calculation capability is achieved, the problem of weak association of multi-modal characterization data is solved, and strong association of the characterization data is achieved.
On the basis of the foregoing embodiments, in some embodiments, the inputting the dynamic monitoring data into a well-trained graph-recurrent neural network to obtain a dynamic feature matrix includes:
and acquiring a data relation graph among all corresponding dynamic vectors of the dynamic monitoring data, and inputting the data relation graph and the dynamic monitoring data into the graph recurrent neural network to output the dynamic feature matrix.
In this embodiment, in order to characterize the interrelations between different signals, a data relationship graph over all the dynamic vectors of the dynamic monitoring data may be generated by an adjacency matrix, and the data relationship graph and the dynamic monitoring data are input to the graph recurrent neural network to output the dynamic feature matrix. The adjacency matrix (Adjacency matrix) is an $M \times M$ matrix whose elements are weights; the adjacency relation is computed as the edge $e_{ij}$ connecting vertices $v_i$ and $v_j$. If an adjacency relation $e_{ij}$ exists between the dynamic vectors of two different receiving channels, i.e. the nodes are adjacent, then $a_{ij} \neq 0$; otherwise $a_{ij} = 0$. The adjacency matrix is defined as:

$$A=(a_{ij})_{M\times M},\qquad a_{ij}=\begin{cases}\varepsilon, & (v_i,v_j)\in E\\ 0, & \text{otherwise}\end{cases}$$

where $\varepsilon$ is any real number, $A$ is a real matrix of order M, and M denotes the number of received channel data types.

The adjacency matrix represents the associations among the data of the M different receiving channels, and the data relationship graph is generated from the adjacency matrix. The data relationship graph consists of vertices (Vertex), representing receiving channel data, and edges (Edges), representing association relations, and is defined as follows:

G = (V, E)

where $V=\{v_i \mid i=1,2,\dots,m\}$ denotes the set of all vertices, $v_i=[x_{i1},x_{i2},\dots,x_{in}]$ denotes the i-th signal entity, $E=\{e_{ij} \mid (v_i,v_j)\in V\}$ denotes the set of edges connecting the vertices, and $e_{ij}$ denotes the association between the i-th and j-th signal entities.
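A minimal sketch of the adjacency-matrix and graph construction just described; the edge set and the value of ε are illustrative assumptions, not taken from the patent:

```python
import numpy as np

EPS = 0.5  # epsilon: any real number used as the edge weight (illustrative)

def adjacency_matrix(m, edges, eps=EPS):
    """Build the M x M real adjacency matrix: a_ij = eps when an
    adjacency relation e_ij exists between channels i and j, else 0."""
    A = np.zeros((m, m))
    for i, j in edges:
        A[i, j] = A[j, i] = eps  # undirected association between channels
    return A

# Hypothetical adjacency among m = 3 receiving channels
# (e.g. heart rate <-> SpO2, heart rate <-> PPG).
edges = [(0, 1), (0, 2)]
A = adjacency_matrix(3, edges)

# The data relationship graph G = (V, E): vertices are channel indices,
# edges are the non-zero entries of A.
V = list(range(3))
E = [(i, j) for i in V for j in V if i < j and A[i, j] != 0]
```

The graph `G = (V, E)` and the matrix `A` carry the same association information; the matrix form is what the Graph LSTM consumes.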
It should be noted that, in other embodiments, the function served here by the adjacency matrix, i.e. representing relations among receiving-channel data, may also be implemented with a degree matrix (Degree matrix) or a neighborhood (Neighborhood) representation; details are not repeated here.
Through the above steps, the data among all the dynamic vectors of the dynamic monitoring data are associated through the adjacency matrix, a data relationship graph is constructed from it to describe the associations among the dynamic vectors, and the data relationship graph and the dynamic monitoring data are input together into the first sub-model trained from the graph recurrent neural network, so that the association features of the dynamic monitoring data can be extracted more accurately, the single linear feature calculation mode is broken through, the nonlinear association calculation capability is improved, the problem of weak association of multi-modal characterization data is solved, and strong association of the characterization data is achieved.
On the basis of the foregoing embodiments, in some of the embodiments, the inputting the dynamic monitoring data into a well-trained graph recurrent neural network to obtain a dynamic feature matrix includes:
acquiring a low-dimensional type weight matrix; wherein the low-dimensional type weight matrix is used for reducing the data dimension of the dynamic monitoring data and the potential dependencies among the dynamic vectors;
multiplying all the dynamic vectors by unit vectors, and multiplying the dynamic vectors by the low-dimensional type weight matrix to obtain a low-dimensional embedded matrix;
and inputting the low-dimensional embedded matrix into the graph recurrent neural network to output a dynamic feature matrix.
In this embodiment, in order to reduce the number of parameters and the potential dependencies among edge types, the Graph LSTM uses a low-dimensional embedded representation of the type weight matrix. The low-dimensional type weight matrix is a fully trained weight matrix: a three-dimensional tensor of size l × l × m, where l is the dimension of the current first feature information and m is the number of received channel data types, i.e. the number of dynamic vectors of the dynamic monitoring data, which equals the number of adjacent nodes.
Through the above steps, the trained low-dimensional type weight matrix is multiplied with all the dynamic vectors and the unit vectors to obtain a low-dimensional embedded matrix, thereby reducing the dimension of the relations among the dynamic vectors. This reduces the extra calculation cost of the adjacency-matrix dimension when there are many different channel data types, solves the problems of high calculation cost and weak association of characterization data in multi-modal data processing, and achieves the purposes of reducing the calculation cost of multi-modal data and improving prediction efficiency.
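The tensor contraction behind this low-dimensional embedding can be sketched as follows; random weights stand in for the trained l × l × m type-weight tensor, so only the shapes are meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

l, m = 4, 3  # l: feature dimension, m: number of channel types / adjacent nodes
U = rng.normal(size=(l, l, m))   # stands in for the trained type-weight tensor
H = rng.normal(size=(l, m))      # hidden states of the m adjacent nodes

def tensor_dot(T, A):
    """T x_T A = sum_k T[:, :, k] @ A[:, k]: fold the per-type weight
    slices into a single l-dimensional embedded vector."""
    return sum(T[:, :, k] @ A[:, k] for k in range(T.shape[2]))

z = tensor_dot(U, H)  # low-dimensional embedded result for one node
```

Each edge type gets its own l × l slice, but the result collapses to one l-dimensional vector, which is what keeps the parameter count from growing with the adjacency-matrix dimension.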
On the basis of the foregoing embodiments, in some embodiments, the obtaining the static monitoring data, and determining second feature information according to the static monitoring data, the first feature information, and the second submodel includes:
step S2021, static monitoring data is acquired.
In this embodiment, two problems must be addressed: the association among multi-channel dynamic vectors, and the fact that in clinical practice the medical interpretation of dynamic monitoring data is affected by individual differences. After the first feature information is determined from the dynamic monitoring data and the first sub-model, a second sub-model is therefore introduced to add the user's static monitoring data, further improving the personalized detection performance of the model. The static monitoring data includes, but is not limited to, the user's basic personal information, such as name, gender, age and past medical history.
Step S2022, determining a corresponding static feature vector according to the static monitoring data.
In some embodiments, the static monitoring data may be digitally encoded to obtain the corresponding static feature vector; in other embodiments, the static monitoring data may be projected into a static feature vector. The present application is not limited in this respect.
Step S2023, inputting the dynamic feature matrix and the static feature vector to a multi-head attention mechanism model to output second feature information.
In this embodiment, the dynamic feature matrix and the static feature vector form a multi-modal data hidden state matrix, the multi-modal data hidden state matrix is input to a multi-head attention mechanism model, and the dynamic monitoring data and the static monitoring data are fused through the multi-head attention mechanism to obtain second feature information.
Through the above steps, by introducing the static monitoring data into the second sub-model, the associations among all the dynamic vectors can be further measured against individual differences, and the second sub-model outputs different weight assignment results according to those differences, thereby realizing the simultaneous representation of the static and dynamic monitoring data in the multi-modal data, solving the problems of high calculation cost and weak association of characterization data in multi-modal data processing, reducing the calculation cost of multi-modal data, achieving strong association of the characterization data, and improving the accuracy and generalization capability of the model.
On the basis of the foregoing embodiments, in some of the embodiments, the inputting the dynamic feature matrix and the static feature vector into a multi-head attention mechanism model to output second feature information includes:
step S2023A, determining a plurality of feature subvectors according to the dynamic feature matrix and the static feature vector.
In this embodiment, the dynamic feature matrix includes at least two dynamic vectors, the static feature vector includes at least one static vector, and the dynamic vector and the static vector include a plurality of feature sub-vectors.
Step S2023B, projecting the plurality of feature sub-vectors to a query space, a key space, and a value space, respectively, to obtain a query vector, a key vector, and a value vector corresponding to the feature sub-vectors.
Illustratively, the query vector, key vector and value vector are calculated as follows:

$$q_i,\;k_i,\;v_i = W_q\cdot h_i,\;W_k\cdot h_i,\;W_v\cdot h_i$$

where $h_i$ is a feature sub-vector, and $q_i$, $k_i$, $v_i$ are the query, key and value vectors respectively, obtained by projecting the input vector into the query space, key space and value space. $W_q$, $W_k$ and $W_v$ are projection matrices; the projection matrix parameters are not shared among different multi-head attention sublayers, so that the interrelations among the input vectors can be captured in different spaces. The output of the multi-head attention sublayer contains the global vector features after concatenation and a fully connected layer, thereby realizing information fusion.
Step S2023C, calculating to obtain second feature information according to the query vector, the key vector, and the value vector of the plurality of feature sub-vectors.
In this embodiment, the query vector, the key vector, and the value vector of a plurality of feature sub-vectors are calculated to obtain a weight assignment result of each vector attention head, so as to obtain second feature information. Specifically, the attention probability distribution of each vector is calculated according to the query vectors and the key vectors of the plurality of feature sub-vectors, and then the weight distribution result of the attention head of each vector is obtained by using the attention probability distribution and the corresponding value vector, so that the second feature information is obtained.
Illustratively, the second feature information is obtained by the following calculation:
In order to take calculation efficiency into account, this embodiment adopts the scaled dot-product (Scaled dot-product) as the attention function. The specific calculation formulas are as follows:

$$\alpha_{ij}=\mathrm{softmax}\!\left(\frac{q_i\cdot k_j}{\sqrt{d_k}}\right)$$

$$\mathrm{head}_i=\sum_j \alpha_{ij}\, v_j$$

where $\alpha$ denotes the attention probability distributions of the m+1 attention heads; $q_i$, $k_i$, $v_i$ are the query, key and value vectors respectively; $d_k$ is the dimension of the key vector $k_i$.

$$U^{*}=\left(\mathrm{head}_1\oplus\mathrm{head}_2\oplus\cdots\oplus\mathrm{head}_{m+1}\right)\cdot W^{O}$$

where $U^{*}$ is the second feature information, i.e. the joint representation of the dynamic monitoring data and the static monitoring data; $\mathrm{head}_i$ is one of the m+1 self-attention heads; $\oplus$ is the concatenation operation, which splices together the attention-layer outputs of the multiple representation subspaces; $W^{O}$ is a linear projection matrix; $h_i$ denotes a feature sub-vector; $H_{static}$ is the static feature vector.
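A single-head sketch of the scaled dot-product fusion described above (NumPy; random matrices stand in for the trained projections, and the final W^O projection is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d = 4            # feature dimension of each sub-vector
m_plus_1 = 3     # m dynamic hidden vectors plus one static vector

# Hidden-state matrix H_wave = (h_1, ..., h_m, H_static), one row per sub-vector.
H_wave = rng.normal(size=(m_plus_1, d))

# Projection matrices W_q, W_k, W_v (random stand-ins; in the model they
# are trained and not shared across sub-layers).
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = H_wave @ W_q.T, H_wave @ W_k.T, H_wave @ W_v.T

# Scaled dot-product attention: alpha_ij = softmax(q_i . k_j / sqrt(d_k)).
alpha = softmax(Q @ K.T / np.sqrt(d))

# Each head output is the attention-weighted sum of the value vectors.
heads = alpha @ V   # joint dynamic/static representation before W^O
```

Because the static vector sits in `H_wave` alongside the dynamic hidden states, every attention row mixes both modalities, which is what yields the joint representation.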
Through the above steps, by obtaining the second sub-model through attention-mechanism training, the weight assignment result of each vector's attention head can be calculated; this weight assignment result constitutes the joint representation of the dynamic monitoring data and the static monitoring data. The second sub-model can further measure the associations between the introduced static monitoring data and all the dynamic vectors of the dynamic monitoring data, and output different second feature information, and hence different weight assignments, according to individual differences, thereby realizing the joint representation of the static and dynamic monitoring data in the multi-modal data, solving the problem of weak association of the characterization data, achieving strong association of the characterization data, and improving the accuracy and generalization capability of the model.
On the basis of the foregoing embodiments, in some of these embodiments, the generating an epilepsy prediction result according to the second feature information includes:
step S2031, determining a plurality of pieces of hidden representation information according to the second feature information.
In this embodiment, the goal of seizure detection is to use the user's multi-modal data to predict whether the patient will have a seizure within a certain future time window. This problem can be treated as a binary classification task, so the output tensor $U^{*}$ of the multi-head attention mechanism needs to be converted into a two-dimensional vector. In addition, it should be noted that the baseline static monitoring data directly affects the expression of the dynamic monitoring data during epilepsy diagnosis; for instance, physiological parameters such as heart rate and respiratory rate of infants are significantly higher than those of adults. Therefore, an attention mechanism with personalized characterization parameters needs to be introduced at the output of the second sub-model.

Here the second feature information $U^{*}$ is the joint representation obtained by fusing the dynamic monitoring data and the static monitoring data through the multi-head attention mechanism, and the pieces of hidden representation information are the representations $u_1^{*}, u_2^{*}, \dots, u_{m+1}^{*}$, within $U^{*}$, of the feature sub-vectors corresponding to the dynamic vectors and the static vector.
Step S2032 of calculating attention weights corresponding to the plurality of pieces of hidden representation information, respectively.
In this embodiment, different personalized characterization parameters are introduced, a plurality of pieces of the hidden representation information are respectively projected to a query space and a key space, a query vector and a key vector corresponding to the hidden representation information are obtained, and a corresponding attention weight is obtained through calculation according to the query vector and the key vector.
Specifically, the attention weights determined on the basis of the second feature information $U^{*}$ are calculated as follows:

$$\tilde{q} = \widetilde{W}_q\cdot u^{*}_{static},\qquad \tilde{k}_i = \widetilde{W}_k\cdot u^{*}_i$$

$$\beta_i=\mathrm{softmax}\!\left(\tilde{q}\cdot \tilde{k}_i\right)$$

where $u_1^{*},u_2^{*},\dots,u_{m+1}^{*}$ are the pieces of hidden representation information of the second feature information $U^{*}$; the Query is given by the hidden representation of the static baseline and the Keys by $H^{*}$, so that the correlation of the dynamic monitoring data under the static baseline is calculated by dot product; $\widetilde{W}_q$ and $\widetilde{W}_k$ are projection matrices, i.e. the fully trained personalized characterization parameters.
Step S2033 is performed to generate a prediction probability vector based on the plurality of pieces of hidden representation information and the corresponding attention weights.
Step S2034, determining an epilepsy prediction result according to the prediction probability vector and the prediction target.
Specifically, the pieces of hidden representation information and the corresponding attention weights are weighted and summed to obtain the prediction probability vector s. The prediction target may be a classification task or a regression task over the multi-modal data, such as classification of a sign condition, grading of a health state, or prediction of a fluid concentration. The epilepsy prediction result $\hat{y}$ is calculated as follows:

$$s=\sum_{i}\beta_i\, u_i^{*}$$

$$\hat{y}=\sigma\!\left(W_{fin}\cdot s+b_{fin}\right)$$

where $W_{fin}$ and $b_{fin}$ are the weight matrix and bias vector respectively, $\sigma$ is the sigmoid activation function, and $\hat{y}$ is the epilepsy prediction result, i.e. $\hat{y}\in\{0,1\}$.
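The output step can be sketched as follows; the attention weights and final-layer parameters are illustrative stand-ins for trained values:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 4                              # dimension of each hidden representation
U_star = rng.normal(size=(3, d))   # hidden representations u_1*, ..., u_{m+1}*
beta = np.array([0.2, 0.5, 0.3])   # attention weights (illustrative; sum to 1)

# Prediction probability vector: weighted sum of the hidden representations.
s = beta @ U_star

# Final binary prediction: sigmoid over a linear layer
# (W_fin, b_fin are random stand-ins for trained parameters).
W_fin = rng.normal(size=(d,))
b_fin = 0.0
p = sigmoid(W_fin @ s + b_fin)     # seizure probability in (0, 1)
y_hat = int(p >= 0.5)              # epilepsy prediction result, in {0, 1}
```

The sigmoid yields a probability; thresholding it at 0.5 gives the binary seizure/no-seizure outcome.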
Through the above steps, different data prediction results can be generated from the second feature information according to the prediction target, realizing data prediction for different targets, solving the problem that multi-modal data is difficult to put to practical use, and completing classification or regression tasks over the multi-modal data.
In the related art, epilepsy detection algorithms do not consider individual differences, and a unified model established for all individuals cannot adapt to individual needs.
On the basis of the foregoing embodiments, in some of the embodiments, the user terminal 1022 is further configured to: feedback information of the user based on the epilepsy early warning is acquired and uploaded to the cloud server 104, so that the cloud server 104 updates an epilepsy detection model of the user based on the feedback information.
As shown in fig. 3, specifically, in this embodiment, after the cloud server returns the epilepsy prediction result, the feedback information based on the epilepsy early warning may be whether an epilepsy event actually occurred, and the feedback information may be sent to the remote cloud server 104 by a user terminal 1022 such as a smart phone. In the online stage of the epilepsy detection system, a personalized model (e.g., user model M1, user model M2, user model M3, etc.) can be generated by adding new user feedback information on the basis of the trained model, using the fine-tuning technique commonly used in deep learning. Fine-tuning here means that the fixed parameters of the epilepsy detection model (i.e. the first sub-model and the second sub-model) are used for data characterization: these network parameter matrices are no longer updated by back-propagation, but are used as known parameters in the forward computation.
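A toy sketch of this fine-tuning idea, under the assumption that the trained sub-models act only as a frozen feature extractor while a small personalized head is updated on the user's feedback; all data and weights below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Frozen part: stands in for the trained sub-models, used only as a fixed
# feature extractor (no back-propagation through it).
W_frozen = rng.normal(size=(4, 6))
def extract_features(x):
    return np.tanh(W_frozen @ x)

# Trainable personalized head, fine-tuned on the user's feedback y in {0, 1}.
w, b = np.zeros(4), 0.0
X_feedback = rng.normal(size=(8, 6))                   # synthetic user samples
y_feedback = rng.integers(0, 2, size=8).astype(float)  # synthetic feedback

lr = 0.3
for _ in range(300):                 # plain SGD on the logistic loss
    for x, y in zip(X_feedback, y_feedback):
        z = extract_features(x)
        p = sigmoid(w @ z + b)
        w -= lr * (p - y) * z        # gradient of the cross-entropy loss
        b -= lr * (p - y)

# Personalized predictions after fine-tuning.
probs = sigmoid(np.tanh(X_feedback @ W_frozen.T) @ w + b)
```

Only the small head's parameters change per user, which is what makes per-user models cheap to maintain on the server.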
Specifically, the cloud server 104 is further configured to perform the following steps:
and step S204, determining a loss value based on the epilepsy prediction result of the user and the corresponding feedback information.
For example, in this embodiment, if the feedback information is denoted $y_i$, the epilepsy prediction result is denoted $\hat{y}_i$, and the number of epilepsy prediction results is N, the loss value can be calculated using the cross-entropy (Cross Entropy) loss:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log\hat{y}_i+(1-y_i)\log\left(1-\hat{y}_i\right)\right]$$
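The cross-entropy loss can be computed directly; the feedback values below are hypothetical, and the small eps clip guards against log(0):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over N predictions."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Hypothetical user feedback (did a seizure occur?) vs. model probabilities.
loss = cross_entropy([1, 0, 1], [0.9, 0.2, 0.8])
```

A confident correct prediction drives the loss toward zero, while a confident wrong one is penalized heavily, which is why this loss suits the binary seizure task.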
of course, in other embodiments, the loss value may also be calculated by using other loss functions such as mean-square error (MSE), which is not limited herein.
Step S205, updating the model parameters in the epilepsy detection model until the loss value is smaller than the expected threshold value.
In this embodiment, when the cloud server 104 receives the user's feedback information $y_i$, the network parameters of the epilepsy detection model (e.g., the parameter matrices $W_q$, $W_k$ and $W_v$) can be recalculated from the feedback information $y_i$ and the model's prediction result $\hat{y}_i$ using the back-propagation algorithm, and the user's epilepsy detection model is updated until the loss value falls below the expected threshold, so that personal models can be trained for users of different ages and sexes.
Through the above steps, the cross-entropy loss over the model's prediction results is used to describe the deviation of model training in the classification task, and the prediction error is then back-propagated into the second sub-model and the first sub-model to correct them, yielding a personalized personal model for each user. The cloud server 104 keeps updating the epilepsy detection model based on the feedback information continuously provided by the user terminal 1022, so that the accuracy of epilepsy detection improves.
As shown in fig. 4, in some embodiments, the cloud server 104 is specifically configured to perform the following steps. First, the dynamic monitoring data $X_{dynamic}$ of the user is obtained, where $X_{dynamic}$ comprises at least two dynamic vectors $x_{mn}$; the dynamic monitoring data is input into the fully trained first sub-model A to obtain the dynamic feature matrix $H_{dynamic}$. Then the static monitoring data $X_{static}$ is obtained and digitally encoded to obtain the corresponding static feature vector $H_{static}$. Next, the multi-modal data hidden-state matrix $H_{wave}=(h_1,h_2,\dots,h_j,\dots,h_m,H_{static})$ formed by the dynamic feature matrix and the static feature vector is input into the second sub-model B, where the dynamic and static monitoring data are fused through the multi-head attention mechanism to obtain the second feature information $U^{*}$, i.e. the joint representation of the dynamic monitoring data and the static monitoring data. Finally, according to the second feature information $U^{*}$, the pieces of hidden representation information $u_1^{*},u_2^{*},\dots,u_{m+1}^{*}$ are determined, the corresponding attention weights $\beta_i$ are calculated for them, the prediction probability vector s is generated from the hidden representation information and the corresponding attention weights, and the epilepsy prediction result $\hat{y}$ is determined from the prediction probability vector s and the prediction target.
The dynamic feature data is represented as follows:

$$X_{dynamic}=\begin{bmatrix} x_{11} & \cdots & x_{1n}\\ \vdots & \ddots & \vdots\\ x_{m1} & \cdots & x_{mn}\end{bmatrix}$$

The static feature data is denoted $X_{static}=[x_{b1},x_{b2},\dots,x_{bn}]$, and the set of static baseline data and dynamic monitoring data, i.e. the multi-modal data, is denoted $H=\{H_{dynamic},H_{static}\}$, where m denotes the number of signals and n denotes the length of the sequence sampled in time slice t.
In order to characterize the interrelations between different signals, a data relationship graph over all the dynamic vectors of the dynamic monitoring data may be generated by the adjacency matrix. The adjacency matrix (Adjacency matrix) is an $M\times M$ matrix whose elements are weights; the adjacency relation is computed as the edge $e_{ij}$ connecting vertices $v_i$ and $v_j$. If an adjacency relation $e_{ij}$ exists between the dynamic vectors of two different receiving channels, i.e. the nodes are adjacent, then $a_{ij}\neq 0$; otherwise $a_{ij}=0$. The adjacency matrix is represented as follows:

$$A=(a_{ij})_{M\times M},\qquad a_{ij}=\begin{cases}\varepsilon, & (v_i,v_j)\in E\\ 0, & \text{otherwise}\end{cases}$$

where $\varepsilon$ is any real number, $A$ is a real matrix of order M, and M denotes the number of received channel data types.

The adjacency matrix represents the associations among the data of the M different receiving channels, and the data relationship graph is generated from the adjacency matrix. The data relationship graph consists of vertices (Vertex), representing receiving channel data, and edges (Edges), representing association relations, and is defined as follows:

G = (V, E)

where $V=\{v_i\mid i=1,2,\dots,m\}$ denotes the set of all vertices, $v_i=[x_{i1},x_{i2},\dots,x_{in}]$ denotes the i-th signal entity, $E=\{e_{ij}\mid (v_i,v_j)\in V\}$ denotes the set of edges connecting the vertices, and $e_{ij}$ denotes the association between the i-th and j-th signal entities.
First, the first sub-model is mainly used to process the multi-channel time-series waveform data; its purpose is to fuse the associations among channels, extract the features of the waveform data and output the dynamic feature matrix, which consists of the hidden state vectors of the Graph LSTM units, i.e. $H_{dynamic}=(h_1,h_2,\dots,h_m)$.
Fig. 5 is a schematic diagram of the network structure of the first sub-model. The graph recurrent neural network structure comprises a forgetting gate, an input gate, an output gate and an intermediate state. The forgetting gate controls, with a certain probability, whether the hidden state of the previous layer is forgotten, and passes it, combined with the current input data, to a sigmoid function. The input gate receives the previous hidden-state output and the current input information and controls, with a certain probability, whether the intermediate state is passed into the hidden-state output of the current time period; the value of the output gate and the value of the intermediate state at the current time are multiplied element-wise to produce the final output value of the Graph LSTM unit. The specific calculation formulas are as follows:

$$i_i=\sigma\!\left(W^{(i)}x_i+U^{(i)}\times_T \tilde{h}_i+b^{(i)}\right)$$

$$f_{ij}=\sigma\!\left(W^{(f)}x_i+U^{(f)}\times_T h_j+b^{(f)}\right)$$

$$o_i=\sigma\!\left(W^{(o)}x_i+U^{(o)}\times_T \tilde{h}_i+b^{(o)}\right)$$

$$u_i=\tanh\!\left(W^{(u)}x_i+U^{(u)}\times_T \tilde{h}_i+b^{(u)}\right)$$

$$c_i=i_i\odot u_i+\sum_{j\in N(i)}f_{ij}\odot c_j$$

$$h_i=o_i\odot\tanh(c_i)$$

The Graph LSTM is used for the characterization of dynamic information; it introduces a corresponding forget gate (Forget Gate) for each adjacent node, and the gate value of each forget gate depends only on the hidden-state information of its corresponding adjacent node. The values $i_i$ and $o_i$ of the input gate (Input Gate) and output gate (Output Gate) depend on the weighted sum of the hidden-state outputs of all adjacent nodes j (via the forget-gate values $f_{ij}$) and on the input vector. The unit of node i has j adjacent nodes, whose aggregated hidden state is denoted $\tilde{h}_i=\sum_{j\in N(i)}h_j$. The intermediate calculation results $c_i$ and $u_i$ of the storage unit in the Graph LSTM fuse the semantic information of the input gate and all the forget gates, and the hidden state vector $h_i$ of the current channel is finally generated under the action of the output gate.

Here $x_i$ is the dynamic vector of the i-th kind of data in $X_{dynamic}$; $h_i$ is the currently output hidden state vector; $W^{(i)},W^{(o)},W^{(u)},W^{(f)}$ and $U^{(i)},U^{(o)},U^{(u)},U^{(f)}$ are the two groups of parameter matrices of the input gate, output gate, intermediate state and forgetting gate respectively; $b^{(i)},b^{(o)},b^{(u)},b^{(f)}$ are bias vectors; $\sigma$, $\tanh$ and $\odot$ are the sigmoid activation function, the hyperbolic tangent function and the element-wise product respectively; $\times_T$ denotes the tensor dot product operation, defined as $T\times_T A=\sum_m\left(T_{:,:,m}\cdot A_{:,m}\right)$. The weight tensor U is a three-dimensional tensor of size l × l × m, where l is the dimension of the current first feature information and m is the number of received channel data types, i.e. the number of dynamic vectors of the dynamic monitoring data equals the number of adjacent nodes.
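A per-node sketch of the gate computations just described, simplifying the l × l × m type-weight tensor to ordinary matrices; every parameter is a random stand-in, so only the shapes and gate structure are meaningful:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

l, n_in = 4, 5  # hidden size l, input length n (samples per time slice)

# One parameter matrix per gate in two groups, W (input) and U (hidden);
# biases kept at zero for brevity.
W = {g: rng.normal(scale=0.1, size=(l, n_in)) for g in "ifou"}
Uw = {g: rng.normal(scale=0.1, size=(l, l)) for g in "ifou"}
b = {g: np.zeros(l) for g in "ifou"}

def graph_lstm_step(x_i, neighbors_h, neighbors_c):
    """One Graph LSTM unit for node i: a forget gate per adjacent node,
    input/output gates over the summed neighbor hidden states."""
    h_tilde = np.sum(neighbors_h, axis=0)  # aggregated neighbor hidden state
    i = sigmoid(W["i"] @ x_i + Uw["i"] @ h_tilde + b["i"])
    o = sigmoid(W["o"] @ x_i + Uw["o"] @ h_tilde + b["o"])
    u = np.tanh(W["u"] @ x_i + Uw["u"] @ h_tilde + b["u"])
    # One forget gate per neighbor j, depending only on that neighbor's h_j.
    f = [sigmoid(W["f"] @ x_i + Uw["f"] @ h_j + b["f"]) for h_j in neighbors_h]
    c = i * u + sum(f_j * c_j for f_j, c_j in zip(f, neighbors_c))
    h = o * np.tanh(c)
    return h, c

x_i = rng.normal(size=n_in)            # dynamic vector of channel i
neighbors_h = rng.normal(size=(2, l))  # hidden states of 2 adjacent nodes
neighbors_c = rng.normal(size=(2, l))  # cell states of the same nodes
h_i, c_i = graph_lstm_step(x_i, neighbors_h, neighbors_c)
```

The per-neighbor forget gates are what distinguish this from a chain LSTM: each adjacent channel contributes its own gated cell state to node i.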
Then, the static monitoring data $X_{static}$ is linearly transformed to obtain the corresponding static feature vector $H_{static}$, and the dynamic feature matrix and the static feature vector form the multi-modal data hidden-state matrix $H_{wave}=(h_1,h_2,\dots,h_j,\dots,h_m,H_{static})$. The multi-modal data hidden-state matrix is input into the second sub-model, and the dynamic and static monitoring data are fused through the multi-head attention mechanism to obtain the second feature information, i.e. the joint representation $U^{*}$ of the dynamic monitoring data and the static monitoring data:

$$q_i,\;k_i,\;v_i=W_q\cdot h_i,\;W_k\cdot h_i,\;W_v\cdot h_i$$

where $h_i$ is a feature sub-vector; $q_i$, $k_i$, $v_i$ are the query, key and value vectors respectively, obtained by projecting the input vector into the query, key and value spaces; $W_q$, $W_k$ and $W_v$ are projection matrices.
$$\alpha_{ij}=\mathrm{softmax}\!\left(\frac{q_i\cdot k_j}{\sqrt{d_k}}\right)$$

$$\mathrm{head}_i=\sum_j\alpha_{ij}\,v_j$$

where $\alpha$ denotes the attention probability distributions of the m+1 attention heads; $q_i$, $k_i$, $v_i$ are the query, key and value vectors respectively; $d_k$ is the dimension of the key vector $k_i$.

$$U^{*}=\left(\mathrm{head}_1\oplus\mathrm{head}_2\oplus\cdots\oplus\mathrm{head}_{m+1}\right)\cdot W^{O}$$
After the second feature information $U^{*}$ is obtained, the epilepsy prediction result is generated from it, and the user's feedback information $y_i$ based on the epilepsy early warning is acquired and uploaded to the cloud server 104, so that the cloud server 104 can update the user's epilepsy detection model based on the feedback information $y_i$; the specific implementation is the same as in the above embodiments and is not repeated here.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An epilepsy detection system, comprising: a data acquisition system and a cloud server connected to the data acquisition system, wherein,
the data acquisition system is used for acquiring multi-modal data of a user, uploading the multi-modal data to the cloud server, and carrying out epilepsy early warning according to an epilepsy prediction result returned by the cloud server; the multimodal data comprises dynamic monitoring data and static monitoring data;
the cloud server is used for inputting the multi-modal data into an epilepsy detection model to obtain an epilepsy prediction result; the epilepsy detection model is a neural network model obtained by training based on multi-modal data samples of a user and corresponding epileptic seizure events.
2. The epilepsy detection system according to claim 1, wherein the data acquisition system comprises a wearable device and a user terminal, wherein,
the wearable device is used for acquiring dynamic monitoring data of a user in real time and uploading the dynamic monitoring data to the cloud server;
the user terminal is used for acquiring static monitoring data of a user, uploading the static monitoring data to the cloud server, and carrying out epilepsy early warning according to an epilepsy prediction result returned by the cloud server.
3. The epilepsy detection system according to claim 1, wherein the cloud server is specifically configured to:
acquiring dynamic monitoring data of a user, and determining first characteristic information according to the dynamic monitoring data and a first sub-model; the first sub-model is a model obtained by training with a recurrent neural network;
acquiring the static monitoring data, and determining second characteristic information according to the static monitoring data, the first characteristic information and a second submodel; the second sub-model is a model obtained by utilizing attention mechanism training;
and generating an epilepsy prediction result according to the second characteristic information.
4. The epilepsy detection system according to claim 3, wherein the acquiring dynamic monitoring data of the user, and determining the first characteristic information according to the dynamic monitoring data and the first sub-model comprises:
acquiring multi-dimensional dynamic monitoring data; wherein the dynamic monitoring data comprises at least two dynamic vectors;
inputting the dynamic monitoring data into a trained graph recurrent neural network to obtain a dynamic feature matrix; wherein the dimension of the dynamic feature matrix is the same as the dimension of the dynamic monitoring data.
5. The epilepsy detection system according to claim 4, wherein the inputting the dynamic monitoring data into a trained graph recurrent neural network to obtain a dynamic feature matrix comprises:
and acquiring a data relation graph among all corresponding dynamic vectors of the dynamic monitoring data, and inputting the data relation graph and the dynamic monitoring data into the graph recurrent neural network to output the dynamic feature matrix.
6. The epilepsy detection system according to claim 3, wherein the acquiring the static monitoring data, and determining second characteristic information according to the static monitoring data, the first characteristic information and the second submodel comprises:
acquiring static monitoring data;
determining a corresponding static feature vector according to the static monitoring data;
inputting the dynamic feature matrix and the static feature vector into a multi-attention mechanism model to output second feature information.
7. The epilepsy detection system of claim 6, wherein the inputting the dynamic feature matrix and the static feature vector to a multi-attention mechanism model to output second feature information comprises:
determining a plurality of feature sub-vectors according to the dynamic feature matrix and the static feature vector;
respectively projecting the plurality of characteristic sub-vectors to a query space, a key space and a value space to obtain the query vector, the key vector and the value vector corresponding to the characteristic sub-vectors;
and calculating to obtain second characteristic information according to the query vectors, the key vectors and the value vectors of the plurality of characteristic sub-vectors.
8. The epilepsy detection system according to claim 3, wherein the generating an epilepsy prediction result according to the second characteristic information comprises:
determining a plurality of pieces of hidden representation information according to the second characteristic information;
calculating corresponding attention weights for the plurality of pieces of hidden representation information respectively;
generating a prediction probability vector according to the plurality of hidden representation information and the corresponding attention weight;
and determining an epilepsy prediction result according to the prediction probability vector and the prediction target.
9. The epilepsy detection system according to claim 2, wherein the user terminal is further configured to:
feedback information of the user based on the epilepsy early warning is acquired and uploaded to the cloud server, so that the cloud server updates an epilepsy detection model of the user based on the feedback information.
10. The epilepsy detection system according to claim 9, wherein the cloud server is further configured to:
determining a loss value based on the epilepsy prediction result of the user and the corresponding feedback information;
updating model parameters in the epilepsy detection model until the loss value is less than a desired threshold.
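Claims 9 and 10 describe a feedback loop in which the user's response to an epilepsy early warning serves as a training signal for the model. The following is a minimal sketch of such a loss-driven parameter update, assuming a simple logistic model, a cross-entropy loss, and gradient descent; the patent does not specify the loss function or optimizer, and all names here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_model(w, X, y_feedback, lr=0.5, threshold=0.05, max_iter=2000):
    """Sketch of the claim-10 update loop (names and model are illustrative).

    The user's feedback y_feedback is treated as the ground-truth label for
    the earlier epilepsy prediction; a cross-entropy loss is computed between
    prediction and feedback, and the model parameters are updated by gradient
    descent until the loss value drops below the desired threshold (or the
    iteration budget runs out).
    """
    loss = np.inf
    for _ in range(max_iter):
        p = sigmoid(X @ w)                          # predicted seizure probability
        loss = -np.mean(y_feedback * np.log(p + 1e-9)
                        + (1.0 - y_feedback) * np.log(1.0 - p + 1e-9))
        if loss < threshold:                        # claim 10: stop below the threshold
            break
        grad = X.T @ (p - y_feedback) / len(y_feedback)
        w = w - lr * grad                           # gradient-descent parameter update
    return w, loss

# toy multi-modal features (last column is a bias term) and user feedback labels
X = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])
w, loss = update_model(np.zeros(3), X, y)
```

In the described system, such an update would run on the cloud server after each batch of feedback uploaded from the user terminal, personalizing the epilepsy detection model to the individual user.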
CN202210054441.4A 2022-01-18 2022-01-18 Epilepsy detection system Pending CN114550907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210054441.4A CN114550907A (en) 2022-01-18 2022-01-18 Epilepsy detection system


Publications (1)

Publication Number Publication Date
CN114550907A true CN114550907A (en) 2022-05-27

Family

ID=81670995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210054441.4A Pending CN114550907A (en) 2022-01-18 2022-01-18 Epilepsy detection system

Country Status (1)

Country Link
CN (1) CN114550907A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024011473A1 (en) * 2022-07-14 2024-01-18 京东方科技集团股份有限公司 Subject state determination method, training method for deep learning model, and electronic device
CN117617995A (en) * 2024-01-26 2024-03-01 小舟科技有限公司 Method for collecting and identifying brain-computer interface key brain region code and computer equipment
CN117617995B (en) * 2024-01-26 2024-04-05 小舟科技有限公司 Method for collecting and identifying brain-computer interface key brain region code and computer equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination