CN113349796B - Sleep monitoring method, equipment and storage medium based on multi-source signals - Google Patents


Info

Publication number: CN113349796B (application CN202110666187.9A, authority CN, China)
Other versions: CN113349796A (Chinese, zh)
Prior art keywords: signal, sleep, neural network, source, convolutional neural
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 王涵, 王贝贝, 吴锋, 李晔, 马国睿, 刘状, 李鹏生, 王凤领, 周若愚, 周肖树, 王一帆
Original and current assignee: Zhuhai Institute Of Advanced Technology Chinese Academy Of Sciences Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Filing: application CN202110666187.9A filed by Zhuhai Institute Of Advanced Technology Chinese Academy Of Sciences Co ltd; published as CN113349796A; granted as CN113349796B.

Classifications

All classifications fall under A61B5/00 (Measuring for diagnostic purposes; Identification of persons) in A61B (Diagnosis; Surgery; Identification), A61 (Medical or Veterinary Science; Hygiene), section A (Human Necessities). The leaf codes are:

    • A61B5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346 Analysis of electrocardiograms
    • A61B5/347 Detecting the frequency distribution of signals
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A61B5/389 Electromyography [EMG]
    • A61B5/397 Analysis of electromyograms
    • A61B5/398 Electrooculography [EOG], e.g. detecting nystagmus; Electroretinography [ERG]
    • A61B5/4806 Sleep evaluation
    • A61B5/6802 Sensor mounted on worn items
    • A61B5/7203 Signal processing for noise prevention, reduction or removal
    • A61B5/725 Details of waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B5/7253 Details of waveform analysis characterised by using transforms
    • A61B5/7267 Classification of physiological signals or data involving training the classification device


Abstract

The invention provides a sleep monitoring method, device and storage medium based on multi-source signals. The technical scheme comprises the following steps: collecting a plurality of different types of multi-source physiological signals during human sleep through wearable devices; constructing a trapezoidal multi-stream convolutional neural network deep learning model, and inputting the multi-source physiological signals into it for training to obtain a polysomnography model; and analyzing the multi-source physiological signals through the plurality of classifiers of the polysomnography model to obtain the sleep symptoms of the human body. The beneficial effects of the invention are as follows: sleep disease diagnosis with multiple signal inputs, multiple streams, small sample size and high accuracy is realized with low storage-resource requirements, so the scheme can be used on a terminal device. It also addresses the shortcoming of machine learning models that require large amounts of sample data to complete training.

Description

Sleep monitoring method, equipment and storage medium based on multi-source signals
Technical Field
The invention relates to the field of computers, and in particular to a sleep monitoring method, device and storage medium based on multi-source signals.
Background
Sleep is an important physiological activity of the human body; sleep diseases cause great distress in daily life, and accurately analyzing sleep diseases is a problem that urgently needs to be solved.
In the prior art, physical characteristics during sleep are acquired through a wearable or single device, and possible sleep diseases are analyzed from the acquired data. However, the data analysis in these approaches is simplistic and cannot comprehensively evaluate the various kinds of data produced during sleep.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art by providing a sleep monitoring method, device and storage medium based on multi-source signals that overcome the above defects.
The technical scheme of the invention comprises a sleep monitoring method based on multi-source signals, characterized by comprising: collecting a plurality of different types of multi-source physiological signals of a human body during sleep through a wearable device; constructing a trapezoidal multi-stream convolutional neural network deep learning model, and inputting the multi-source physiological signals into it for training to obtain a polysomnography model; and analyzing the multi-source physiological signals through the plurality of classifiers of the polysomnography model to obtain the sleep symptoms of the human body.
In the above method, the multi-source physiological signals at least comprise ECG, EEG, EOG, EMG, a nasal airflow signal and a pulse signal.
In the above method, constructing the trapezoidal multi-stream convolutional neural network deep learning model and training it on the multi-source physiological signals to obtain the polysomnography model comprises, in order: preprocessing the multi-source physiological signals, and filtering and decomposing them with different Butterworth filters to obtain delta, theta, alpha, beta and gamma signal waves; performing wavelet separation on the complex waveforms after filtering and decomposition to obtain wavelets; performing a discrete wavelet transform on the wavelets and signal waves; cutting the transformed signal waves at equal pixel intervals to obtain n corresponding waveform segments; passing each of the n segments through x convolution layers and pooling layers to obtain corresponding results; applying one-dimensional maximum pooling (1-max-pooling) to the separated wavelet results, concatenating the convolution-layer outputs, and applying 1-max-pooling to the concatenated results through a fully connected layer; applying a softmax activation function; and obtaining the classification result.
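The Butterworth filtering-and-decomposition step above can be sketched with standard tools. A minimal illustration, assuming a 100 Hz sampling rate and 4th-order band-pass filters (the patent specifies neither):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Nominal EEG rhythm bands from the description (Hz).
BANDS = {"delta": (0.1, 4.0), "theta": (4.0, 7.5), "alpha": (7.5, 12.0),
         "beta": (12.0, 15.5), "gamma": (15.5, 45.0)}

def decompose_eeg(signal, fs=100.0, order=4):
    """Split a preprocessed EEG trace into the five rhythm bands using
    one Butterworth band-pass filter per band (zero-phase filtering)."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, signal)
    return out
```

A 10 Hz tone, for example, survives the alpha filter (7.5–12.0 Hz) but is strongly attenuated by the gamma filter, which is the behaviour the decomposition relies on.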
In the above method, the discrete wavelet transform comprises: by

W(j, k) = Σ_t X(t) · ψ_{j,k}(t)

representing the discrete wavelet transform of a value sequence X(t) in the space L²(R), with the wavelet expressed as

ψ_{j,k}(t) = 2^{j/2} · ψ(2^j · t − k).

The wavelet acts on the preprocessed signals, and frequency and time localization filtering is applied to it through two complementary filters, a high-frequency filter and a low-frequency filter; the frequency corresponds to the scale parameter j in the formula, the time to the displacement parameter k, and t represents time.
In the above method, the trapezoidal multi-stream convolutional neural network deep learning model comprises:
when processing a signal wave, the wave is passed through convolution kernels of size x and through j layers of pooling and convolution, after which equal-pixel cutting is performed; the high-frequency parameters x and j are computed by the following flow:
S510, mapping the sequence {u(i)} to m-dimensional vectors x(i);
S520, for each value of i, computing the distance between the vector x(i) and every remaining vector x(j), where i = 1, 2, …, N − m + 1, j ≠ i, and the distance d_ij is d_ij = max_k |u(i+k) − u(j+k)|, k = 0, 1, …, m − 1;
S530, for each value of i, counting the cases where d_ij < r (r being the similarity tolerance, r > 0), and taking the ratio of the number of such cases to the total number of vectors N − m + 1:

C_i^m(r) = (number of j with d_ij < r) / (N − m + 1);

based on the similarity tolerance r, C_i^m(r) represents the probability that m-dimensional patterns in the sequence are similar to one another.
S540, taking ln C_i^m(r), then averaging over all i and denoting the result

Φ^m(r) = (1 / (N − m + 1)) · Σ_i ln C_i^m(r),

which represents the average proximity between vectors.
S550, repeating steps S510–S540 for dimension m + 1 to determine Φ^{m+1}(r),
wherein the approximate entropy is defined as:

ApEn(m, r, N) = Φ^m(r) − Φ^{m+1}(r).

S560, setting a threshold τ and comparing the test-signal parameter with the normal-signal parameter: if (test-signal parameter − normal-signal parameter) ≥ τ, j is increased relative to x; otherwise x is increased relative to j, with 0 ≤ x, j ≤ n.
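The S510–S550 flow is the approximate-entropy computation. A minimal sketch, with one deliberate deviation noted in the comments: self-matches are included when counting similar patterns (the standard ApEn convention), since the text's j ≠ i restriction can make C_i zero and the logarithm undefined:

```python
import numpy as np

def approx_entropy(u, m=2, r=0.2):
    """ApEn(m, r, N) = Phi^m(r) - Phi^(m+1)(r), following steps S510-S550."""
    u = np.asarray(u, dtype=float)
    N = len(u)

    def phi(m):
        # S510: map {u(i)} to m-dimensional vectors x(i)
        x = np.array([u[i:i + m] for i in range(N - m + 1)])
        # S520: Chebyshev distance d_ij = max_k |u(i+k) - u(j+k)|
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # S530: C_i^m(r) = #(d_ij < r) / (N - m + 1)
        # (self-matches included so every C_i > 0 and ln stays finite)
        C = np.mean(d < r, axis=1)
        # S540: Phi^m(r) = average of ln C_i^m(r)
        return np.mean(np.log(C))

    # S550: repeat for m + 1 and take the difference
    return phi(m) - phi(m + 1)
```

A perfectly regular sequence yields an ApEn near zero, while a random one yields a clearly larger value, which is what makes the quantity usable as the "high-frequency parameter" in S560.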
In the above method, the normal-signal parameter is computed from a normal sample, and the test-signal parameter from a diseased sample.
The technical scheme of the invention also comprises a sleep monitoring device based on multi-source signals, characterized in that the device comprises: an acquisition apparatus, comprising a plurality of different wearable devices, for collecting a plurality of different types of multi-source physiological signals during human sleep; a learning apparatus for constructing the trapezoidal multi-stream convolutional neural network deep learning model, inputting the multi-source physiological signals into it for training to obtain a polysomnography model, and analyzing the multi-source physiological signals through the plurality of classifiers of the polysomnography model to obtain the sleep symptoms of the human body; and an alarm apparatus which, through an interactive interface, prompts when the sleep symptoms indicate a sleep disease.
The technical scheme of the invention further comprises a computer-readable storage medium storing a computer program, characterized in that, when executed by a processor, the computer program implements the method steps of any of the above methods.
The beneficial effects of the invention are as follows: sleep disease diagnosis with multiple signal inputs, multiple streams, small sample size and high accuracy is realized with low storage-resource requirements, so the scheme can be used on a terminal device. It also addresses the shortcoming of machine learning models that require large amounts of sample data to complete training.
Drawings
The invention is further described below with reference to the drawings and embodiments.
Fig. 1 shows a general flow chart according to an embodiment of the invention.
Fig. 2 shows a system framework of multi-source signals according to an embodiment of the invention.
Fig. 3 shows a schematic representation of a discrete wavelet transform according to an embodiment of the present invention.
FIG. 4 shows a deep learning model structure of a trapezoidal multiflow convolutional neural network according to an embodiment of the present invention.
Fig. 5 shows a specific example according to an embodiment of the present invention.
Fig. 6 shows a second example according to an embodiment of the invention.
Fig. 7 shows a device diagram according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The drawings supplement the written description so that each technical feature and the overall technical scheme of the invention can be understood intuitively, but they do not limit the scope of the invention.
In the description of the present invention, unless explicitly defined otherwise, terms such as "arrangement" should be construed broadly; those skilled in the art can reasonably determine their specific meaning in combination with the content of the technical scheme.
Fig. 1 shows a general flow chart according to an embodiment of the invention. The flow comprises: collecting a plurality of different types of multi-source physiological signals during human sleep through wearable devices; constructing a trapezoidal multi-stream convolutional neural network deep learning model, and inputting the multi-source physiological signals into it for training to obtain a polysomnography model; and analyzing the multi-source physiological signals through the plurality of classifiers of the polysomnography model to obtain the sleep symptoms of the human body.
Fig. 2 is a flow chart of sleep monitoring on multi-source signals according to an embodiment of the present invention. The flow comprises: preprocessing the collected raw electroencephalogram signals, then filtering and decomposing them with different Butterworth filters to obtain delta, theta, alpha, beta and gamma waves; performing a discrete wavelet transform on the filtered electroencephalogram signals; feeding the transformed signals into the proposed trapezoidal multi-stream convolutional CNN framework, which learns features automatically through convolution and pooling operations; and finally obtaining a classification result indicating a normal state or a sleep disease (sleep disorder, sleep-disordered breathing, sleep apnea, hypoventilation syndrome, and the like).
Fig. 3 shows a schematic representation of the discrete wavelet transform according to an embodiment of the present invention. The discrete wavelet transform is a localized time (space)–frequency analysis that gradually refines the signal (function) at multiple scales through scaling and translation operations, finally achieving time subdivision at high frequencies and frequency subdivision at low frequencies; it thus automatically adapts to the requirements of time–frequency signal analysis and can focus on arbitrary details of the signal.
By

W(j, k) = Σ_t X(t) · ψ_{j,k}(t)

we represent the discrete wavelet transform of a value sequence X(t) in the space L²(R), with the wavelet expressed as

ψ_{j,k}(t) = 2^{j/2} · ψ(2^j · t − k).

The wavelet acts on the preprocessed signals, and frequency and time localization filtering is applied to it through two complementary filters, a high-frequency filter and a low-frequency filter; the frequency corresponds to the scale parameter j, the time to the displacement parameter k, and t represents time. In Fig. 3, LPF1–LPF4 are low-pass filters, HPF1–HPF4 are high-pass filters, and CA1–CA4 and CD1–CD4 are the filtering results.
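The Fig. 3 filter bank can be illustrated with a hand-rolled dyadic decomposition. A minimal sketch using the Haar wavelet as a stand-in (the patent does not name the mother wavelet): at each level the approximation CA is split by a low-pass and a high-pass filter pair, as in LPF/HPF of the figure.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt_bank(x, levels=4):
    """Four-level dyadic filter bank: each level splits the current
    approximation into approximation (CA) and detail (CD) coefficients.
    Haar stand-in wavelet; input length must be divisible by 2**levels."""
    x = np.asarray(x, dtype=float)
    cas, cds = [], []
    ca = x
    for _ in range(levels):
        # Low-pass branch (CA) and high-pass branch (CD), orthonormal Haar.
        ca, cd = (ca[0::2] + ca[1::2]) / SQRT2, (ca[0::2] - ca[1::2]) / SQRT2
        cas.append(ca)
        cds.append(cd)
    return cas, cds  # CA1..CA4, CD1..CD4
```

Because the Haar transform is orthonormal, the signal's energy is preserved exactly across the final approximation and all detail bands, which is a convenient sanity check for any such bank.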
Fig. 4 shows the deep learning model structure of the trapezoidal multi-stream convolutional neural network according to an embodiment of the present invention. From left to right, the flow comprises the following steps:
(1) inputting i waveform signals after preliminary preprocessing such as denoising (ECG, EEG, EOG, EMG, nasal airflow signals, pulse signals, and the like); (2) performing wavelet separation on complex signals such as the EEG (simple waveforms do not require it); here the EEG signal is separated, for example, into delta (0.1–4.0 Hz), theta (4.0–7.5 Hz), alpha (7.5–12.0 Hz), beta (12.0–15.5 Hz) and gamma (15.5–45.0 Hz) waves; (3) performing a discrete wavelet transform on the wavelets/input signal waves; (4) cutting the processed waves at equal pixel intervals into n segments; (5) passing each of the n segments through x convolution layers and pooling layers; (6) applying 1-max-pooling to the separated wavelet results; (7) concatenating, entering a fully connected layer, and applying 1-max-pooling to the result; (8) applying the softmax activation function; (9) performing binary classification.
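Steps (4)–(7) above can be sketched structurally. A minimal illustration with untrained random kernels standing in for learned convolution weights; the segment count n, layer count j and kernel size follow the text, while everything else (pooling width, kernel initialization) is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    # Valid-mode 1-D convolution, standing in for a trained conv layer.
    return np.convolve(x, kernel, mode="valid")

def max_pool(x, width=2):
    n = len(x) // width
    return x[:n * width].reshape(n, width).max(axis=1)

def ladder_stream(segment, j, kernel_size):
    """One stream: j rounds of (conv with kernel of size x, pooling),
    then 1-max pooling down to a single feature (steps (5)-(6))."""
    h = segment
    for _ in range(j):
        kernel = rng.standard_normal(kernel_size)  # untrained stand-in weights
        h = max_pool(conv1d(h, kernel))
    return h.max()  # 1-max-pooling

def ladder_multistream(wave, n=10, j=1, kernel_size=10):
    """Steps (4)-(7): cut the wave into n equal segments, run each through
    its own conv/pool stream, and concatenate the 1-max-pooled features."""
    seg_len = len(wave) // n
    segments = wave[:n * seg_len].reshape(n, seg_len)
    return np.array([ladder_stream(s, j, kernel_size) for s in segments])
```

The resulting n-dimensional feature vector is what would enter the fully connected layer and softmax in steps (7)–(9); in a real model the kernels would of course be learned, not random.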
Fig. 5 shows a specific example according to an embodiment of the present invention. In the figure, the undulating waveform is a diseased sample, and the straight line beside it is a normal sample.
Taking the alpha wave separated from the EEG as an example, set n = 10 and cut the waveform every 100 pixel points into one segment, giving 10 segments; each segment enters convolution and pooling layers with parameters such as 64, 32 and 128 and convolution kernels of size x. After j layers of convolution and pooling, concatenation, a fully connected layer, 1-max-pooling and softmax are applied to realize the binary classification task.
Fig. 6 shows a second example of the present invention, where "×" in fig. 6 is a diseased sample and "Σ" is a normal sample. Referring to fig. 5, the flow of the present embodiment includes, from left to right:
For the above-mentioned setting of x and j, taking the beta signal of the EEG as an example, the signal is separated into two parts of 100 pixel points each, and the high-frequency parameters are calculated as follows:
Mapping the sequence {u(i)} to m-dimensional vectors x(i);
for each value of i, computing the distance between the vector x(i) and every remaining vector x(j), where i = 1, 2, …, N − m + 1, j ≠ i, and the distance d_ij is d_ij = max_k |u(i+k) − u(j+k)|, k = 0, 1, …, m − 1;
for each value of i, counting the cases where d_ij < r (r being the similarity tolerance, r > 0), and taking the ratio of the number of such cases to the total number of vectors N − m + 1:

C_i^m(r) = (number of j with d_ij < r) / (N − m + 1);

based on the similarity tolerance r, C_i^m(r) represents the probability that m-dimensional patterns in the sequence are similar to one another.
Taking ln C_i^m(r), then averaging over all i and denoting the result

Φ^m(r) = (1 / (N − m + 1)) · Σ_i ln C_i^m(r),

which represents the average proximity between vectors.
Repeating steps S510–S540 for dimension m + 1 to determine Φ^{m+1}(r).
The approximate entropy is then defined as ApEn(m, r, N) = Φ^m(r) − Φ^{m+1}(r) (N is a finite value).
A threshold τ is defined and here set to 0.8. When (test-signal parameter − normal-signal parameter) ≥ τ, identified by the double-headed arrow below, j of the layer takes a slightly larger value and x a slightly smaller one; otherwise j takes a slightly smaller value and x a slightly larger one. Here, when (test-signal parameter − normal-signal parameter) ≥ τ, j = 2, i.e., two convolution layers and two pooling layers, and x = 5; otherwise j = 1 and x = 10. This completes the multi-stream CNN of ladder structure.
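The selection rule in this example reduces to a small decision function. A sketch using the threshold τ = 0.8 and the (j, x) pairs stated in the text; the parameter here is the approximate entropy computed above:

```python
def choose_layer_params(apen_test, apen_normal, tau=0.8):
    """Ladder rule from the example: if the ApEn gap reaches tau, use more
    layers with smaller kernels (j=2, x=5); otherwise fewer layers with
    larger kernels (j=1, x=10)."""
    if apen_test - apen_normal >= tau:
        return 2, 5   # j convolution/pooling layers, kernel size x
    return 1, 10
```

Streams whose test signal deviates strongly from the normal sample thus get a deeper, finer-grained path, producing the ladder-shaped (trapezoidal) structure across streams.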
Fig. 7 shows a device diagram according to an embodiment of the invention. The device comprises: an acquisition apparatus, comprising a plurality of different wearable devices, for collecting a plurality of different types of multi-source physiological signals during human sleep; a learning apparatus for constructing the trapezoidal multi-stream convolutional neural network deep learning model, inputting the multi-source physiological signals into it for training to obtain a polysomnography model, and analyzing the multi-source physiological signals through the plurality of classifiers of the polysomnography model to obtain the sleep symptoms of the human body; and an alarm apparatus which, through an interactive interface, prompts when the sleep symptoms indicate a sleep disease.
It should be appreciated that the method steps in embodiments of the present invention may be implemented or carried out by computer hardware, a combination of hardware and software, or by computer instructions stored in non-transitory computer-readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Furthermore, the operations of the processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described herein may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, collectively executing on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described herein includes these and other different types of non-transitory computer-readable storage media. The invention also includes the computer itself when programmed according to the methods and techniques of the present invention.
The computer program can be applied to the input data to perform the functions described herein, thereby converting the input data to generate output data that is stored to the non-volatile memory. The output information may also be applied to one or more output devices such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects produced on a display.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present invention.

Claims (5)

1. A sleep monitoring method based on multi-source signals, the method comprising:
collecting a plurality of different types of multi-source physiological signals of a human body during sleep through a wearable device;
constructing a trapezoidal multi-stream convolutional neural network deep learning model, and inputting the multi-source physiological signals into the model for training to obtain a polysomnography model;
wherein constructing the trapezoidal multi-stream convolutional neural network deep learning model and training it on the multi-source physiological signals to obtain the polysomnography model comprises, in order:
preprocessing the multi-source physiological signals, wherein the EEG is preprocessed and then filtered and decomposed with different Butterworth filters to obtain delta, theta, alpha, beta and gamma signal waves;
performing wavelet separation on the complex waveforms after filtering and decomposition to obtain wavelets;
performing a discrete wavelet transform on the wavelets and signal waves;
cutting the transformed signal waves at equal pixel intervals to obtain n corresponding waveform segments;
passing each of the n segments through x convolution layers and pooling layers to obtain corresponding results;
applying one-dimensional maximum pooling to the separated wavelet results, concatenating the convolution-layer outputs, and applying one-dimensional maximum pooling to the concatenated results through a fully connected layer;
applying a softmax activation function;
obtaining a classification result;
the discrete wavelet transform comprises: representing a sequence of values $x[n]$ in an arbitrary space $L^2(\mathbb{R})$ by a discrete wavelet transform, with the wavelet expressed as

$$\psi_{j,k}(t) = 2^{j/2}\,\psi\!\left(2^{j}t - k\right)$$

the wavelet uses the preprocessed signal and is localized in frequency and time by two complementary filters, namely a high-frequency filter and a low-frequency filter, where the scale parameter is $2^{-j}$, the displacement parameter is $k$, and $t$ represents time;
the trapezoidal multi-stream convolutional neural network deep learning model comprises:
when processing a signal wave, passing the signal wave through a convolution kernel of a given size and through pooling and convolution layers, then performing equal-pixel-point cutting and calculating the high-frequency parameters $\Phi^{m}(r)$ and $\Phi^{m+1}(r)$, the calculation flow comprising the following steps:
S510, mapping the sequence $u(1), u(2), \dots, u(N)$ to $m$-dimensional vectors $X(i) = [u(i), u(i+1), \dots, u(i+m-1)]$;
S520, calculating, for each vector $X(i)$, the distance to every other vector $X(j)$, where $i, j = 1, 2, \dots, N-m+1$ and $j \neq i$, the distance being $d[X(i), X(j)] = \max_{k=0,\dots,m-1} \lvert u(i+k) - u(j+k) \rvert$;
S530, for each $i$, recording the number of cases in which $d[X(i), X(j)] \le r$, where $r$ is the similarity tolerance (similar-formula difference threshold), and taking the ratio of the number of such cases to the total number of vectors $N-m+1$:

$$C_i^{m}(r) = \frac{\#\{\, j : d[X(i), X(j)] \le r \,\}}{N-m+1}$$

based on the similarity tolerance $r$, $C_i^{m}(r)$ represents the probability that $m$-dimensional patterns in the sequence are similar to each other;
S540, taking the natural logarithm of each $C_i^{m}(r)$, then averaging over all $i$ and reporting the result as $\Phi^{m}(r)$, which represents the average proximity between vectors:

$$\Phi^{m}(r) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} \ln C_i^{m}(r)$$

S550, for dimension $m+1$, repeating steps S510-S540 to determine $C_i^{m+1}(r)$ and $\Phi^{m+1}(r)$;
wherein the approximate entropy is defined as:

$$\mathrm{ApEn}(m, r) = \Phi^{m}(r) - \Phi^{m+1}(r)$$

S560, setting a threshold $\delta$ and comparing the test signal parameter with the normal signal parameter: if (test signal parameter - normal signal parameter) $\ge \delta$, setting the classification flag to 1; otherwise setting it to 0;
and analyzing the multi-source physiological signals through a plurality of classifiers of the polysomnography model to obtain the sleep symptoms of the human body.
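The Butterworth filtering step above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes SciPy's `butter`/`sosfiltfilt` API and the conventional EEG band edges, since the claim specifies neither the filter order nor the cutoff frequencies.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Conventional EEG band edges in Hz (assumed; the claim does not give them).
EEG_BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 45.0),
}

def decompose_eeg(signal, fs, order=4):
    """Split an EEG trace into band-limited signal waves using
    zero-phase Butterworth band-pass filters, one per band."""
    bands = {}
    for name, (lo, hi) in EEG_BANDS.items():
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands[name] = sosfiltfilt(sos, signal)
    return bands
```

As a sanity check, a 10 Hz test tone should pass the alpha filter nearly unchanged and be strongly attenuated by the delta filter.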
2. The multi-source signal based sleep monitoring method of claim 1, characterized in that the multi-source physiological signals comprise at least one of an EEG, an ECG, an EOG, an EMG, a nasal information signal, and a pulse signal.
3. The multi-source signal based sleep monitoring method of claim 1, wherein the normal signal parameter is obtained from a normal sample and the test signal parameter is obtained from a diseased sample.
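Steps S510-S560 of claim 1 describe the standard approximate-entropy statistic and the threshold comparison between the test and normal signal parameters used in claim 3. The following is a minimal pure-Python sketch under the usual Chebyshev (maximum-coordinate) distance assumption; the function names and default arguments are illustrative, not from the patent.

```python
import math

def approximate_entropy(u, m=2, r=0.2):
    """ApEn(m, r) = Phi^m(r) - Phi^{m+1}(r) over sequence u (S510-S550)."""
    n = len(u)

    def phi(m):
        # S510: embed u into N - m + 1 overlapping m-dimensional vectors.
        vecs = [u[i:i + m] for i in range(n - m + 1)]
        # S520/S530: for each vector, the fraction of vectors within
        # Chebyshev distance r (the similarity tolerance).
        ratios = [
            sum(1 for y in vecs
                if max(abs(a - b) for a, b in zip(x, y)) <= r) / len(vecs)
            for x in vecs
        ]
        # S540: average natural logarithm of the ratios.
        return sum(math.log(c) for c in ratios) / len(ratios)

    return phi(m) - phi(m + 1)

def classify(test_param, normal_param, delta):
    """S560: flag 1 when the test parameter exceeds normal by >= delta."""
    return 1 if test_param - normal_param >= delta else 0
```

A perfectly regular sequence yields ApEn near zero, while an irregular one yields a larger value, which is what makes the statistic usable as a disease-screening parameter.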
4. A sleep monitoring device based on multi-source signals, characterized in that the device is adapted to perform the method of any one of claims 1-3, the device comprising:
an acquisition device comprising a plurality of different wearable devices, for collecting a plurality of different types of multi-source physiological signals while the human body sleeps;
a learning device for constructing the trapezoidal multi-stream convolutional neural network deep learning model, inputting the multi-source physiological signals into the model for learning to obtain a polysomnography model, and analyzing the multi-source physiological signals through a plurality of classifiers of the polysomnography model to obtain the sleep symptoms of the human body;
and an alarm device for prompting, through an interactive interface, sleep symptoms that indicate a sleep disease.
5. A computer-readable storage medium storing a computer program which, when executed by a processor, performs the method according to any one of claims 1-3.
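The single-level discrete wavelet transform invoked by claim 1 can be illustrated with the simplest mother wavelet, the Haar wavelet; the patent does not name the wavelet, so this sketch is only an assumption. It shows the two complementary filters the claim describes: a low-frequency (approximation) and a high-frequency (detail) filter, each followed by downsampling by 2.

```python
import math

def haar_dwt(x):
    """One level of the discrete wavelet transform with the Haar wavelet:
    a low-pass (approximation) filter and its complementary high-pass
    (detail) filter, each downsampled by 2."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfectly reconstructs an even-length input."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out
```

Because the two filters are complementary, the original signal is recovered exactly from the approximation and detail coefficients, and a constant signal produces all-zero detail (high-frequency) coefficients.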
CN202110666187.9A 2021-06-16 2021-06-16 Sleep monitoring method, equipment and storage medium based on multi-source signals Active CN113349796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110666187.9A CN113349796B (en) 2021-06-16 2021-06-16 Sleep monitoring method, equipment and storage medium based on multi-source signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110666187.9A CN113349796B (en) 2021-06-16 2021-06-16 Sleep monitoring method, equipment and storage medium based on multi-source signals

Publications (2)

Publication Number Publication Date
CN113349796A CN113349796A (en) 2021-09-07
CN113349796B true CN113349796B (en) 2024-05-10

Family

ID=77534637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110666187.9A Active CN113349796B (en) 2021-06-16 2021-06-16 Sleep monitoring method, equipment and storage medium based on multi-source signals

Country Status (1)

Country Link
CN (1) CN113349796B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107438398A (en) * 2015-01-06 2017-12-05 大卫·伯顿 Portable wearable monitoring system
CN109922526A (en) * 2017-12-13 2019-06-21 苹果公司 Unlicensed band manages control instructions device
CN110897639A (en) * 2020-01-02 2020-03-24 清华大学深圳国际研究生院 Electroencephalogram sleep staging method based on deep convolutional neural network
CN111477299A (en) * 2020-04-08 2020-07-31 广州艾博润医疗科技有限公司 Method and device for regulating and controlling sound-electricity stimulation nerves by combining electroencephalogram detection and analysis control
WO2020248008A1 (en) * 2019-06-14 2020-12-17 The University Of Adelaide A method and system for classifying sleep related brain activity
CN112120691A (en) * 2020-09-22 2020-12-25 浙江智柔科技有限公司 Signal identification method and device based on deep learning and computer equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11439344B2 (en) * 2015-07-17 2022-09-13 Origin Wireless, Inc. Method, apparatus, and system for wireless sleep monitoring


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Development of a Portable Sleep Monitoring System; 吴锋, 周玉彬, 成奇明, 张信民, 潘玮, 刘娟, 俞梦孙; Medical Equipment (医疗卫生装备); 2008-11-15 (No. 11); full text *
Research on Convolutional Neural Networks for Brain Fatigue Detection; 杨硕, 丁建清, 王磊, 刘帅; Journal of Signal Processing (信号处理); 2019-04-25 (No. 04); full text *
Automatic Sleep Staging Method Based on Multi-Parameter Feature Fusion; 吕甜甜, 王心醉, 俞乾, 贾朋飞, 陈骁, 吴成雄; Journal of Computer Applications (计算机应用); 2017-12-20 (No. S2); full text *
Wavelet Transform and Biological Signal Processing; 周先来, 钟伯成; Journal of Hefei Union University (合肥联合大学学报); 2000-03-30 (No. 01); full text *

Also Published As

Publication number Publication date
CN113349796A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
US10869610B2 (en) System and method for identifying cardiac arrhythmias with deep neural networks
CN110555468A (en) Electroencephalogram signal identification method and system combining recursion graph and CNN
CN110929581A (en) Electroencephalogram signal identification method based on space-time feature weighted convolutional neural network
CN110141216B (en) Identification method, training method and system for QRS (QRS) characteristic waves of electrocardiosignals
CN111202517B (en) Sleep automatic staging method, system, medium and electronic equipment
CN114224360B (en) EEG signal processing method, equipment and storage medium based on improved EMD-ICA
CN114533086A (en) Motor imagery electroencephalogram decoding method based on spatial domain characteristic time-frequency transformation
CN112263218A (en) Sleep staging method and device
CN113925459A (en) Sleep staging method based on electroencephalogram feature fusion
CN115919330A (en) EEG Emotional State Classification Method Based on Multi-level SE Attention and Graph Convolution
Asghar et al. Semi-skipping layered gated unit and efficient network: hybrid deep feature selection method for edge computing in EEG-based emotion classification
CN112932501A (en) Method for automatically identifying insomnia based on one-dimensional convolutional neural network
CN114027786A (en) Sleep disordered breathing detection method and system based on self-supervision memory network
CN115414041A (en) Autism assessment device, method, terminal device and medium based on electroencephalogram data
CN114692682A (en) Method and system for classifying motor imagery based on graph embedding representation
Zhang et al. A novel multimodule neural network for EEG denoising
Nam et al. The effects of layer-wise relevance propagation-based feature selection for EEG classification: A comparative study on multiple datasets
Tiwari et al. EEG signals to digit classification using deep learning-based one-dimensional convolutional neural network
Shah et al. Analysis of EEG for Parkinson’s Disease Detection
Çelebi et al. An emotion recognition method based on EWT-3D–CNN–BiLSTM-GRU-AT model
CN113349796B (en) Sleep monitoring method, equipment and storage medium based on multi-source signals
CN116421200A (en) Brain electricity emotion analysis method of multi-task mixed model based on parallel training
CN114569116A (en) Three-channel image and transfer learning-based ballistocardiogram ventricular fibrillation auxiliary diagnosis system
CN111493864B (en) EEG signal mixed noise processing method, equipment and storage medium
Gondowijoyo et al. Applying artificial neural network on heart rate variability and electroencephalogram signals to determine stress

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant