CN111248868A - Rapid eye movement sleep analysis method, system and equipment - Google Patents

Rapid eye movement sleep analysis method, system and equipment

Info

Publication number: CN111248868A
Application number: CN202010105188.1A
Authority: CN (China)
Prior art keywords: eye movement, data, video, rapid eye, neural network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 刘茹涵 (Liu Ruhan), 黄献 (Huang Xian), 江文 (Jiang Wen)
Current assignee: Changsha Huxiang Medical Devices Co., Ltd.
Original assignee: Changsha Huxiang Medical Devices Co., Ltd.
Priority and filing date: 2020-02-20
Publication date: 2020-06-09
Application filed by Changsha Huxiang Medical Devices Co., Ltd.

Classifications

    • A61B5/0059 Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/389 Electromyography [EMG]
    • A61B5/398 Electrooculography [EOG], e.g. detecting nystagmus; Electroretinography [ERG]
    • A61B5/4806 Sleep evaluation
    • A61B5/4812 Detecting sleep stages or cycles
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device

Abstract

The application discloses a rapid eye movement sleep analysis method, system and equipment. Video polysomnography data are classified into rapid eye movement (REM) and non-rapid eye movement (NREM) periods, and the classification results are stored in correspondence with the video data. The video data are preprocessed, and the preprocessed video together with the corresponding classification results is divided into a training set and a test set. A deep neural network performs feature learning on the training-set video, classifies it against the given labels, and is then tuned on the test set. Video to be classified is preprocessed and segmented into input data matching the network input, and feeding this input to the network yields the classification result. Analyzing REM sleep automatically from video monitoring reduces the complexity of the video polysomnography system, reduces the contact burden on the patient, and improves both the comfort of the monitored patient and the accuracy of the analysis.

Description

Rapid eye movement sleep analysis method, system and equipment
Technical Field
The application belongs to the technical field of medical signal processing, and in particular relates to a method, a system and equipment for analyzing rapid eye movement sleep.
Background
Analysis of the nocturnal sleep structure is of great significance for understanding a patient's sleep condition. Within that structure, rapid eye movement (REM) sleep and non-rapid eye movement (NREM) sleep are the two principal components. During the NREM period, slow eye movements occur only occasionally, whereas the REM stage shows clearly conjugate, irregular, sharp eye movements accompanied by phasic muscle twitches. Because of these characteristics, REM and NREM periods are mainly distinguished clinically by interpreting the electrooculogram (EOG), chin electromyogram (EMG), and electroencephalogram (EEG) signals of polysomnography.
Current automatic REM sleep analysis methods rely mainly on EEG, EOG, or EMG signals, or a combination of them. Acquiring these signals requires the patient to wear additional equipment, such as EEG, EMG, and EOG electrodes, on the face or head. These devices place an extra burden on the patient, which can disturb sleep and reduce sleep quality. Meanwhile, with the rapid development of deep learning in the field of computer vision, automatically identifying rapid eye movement from video is no longer an insurmountable problem.
Disclosure of Invention
To solve the above problems in the prior art, the technical solutions provided by the application are as follows:
in a first aspect, an embodiment of the present application provides a rapid eye movement sleep analysis method, the method comprising: classifying video polysomnography data into REM and NREM periods, and storing the classification results and the video data in correspondence; preprocessing the video data, and dividing the preprocessed video data and the corresponding classification results into a training set and a test set; performing feature learning on the training-set video data based on a deep neural network, classifying against the given classification results, and tuning the deep neural network on the test set; preprocessing and segmenting the video information to be classified to obtain input data matching the network input; and feeding this input to the deep neural network to obtain its output result.
With this implementation, REM sleep is analyzed automatically from video monitoring, which greatly reduces the time physicians spend on sleep staging and improves their efficiency. The method can also simplify a video polysomnography system by replacing the chin EMG and EOG channels, reducing the system's complexity and the contact burden on the patient.
With reference to the first aspect, in a first possible implementation manner of the first aspect, classifying the video polysomnography data into REM and NREM periods and storing the classification results and video data in correspondence comprises: acquiring video polysomnography record information, which includes an EOG signal, an EMG signal, an EEG signal, and a video signal; acquiring the corresponding REM stage labels annotated by a clinician; and obtaining the REM-period data and NREM-period data from the video polysomnography record information and the corresponding clinician-annotated REM stage labels.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, preprocessing the video data and dividing the preprocessed video data and corresponding classification results into a training set and a test set comprises: dividing the video data into recording segments according to the minimum staging epoch marked by the physician, each segment having the same duration and corresponding to one staging label; performing preprocessing operations on the video data, comprising: resampling and computing stacked optical flow; constructing a fully processed standard data set from the processed video data and the staging labels; and dividing the standard data set into a training set and a test set.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, performing feature learning on the training-set video data based on the deep neural network, classifying against the given classification results, and tuning the network on the test set comprises: training the deep neural network with the training set; and testing with the test set and adjusting the parameters of the model.
In a second aspect, an embodiment of the present application provides a rapid eye movement sleep analysis system, comprising: a data classification module for classifying video polysomnography data into REM and NREM periods and storing the classification results and the video data in correspondence; a data preprocessing module for preprocessing the video data and dividing the preprocessed video data and corresponding classification results into a training set and a test set; a feature learning module for performing feature learning on the training-set video data based on a deep neural network, classifying against the given classification results, and tuning the network on the test set; a data input module for preprocessing and segmenting the video information to be classified to obtain input data matching the network input; and an output result module for feeding this input to the deep neural network to obtain its output result.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the data classification module comprises: a first acquisition unit for acquiring video polysomnography record information, which includes an EOG signal, an EMG signal, an EEG signal, and a video signal; a second acquisition unit for acquiring the corresponding REM stage labels annotated by a clinician; and a classification unit for obtaining the REM-period and NREM-period data from the video polysomnography record information and the corresponding clinician-annotated labels.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the data preprocessing module comprises: a data dividing unit for dividing the video data into recording segments according to the minimum staging epoch marked by the physician, each segment having the same duration and corresponding to one staging label; a preprocessing unit for performing preprocessing operations on the video data, comprising: resampling and computing stacked optical flow; and a construction unit for constructing a fully processed standard data set from the processed video data and the staging labels, the standard data set being divided into a training set and a test set.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the feature learning module comprises: a training unit for training the deep neural network with the training set; and a test unit for testing with the test set and adjusting the parameters of the model.
In a third aspect, an embodiment of the present application provides a rapid eye movement sleep analysis device, comprising: a processor; and a memory for storing computer-executable instructions; wherein, when the processor executes the computer-executable instructions, the processor performs the rapid eye movement sleep analysis method of the first aspect or any possible implementation manner of the first aspect.
Drawings
To illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments described in the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for analyzing rapid eye movement sleep according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a deep neural network according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a rapid eye movement sleep analysis system according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a rapid eye movement sleep analysis apparatus according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments.
Most existing polysomnography systems already include video signal acquisition, but because reviewing long video recordings consumes substantial manpower and material resources, this information is poorly utilized. In recent years, with the development of computer vision algorithms, automatic video image analysis has matured, and classifying video with deep learning has become feasible.
Fig. 1 is a schematic flow diagram of the rapid eye movement sleep analysis method provided in an embodiment of the present application. Referring to fig. 1, the method includes:
s101, classifying the video polysomnography data in a rapid eye movement period and a non-rapid eye movement period, and storing classification results and video data according to a corresponding relation.
Specifically, video polysomnography record information is acquired, including: an electrooculogram (EOG) signal, an electromyogram (EMG) signal, an electroencephalogram (EEG) signal, and a video signal. The corresponding REM stage labels annotated by a clinician are acquired, and the REM-period data and NREM-period data are obtained from the video polysomnography information and these labels.
In this embodiment of the invention, the video polysomnography data is a clinical polysomnography record of a sleep patient containing EOG, EMG, EEG, and video signals. The record comprises the raw data of each polysomnography signal and the physician's sleep staging report, which marks each period as non-rapid eye movement (NREM) sleep or rapid eye movement (REM) sleep.
S102, preprocessing the video data, and dividing the preprocessed video data and the corresponding classification results into a training set and a test set.
In this embodiment of the invention, data preprocessing specifically addresses segmentation and resampling. Preprocessing the REM-period data comprises the following steps:
the video data is divided into recording segments according to the minimum quick eye movement stage staging time interval marked by the doctor, each segment has the same time length and corresponds to a staging label.
In this embodiment of the invention, the video data set carries the REM and NREM labels in time order. The minimum REM staging epoch is the smallest sleep-staging resolution shown in the polysomnography report; for example, if the report's minimum resolution is 30 seconds, the duration of each NREM region is a multiple of 30 seconds. Each segment's label is either 0 or 1: label 0 means the fixed-length segment lies entirely within an NREM period, and label 1 means it lies entirely within a REM period. Accordingly, the video is divided into segments of the minimum resolution duration and each segment is given a 0/1 label, as sketched below.
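The following is a minimal Python sketch of this segmentation step, assuming the video has been decoded into a frame array and the staging report provides one 0/1 label per epoch; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def segment_video(frames, fps, epoch_labels, epoch_seconds=30):
    """Split a video (frames: array of shape [T, H, W, C]) into fixed-length
    segments, pairing each with its staging label (0 = NREM, 1 = REM)."""
    frames_per_epoch = int(round(fps * epoch_seconds))
    segments, labels = [], []
    for i, label in enumerate(epoch_labels):
        start = i * frames_per_epoch
        end = start + frames_per_epoch
        if end > len(frames):          # drop a trailing partial epoch
            break
        segments.append(frames[start:end])
        labels.append(label)
    return np.stack(segments), np.array(labels)
```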
Preprocessing operations are then performed on the video data, comprising: resampling and computing stacked optical flow.
The raw video data does not necessarily match the image size and frame count required by the subsequent deep neural network, so this step adjusts the size and frame count of the video, for example as in the sketch below.
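A sketch of this preprocessing, assuming "resampling" means spatial resizing plus temporal subsampling to a fixed frame count, and "stacked optical flow" means stacking dense flow fields between consecutive frames; the patent does not name a flow algorithm, so OpenCV's Farneback flow is used here for illustration. Seventeen frames are kept so that the 16 resulting flow fields divide evenly under the network's temporal pooling.

```python
import cv2
import numpy as np

def preprocess_segment(frames, out_size=(112, 112), out_frames=17):
    """Resize/resample one video segment and compute stacked optical flow."""
    # Resampling: subsample to out_frames frames and resize each kept frame.
    idx = np.linspace(0, len(frames) - 1, out_frames).astype(int)
    gray = [cv2.cvtColor(cv2.resize(frames[i], out_size), cv2.COLOR_BGR2GRAY)
            for i in idx]
    # Stacked optical flow: x/y flow between each pair of consecutive frames.
    flows = []
    for prev, nxt in zip(gray[:-1], gray[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)                 # each flow has shape (H, W, 2)
    return np.stack(flows)                 # (out_frames - 1, H, W, 2)
```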
A fully processed standard data set is then constructed from the processed video data and the staging labels.
The standard data set is divided into a training set and a testing set according to a specific proportion.
In this embodiment of the invention, the preprocessed video data and corresponding classification results are divided into a training set and a test set by proportional allocation at a ratio of 8:2; that is, all video data used for modeling is split, together with its REM/NREM labels, into an 80% training set and a 20% test set, for example as follows.
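A one-line sketch of the split, using scikit-learn on the segment and label arrays built above; stratifying on the label (so both classes appear in each subset) is an added assumption, as the patent only fixes the 8:2 proportion.

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    segments, labels, test_size=0.2, stratify=labels, random_state=0)
```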
S103, performing feature learning on the training-set video data based on the deep neural network, classifying against the given classification results, and tuning the network on the test set.
Specifically, the deep neural network is trained with the training set, then tested with the test set, and the parameters of the model are adjusted, for example along the lines of the sketch below.
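A generic training-and-evaluation sketch for this step, assuming a model that outputs two-class logits (such as the network sketched after the architecture description below) and PyTorch data loaders built from the training and test sets; the optimizer, learning rate, and epoch count are illustrative, since the patent does not specify them.

```python
import torch
import torch.nn as nn

def train_and_evaluate(model, train_loader, test_loader, epochs=20, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()        # folds the softmax into the loss
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:          # x: video blocks, y: 0/1 labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # "Adjustment on the test set": monitor accuracy here and tune the
        # model's hyperparameters against it.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in test_loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        print(f"epoch {epoch}: test accuracy {correct / total:.3f}")
```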
In an embodiment of the invention, a deep neural network with the structure shown in fig. 2 is provided. The network maps a three-dimensional video block to an output REM or NREM label. The video block is a stack of multiple frames with dimensions H × W × F, where H is the height of the video image, W its width, and F the number of frames included. The network uses two kinds of feature extraction module: module I comprises one three-dimensional convolution layer and one three-dimensional pooling layer, and module II comprises two three-dimensional convolution layers and one three-dimensional pooling layer. The full network consists, in order, of a feature extraction group of two module I's and three module II's, a feature fusion stage of two fully connected layers, and a softmax classification layer.
In this embodiment, the first module I has one three-dimensional convolution layer with 64 convolution kernels of size 3 × 3 and stride 1 × 1, followed by one pooling layer with kernel size 1 × 2. The second module I has one convolution layer with 128 kernels of size 3 × 3 and stride 1 × 1, followed by one pooling layer with kernel size 2 × 2. The first module II has two convolution layers, each with 256 kernels of size 3 × 3 and stride 1 × 1, followed by one pooling layer with kernel size 2 × 2. The second module II has two convolution layers, each with 512 kernels of size 3 × 3 and stride 1 × 1, followed by one pooling layer with kernel size 2 × 2. The third module II likewise has two convolution layers, each with 512 kernels of size 3 × 3 and stride 1 × 1, followed by one pooling layer with kernel size 2 × 2. Of the two fully connected layers, the first has 3096 neurons and the second has 2, and the softmax output has two classes.
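The following PyTorch sketch instantiates the architecture just described. Several details are interpretive assumptions: the "3 × 3" kernels are read as 3×3×3 three-dimensional kernels with padding 1; the first "1 × 2" pooling as (1, 2, 2) and the remaining "2 × 2" poolings as (2, 2, 2); ReLU activations are inserted after each convolution (the patent names none); and a lazily sized first fully connected layer stands in for the flatten dimension, which depends on the input size.

```python
import torch
import torch.nn as nn

class ModuleI(nn.Module):
    """Module I: one 3D convolution layer plus one 3D pooling layer."""
    def __init__(self, c_in, c_out, pool):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=pool))

    def forward(self, x):
        return self.block(x)

class ModuleII(nn.Module):
    """Module II: two 3D convolution layers plus one 3D pooling layer."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(c_out, c_out, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(2, 2, 2)))

    def forward(self, x):
        return self.block(x)

class REMNet(nn.Module):
    """Feature extraction group (two module I, three module II), two fully
    connected layers, and a two-class output; softmax is applied outside."""
    def __init__(self, in_channels=2):        # 2 channels for x/y optical flow
        super().__init__()
        self.features = nn.Sequential(
            ModuleI(in_channels, 64, pool=(1, 2, 2)),   # 64 kernels
            ModuleI(64, 128, pool=(2, 2, 2)),           # 128 kernels
            ModuleII(128, 256),                         # 256 kernels
            ModuleII(256, 512),                         # 512 kernels
            ModuleII(512, 512))                         # 512 kernels
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(3096),                        # feature fusion, 3096
            nn.ReLU(inplace=True),
            nn.Linear(3096, 2))                         # NREM/REM logits

    def forward(self, x):                  # x: (batch, C, F, H, W)
        return self.classifier(self.features(x))
```

During training, nn.CrossEntropyLoss applies the softmax implicitly; at inference, torch.softmax over the two logits reproduces the patent's softmax classification layer.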
S104, preprocessing and segmenting the video information to be classified to obtain input data matching the input of the deep neural network.
In this embodiment of the invention, the video information to be classified is preprocessed and segmented according to the method of S102.
S105, feeding the input data to the deep neural network to obtain the network's output result, for example as in the sketch below.
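A minimal end-to-end inference sketch tying the previous sketches together: segments of the new video are preprocessed as in S102 and pushed through the trained network, and the softmax over the two logits gives the REM probability. The batch below is a random placeholder standing in for real preprocessed segments.

```python
import torch

model = REMNet(in_channels=2)              # x/y optical flow channels
model.eval()
with torch.no_grad():
    # (num_segments, channels, frames, height, width); real inputs would come
    # from segment_video() followed by preprocess_segment() (with the flow
    # channels moved to the front).
    inputs = torch.randn(4, 2, 16, 112, 112)
    probs = torch.softmax(model(inputs), dim=1)   # column 1 = P(REM)
    is_rem = probs[:, 1] > 0.5
```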
This embodiment thus provides a rapid eye movement sleep analysis method that analyzes REM sleep automatically from video monitoring, greatly reducing the time physicians spend on sleep staging and improving their efficiency. The method can also simplify a video polysomnography system by replacing the chin EMG and EOG channels, reducing the system's complexity, reducing the contact burden on the patient, and improving the comfort of patients undergoing polysomnography.
Corresponding to the above rapid eye movement sleep analysis method embodiment, the present application also provides an embodiment of a rapid eye movement sleep analysis system. Referring to fig. 3, the rapid eye movement sleep analysis system 20 includes: a data classification module 201, a data preprocessing module 202, a feature learning module 203, a data input module 204, and an output result module 205.
The data classification module 201 is configured to classify the video polysomnography data into REM and NREM periods and store the classification results and the video data in correspondence. The data preprocessing module 202 is configured to preprocess the video data and divide the preprocessed video data and corresponding classification results into a training set and a test set in a fixed proportion. The feature learning module 203 is configured to perform feature learning on the training-set video data based on the deep neural network, classify against the given classification results, and tune the network on the test set. The data input module 204 is configured to preprocess and segment the video information to be classified to obtain input data matching the network input. The output result module 205 is configured to feed this input to the deep neural network to obtain its output result.
Further, the data classification module comprises: a first acquisition unit for acquiring video polysomnography record information, including: an EOG signal, an EMG signal, an EEG signal, and a video signal; a second acquisition unit for acquiring the corresponding REM stage labels annotated by the clinician; and a classification unit for obtaining the REM-period and NREM-period data from the video polysomnography record information and the corresponding clinician-annotated REM stage labels.
The data preprocessing module comprises: a data dividing unit for dividing the video data into recording segments according to the minimum REM staging epoch marked by the physician, each segment having the same duration and corresponding to one staging label; a preprocessing unit for performing the preprocessing operations on the video data, comprising: resampling and computing stacked optical flow; and a construction unit for constructing a complete standard data set from the processed video data and the staging labels, the standard data set being divided into a training set and a test set in a specific proportion.
The feature learning module comprises: a training unit for training the deep neural network with the training set; and a test unit for testing with the test set and adjusting the parameters of the model.
An embodiment of the present application further provides a rapid eye movement sleep analysis apparatus, and referring to fig. 4, the rapid eye movement sleep analysis apparatus 30 includes: a processor 301, a memory 302, and a communication interface 303.
In fig. 4, the processor 301, the memory 302, and the communication interface 303 may be connected to each other by a bus; the bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
The processor 301 generally controls the overall functions of the rapid eye movement sleep analysis device 30, such as starting the rapid eye movement sleep analysis device, dividing video information in video polysomnography data into rapid eye movement period data and non-rapid eye movement period data after the rapid eye movement sleep analysis device is started, preprocessing the video data to obtain a training set and a test set, performing feature learning and classification on the preprocessed video data based on a deep neural network, analyzing a rapid eye movement period detection result, and the like.
The processor 301 may be a general-purpose processor such as a central processing unit (CPU) or a network processor (NP), or a combination of a CPU and an NP. It may also be a microcontroller unit (MCU), and may further include a hardware chip such as an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or the like.
Memory 302 is configured to store computer-executable instructions to support the operation of the rapid eye movement sleep analysis device 30. The memory 302 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
After the rapid eye movement sleep analysis device 30 is started, the processor 301 and the memory 302 are powered on, and the processor 301 reads and executes the computer executable instructions stored in the memory 302 to complete all or part of the steps in the above-mentioned embodiment of the rapid eye movement sleep analysis method.
The communication interface 303 is used for the rapid eye movement sleep analysis device 30 to transmit data, for example, to realize communication with a client and a server. The communication interface 303 includes a wired communication interface, and may also include a wireless communication interface. The wired communication interface comprises a USB interface, a Micro USB interface and an Ethernet interface. The wireless communication interface may be a WLAN interface, a cellular network communication interface, a combination thereof, or the like.
In an exemplary embodiment, the rapid eye movement sleep analysis device 30 provided by the embodiments of the present application further includes a power supply component that provides power to the various components of the device. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the rapid eye movement sleep analysis device 30.
A communication component configured to facilitate wired or wireless communication between the rapid eye movement sleep analysis device 30 and other devices. The rapid eye movement sleep analysis device 30 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. The communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. The communication component also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the rapid eye movement sleep analysis device 30 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components.
The same and similar parts among the various embodiments in this specification may be referred to each other. In particular, for the system and device embodiments, since their methods are substantially similar to the method embodiments, the description is relatively simple; for relevant points, refer to the description in the method embodiments.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method for analyzing rapid eye movement sleep, the method comprising:
classifying video polysomnography data into a rapid eye movement period and a non-rapid eye movement period, and storing the classification results and the video data in correspondence;
preprocessing the video data, and dividing the preprocessed video data and the corresponding classification results into a training set and a test set;
performing feature learning on the video data in the training set based on a deep neural network, classifying against the given corresponding classification results, and tuning the deep neural network on the test set;
preprocessing and segmenting the video information to be classified to obtain input data matching the input of the deep neural network;
and feeding the input data to the deep neural network to obtain its output result.
2. The method of claim 1, wherein classifying the video polysomnography data into a rapid eye movement period and a non-rapid eye movement period, and storing the classification results and the video data in correspondence, comprises:
acquiring video polysomnography record information, the record information comprising: an electrooculogram signal, an electromyogram signal, an electroencephalogram signal, and a video signal;
acquiring the corresponding rapid eye movement stage labels annotated by a clinician;
and obtaining the rapid eye movement period data and the non-rapid eye movement period data from the video polysomnography record information and the corresponding clinician-annotated rapid eye movement stage labels.
3. The method of claim 2, wherein preprocessing the video data and dividing the preprocessed video data and the corresponding classification results into a training set and a test set comprises:
dividing the video data into recording segments according to the minimum rapid eye movement staging epoch marked by the physician, each segment having the same duration and corresponding to one staging label;
performing preprocessing operations on the video data, the operations comprising: resampling and computing stacked optical flow;
constructing a fully processed standard data set from the processed video data and the staging labels;
and dividing the standard data set into a training set and a test set.
4. The method of claim 3, wherein performing feature learning on the video data in the training set based on the deep neural network, classifying against the given classification results, and tuning the deep neural network on the test set comprises:
training the deep neural network with the training set;
and testing with the test set and adjusting the parameters of the model.
5. A rapid eye movement sleep analysis system, the system comprising:
a data classification module for classifying video polysomnography data into a rapid eye movement period and a non-rapid eye movement period, and storing the classification results and the video data in correspondence;
a data preprocessing module for preprocessing the video data and dividing the preprocessed video data and the corresponding classification results into a training set and a test set;
a feature learning module for performing feature learning on the video data in the training set based on a deep neural network, classifying against the given corresponding classification results, and tuning the deep neural network on the test set;
a data input module for preprocessing and segmenting the video information to be classified to obtain input data matching the input of the deep neural network;
and an output result module for feeding the input data to the deep neural network to obtain its output result.
6. The system of claim 5, wherein the data classification module comprises:
a first acquisition unit for acquiring video polysomnography record information, the record information comprising: an electrooculogram signal, an electromyogram signal, an electroencephalogram signal, and a video signal;
a second acquisition unit for acquiring the corresponding rapid eye movement stage labels annotated by a clinician;
and a classification unit for obtaining the rapid eye movement period data and the non-rapid eye movement period data from the video polysomnography record information and the corresponding clinician-annotated rapid eye movement stage labels.
7. The system of claim 6, wherein the data pre-processing module comprises:
a data dividing unit for dividing the video data into recording segments according to the minimum rapid eye movement staging epoch marked by the physician, each segment having the same duration and corresponding to one staging label;
a preprocessing unit for performing preprocessing operations on the video data, the operations comprising: resampling and computing stacked optical flow;
and a construction unit for constructing a fully processed standard data set from the processed video data and the staging labels, the standard data set being divided into a training set and a test set.
8. The system of claim 7, wherein the feature learning module comprises:
a training unit for training the deep neural network with the training set;
and a test unit for testing with the test set and adjusting the parameters of the model.
9. A rapid eye movement sleep analysis apparatus, comprising:
a processor;
a memory for storing computer executable instructions;
when the computer-executable instructions are executed by the processor, the processor performs the rapid eye movement sleep analysis method of any one of claims 1-4.
CN202010105188.1A (filed 2020-02-20, priority 2020-02-20): Rapid eye movement sleep analysis method, system and equipment. Publication: CN111248868A. Status: pending.

Priority Applications (1)

CN202010105188.1A (priority and filing date 2020-02-20): Rapid eye movement sleep analysis method, system and equipment

Publications (1)

CN111248868A, published 2020-06-09

Family

ID=70941611

Country Status (1)

CN: CN111248868A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6805668B1 * 2001-06-26 2004-10-19 Cadwell Industries, Inc. System and method for processing patient polysomnograph data utilizing multiple neural network processing
US20080262373A1 * 2007-04-02 2008-10-23 Burns Joseph W Automated polysomnographic assessment for rapid eye movement sleep behavior disorder
CN107205650A * 2015-01-27 2017-09-26 Apple Inc. System for determining sleep quality
CN104793493A * 2015-04-09 2015-07-22 Nanjing University of Posts and Telecommunications Semi-automatic sleep staging device based on real-time neural network
CN104834946A * 2015-04-09 2015-08-12 Tsinghua University Method and system for non-contact sleep monitoring
WO2018070935A1 * 2016-10-11 2018-04-19 National University Of Singapore Determining sleep stages
CN107174209A * 2017-06-02 2017-09-19 Nanjing University of Science and Technology Sleep staging method based on nonlinear dynamics
CN110097039A * 2019-05-30 2019-08-06 Northeast Electric Power University Energy-saving elderly-care system with sleep state monitoring based on deep learning image recognition
CN110348500A * 2019-06-30 2019-10-18 Zhejiang University Sleep disorder aided diagnosis method based on deep learning and infrared thermal imaging

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113630658A * 2021-07-23 2021-11-09 Chongqing Tianru Biotechnology Co., Ltd. System and method for collecting and labeling gastrointestinal endoscope video image data
CN114998229A * 2022-05-23 2022-09-02 University of Electronic Science and Technology of China Non-contact sleep monitoring method based on deep learning and multi-parameter fusion
CN114998229B * 2022-05-23 2024-04-12 University of Electronic Science and Technology of China Non-contact sleep monitoring method based on deep learning and multi-parameter fusion
CN115581435A * 2022-08-30 2023-01-10 Hunan Wanmai Medical Technology Co., Ltd. Sleep monitoring method and device based on multiple sensors


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-06-09)