CN108198617A - Disease evaluation method, terminal device and computer-readable medium - Google Patents

Disease evaluation method, terminal device and computer-readable medium

Info

Publication number
CN108198617A
Authority
CN
China
Prior art keywords
sleep apnea
data
unit
target
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711468158.1A
Other languages
Chinese (zh)
Inventor
王伟
刘洪涛
梁杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Original Assignee
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Het Data Resources and Cloud Technology Co Ltd filed Critical Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority to CN201711468158.1A priority Critical patent/CN108198617A/en
Publication of CN108198617A publication Critical patent/CN108198617A/en
Pending legal-status Critical Current

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The embodiment of the invention discloses a disease evaluation method, a terminal device and a computer-readable medium. The method includes: acquiring first target sleep apnea data of a first unit time; inputting the first target sleep apnea data into an N-layer deep neural network and acquiring middle-layer output data of the deep neural network as second target sleep apnea data, where N is a positive integer; and simultaneously inputting the first target sleep apnea data and the second target sleep apnea data into a sleep apnea evaluation model to obtain a sleep apnea evaluation result, the sleep apnea evaluation result indicating whether the user corresponding to the first target sleep apnea data experiences sleep apnea within the first unit time. By adopting the embodiment of the invention, sleep apnea diagnosis can be carried out for the user intelligently, improving the convenience, rapidity and accuracy of disease evaluation.

Description

Disease evaluation method, terminal device and computer readable medium
Technical Field
The invention relates to the technical field of intelligent medical treatment, in particular to a disease evaluation method, terminal equipment and a computer readable medium.
Background
Sleep Apnea Syndrome (SAS) is a syndrome whose etiology and pathogenesis are currently unclear. Its clinical manifestations mainly include snoring during night sleep accompanied by symptoms such as apnea and daytime sleepiness. The apnea can cause repeated hypercapnia and nocturnal hypoxia, which can lead to complications such as coronary heart disease, diabetes and cerebrovascular disease, and in severe cases even sudden death at night. How to accurately diagnose sleep apnea syndrome is therefore an important part of sleep medicine.
In order to solve the above problems, the SAS detection methods proposed in the prior art mainly include X-ray projection detection, polysomnography (PSG) detection and nasopharyngoscope detection. In practice, however, the SAS detection methods provided by the prior art all depend on expensive medical equipment and are difficult to popularize. In particular, the PSG detection method requires more than 7 hours of detection in a sleep monitoring room equipped with expensive medical equipment, involves many monitoring signals, and requires a professional to analyze whether a user has sleep apnea syndrome and, if so, the severity of the condition. This consumes a large amount of labor and equipment cost and is not easy to popularize. Therefore, a simple and convenient disease assessment scheme needs to be designed.
Disclosure of Invention
The embodiment of the invention provides a disease evaluation method, which can intelligently and conveniently diagnose the sleep apnea disease grade of a user and improve the convenience, rapidness and accuracy of disease evaluation.
In a first aspect, embodiments of the present invention provide a method for evaluating a disease condition, the method including:
acquiring first target sleep apnea data of a first unit time;
inputting the first target sleep apnea data into N layers of deep neural networks, and acquiring middle layer output data of the deep neural networks as second target sleep apnea data, wherein N is a positive integer;
and simultaneously inputting the first target sleep apnea data and the second target sleep apnea data into a sleep apnea evaluation model to obtain a sleep apnea evaluation result, wherein the sleep apnea evaluation result is used for indicating whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time.
In some possible embodiments, the middle layer of the deep neural network is layer N-1 of the deep neural network.
In some possible embodiments, the first target sleep apnea data comprises sleep apnea data for a second unit of time, the first unit of time comprises the second unit of time and the first unit of time is greater than the second unit of time.
In some possible embodiments, the first target sleep apnea data includes sleep apnea data from the (i-n)-th second unit time to the (i+m)-th second unit time, where i is a positive integer, n is a positive integer, and m is a positive integer.
In some possible embodiments, before the first target sleep apnea data is input into the sleep apnea evaluation model, the method further includes:
obtaining the sleep apnea assessment model.
In some possible embodiments, the obtaining the sleep apnea assessment model comprises:
acquiring a preset number of first sleep apnea sample data of the first unit time;
inputting the first sleep apnea sample data into an N-layer deep neural network, and acquiring middle-layer output data of the deep neural network as second sleep apnea sample data;
and simultaneously inputting the first sleep apnea sample data and the second sleep apnea sample data, and training a preset recognition model to obtain the sleep apnea evaluation model.
In some possible embodiments, said obtaining a preset number of first sleep apnea sample data for said first unit of time comprises:
acquiring sleep apnea data of a preset number of users;
marking the sleep apnea data based on a third unit time, thereby obtaining sleep apnea marking data comprising one or more sleep apnea marks, wherein the second unit time comprises a plurality of the third unit times;
obtaining the preset number of first sleep apnea sample data according to the sleep apnea marking data and the second unit time, wherein the first sleep apnea sample data comprises a sleep apnea label; under the condition that the number of the continuous sleep apnea marks in the second unit time exceeds a preset threshold, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data has sleep apnea in the second unit time; otherwise, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data does not have sleep apnea within the second unit time.
In some possible embodiments, after the first target sleep apnea data and the second target sleep apnea data are simultaneously input to a sleep apnea evaluation model to obtain a sleep apnea evaluation result, the method further includes:
determining a sleep apnea symptom grade of the user according to a plurality of sleep apnea evaluation results;
wherein the sleep apnea disorder level comprises any one of: health grade, severe grade, moderate grade, mild grade.
In a second aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a unit configured to execute the method of the first aspect.
In a third aspect, an embodiment of the present invention provides another terminal device, including: a processor, a memory, a communication interface, and a bus; the processor, the memory and the communication interface are connected through the bus and complete mutual communication; the memory stores executable program code; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions, which, when executed by a processor, cause the processor to perform the method of the first aspect.
According to the embodiment of the invention, first target sleep apnea data of a first unit time is obtained, then the first target sleep apnea data is input into an N-layer deep neural network, middle layer output data of the deep neural network is obtained to serve as second target sleep apnea data, N is a positive integer, finally the first target sleep apnea data and the second target sleep apnea data are simultaneously input into a sleep apnea evaluation model, and a sleep apnea evaluation result is obtained and is used for indicating whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time. By adopting the embodiment of the invention, the sleep apnea disease diagnosis of the user can be intelligently carried out, and the convenience, the rapidness and the accuracy of the disease evaluation are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart diagram of a method for evaluating a condition according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram of a method for evaluating a condition according to another embodiment of the present invention;
fig. 3 is a schematic block diagram of a terminal device according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a terminal device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the terminals described in embodiments of the invention include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that in some embodiments, the device may not be a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or touchpad).
In the discussion that follows, a terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
Referring to fig. 1, which is a schematic flow chart of a method for evaluating a disease condition according to an embodiment of the present invention, the method shown in fig. 1 may include the following steps:
step S102, the terminal device obtains first target sleep apnea data in first unit time.
In the present application, the first target sleep apnea data includes sleep apnea data for a second unit time, the first unit time includes the second unit time and the first unit time is greater than the second unit time.
The sleep apnea data may be physiological data used to determine whether a user is experiencing sleep apnea, including but not limited to electrocardiogram (ECG) data, photoplethysmogram (PPG) data, ballistocardiogram (BCG) data, seismocardiogram (SCG) data, impedance cardiogram (ICG) data, pulse wave (PW) data, blood pressure (BP) data, and so forth.
The first unit time and the second unit time are time interval measurement units set by the user side or the terminal device side; the measurement unit may be minutes (min), hours (h), seconds (s), and the like, which is not limited in the present invention. In this application, the duration of the period included in the first unit time is greater than the duration of the period included in the second unit time, that is, the first unit time is greater than the second unit time. Specifically, the first unit time may be an integer multiple of the second unit time (for example, 3 times the second unit time), or a non-integer multiple (for example, 1.5 times the second unit time). For example, if the current sleep apnea data of the second unit time is the sleep apnea data of the 5th minute, the corresponding first target sleep apnea data of the first unit time may include the data from 3 minutes before and after the 5th minute, that is, the data from the 2nd minute to the 8th minute; alternatively, it may include the data from 0.5 minutes before and after the 5th minute, that is, the data from the 4.5th minute to the 6.5th minute. It will be appreciated that, because apnea events are correlated in time, such contextual association of the target sleep apnea data makes the assessment more accurate.
Step S104, the terminal device inputs the first target sleep apnea data into an N-layer deep neural network and obtains the output data of a middle layer of the deep neural network as second target sleep apnea data, where N is a positive integer.
The Deep Neural Network (DNN) may be a DNN including N layers of Neural networks, where N is set by a user side or a terminal device side in a user-defined manner, and the present application is not limited thereto.
Step S106, the terminal device simultaneously inputs the first target sleep apnea data and the second target sleep apnea data into a sleep apnea evaluation model to obtain a sleep apnea evaluation result, and the sleep apnea evaluation result is used for indicating whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time.
The sleep apnea evaluation model is a mathematical model set by a user side or a terminal device side in a user-defined mode, and is specifically set forth below.
The following description refers to some specific embodiments and alternative embodiments of the present application.
In step S102, the terminal device may first acquire the physiological data of the user in the first unit time, and the acquisition mode of the physiological data is not limited in the present invention, for example, the physiological data is acquired from a server or other devices through a network, or is extracted from a local database, and the like. For the physiological data, reference may be made to the related explanations in the foregoing embodiments, which are not repeated herein.
Secondly, the terminal device may perform feature extraction on the physiological data to obtain physiological feature data (referred to as the first target sleep apnea data in this application) of the first unit time. The physiological feature data may be used to determine whether the user has experienced sleep apnea within the first unit time. Taking the case in which the physiological data includes electrocardiogram ECG data (an ECG signal) as an example, the terminal device may perform feature extraction processing on the ECG data, such as R-wave detection and RR-interval calculation, to extract ECG feature data from the ECG data. The ECG feature data may also be referred to as heart rate variability (HRV) data. Specifically, the terminal device may perform linear feature extraction and/or nonlinear feature extraction processing on the ECG data. Accordingly, when the terminal device performs linear feature extraction on the ECG data, the obtained ECG feature data may be linear domain feature data; when the terminal device performs nonlinear feature extraction on the ECG data, the obtained ECG feature data may be nonlinear domain feature data.
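For illustration only, a minimal Python sketch of the kind of R-wave detection and RR-interval calculation mentioned above is given below; it is not the patent's implementation, and the peak-height and minimum-spacing settings are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def rr_intervals_ms(ecg, fs):
    """Detect R peaks in a raw ECG trace and return RR intervals in milliseconds.

    ecg: 1-D array of ECG samples; fs: sampling rate in Hz.
    The peak height (0.6 in normalized units) and the minimum peak spacing
    (0.3 s) are illustrative values, not values taken from the patent.
    """
    ecg = (ecg - np.mean(ecg)) / (np.std(ecg) + 1e-12)   # crude amplitude normalization
    peaks, _ = find_peaks(ecg, height=0.6, distance=int(0.3 * fs))
    rr = np.diff(peaks) / fs * 1000.0                    # sample gaps converted to milliseconds
    return rr
```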
In an alternative embodiment, the linear feature extraction processing may be divided into time-domain feature extraction and frequency-domain feature extraction. Accordingly, the linear domain feature data may include time-domain feature data and frequency-domain feature data. Several time-domain and frequency-domain feature extraction methods are given below by way of example. The time-domain feature extraction method may include any one of: Mean RR, MSD, Mean SD, SDNN, SDANN, rMSSD, PNN50, SDSD, NN50, and the like. Mean RR is the mean of the RR intervals and reflects the average level of heart rate variability (HRV). MSD is the mean of the absolute values of the differences between adjacent RR intervals (mean of successive differences). Mean SD is the mean of the standard deviations of the RR intervals. SDNN is the standard deviation of the RR intervals of sinus beats (standard deviation of normal-to-normal intervals). SDANN is the standard deviation of the means of the RR intervals computed over 5-minute segments of the recording. rMSSD is the root mean square of the differences between adjacent RR intervals. PNN50 is the proportion of adjacent sinus RR intervals differing by more than 50 ms relative to the total number of RR intervals. SDSD is the standard deviation of the differences between adjacent RR intervals. NN50 is the number of pairs of adjacent RR intervals differing by more than 50 ms.
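As an illustrative sketch only (assuming the RR intervals are already available in milliseconds), several of the time-domain measures listed above can be computed as follows; this is not the patent's implementation.

```python
import numpy as np

def time_domain_features(rr_ms):
    """Compute a few of the time-domain HRV measures described above from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                                 # successive RR-interval differences
    return {
        "mean_rr": float(rr.mean()),                   # Mean RR
        "sdnn": float(rr.std(ddof=1)),                 # SDNN
        "rmssd": float(np.sqrt(np.mean(diff ** 2))),   # rMSSD
        "sdsd": float(diff.std(ddof=1)),               # SDSD
        "nn50": int(np.sum(np.abs(diff) > 50)),        # NN50
        "pnn50": float(np.mean(np.abs(diff) > 50)),    # PNN50, as a fraction of successive differences
    }
```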
The frequency-domain feature extraction method comprises any one of the following: ULF, VLF, HF, LF, nULF, nVLF, nHF, nLF, and the like. ULF is the ultra-low-frequency component, which reflects the influence of circadian rhythm and neuroendocrine rhythm. VLF is the very-low-frequency component, which is associated with thermoregulation and humoral regulation. HF is the high-frequency component and LF is the low-frequency component. nULF, nVLF, nHF and nLF are the corresponding normalized ultra-low-frequency, very-low-frequency, high-frequency and low-frequency components.
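Again for illustration only, a common way to obtain such band powers is to resample the RR tachogram onto a uniform time grid and estimate the power spectral density; the band edges and normalization below are conventional HRV choices and are assumptions here, not the patent's specification.

```python
import numpy as np
from scipy.signal import welch

def frequency_domain_features(rr_ms, resample_hz=4.0):
    """Rough VLF/LF/HF band powers from an RR-interval series (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                              # beat times in seconds
    t_uniform = np.arange(t[0], t[-1], 1.0 / resample_hz)   # uniform grid for spectral analysis
    rr_uniform = np.interp(t_uniform, t, rr)
    f, psd = welch(rr_uniform - rr_uniform.mean(), fs=resample_hz,
                   nperseg=min(256, len(rr_uniform)))
    df = f[1] - f[0]

    def band_power(lo, hi):
        return float(np.sum(psd[(f >= lo) & (f < hi)]) * df)

    vlf = band_power(0.003, 0.04)   # very low frequency
    lf = band_power(0.04, 0.15)     # low frequency
    hf = band_power(0.15, 0.40)     # high frequency
    return {"vlf": vlf, "lf": lf, "hf": hf,
            "nlf": lf / (lf + hf + 1e-12),   # normalized LF
            "nhf": hf / (lf + hf + 1e-12)}   # normalized HF
```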
In an alternative embodiment, the specific manner of the nonlinear feature extraction processing is not limited in the present invention. For example, the terminal device may analyze the ECG signal (i.e., the ECG data) by means of nonlinear system theory and methods, and obtain the nonlinear domain feature data by processing a Poincaré scatter plot. The nonlinear domain feature extraction is not described in detail or limited in the embodiments of the present invention.
Before step S104 and step S106, training of the N-layer deep neural network DNN and training of the sleep apnea evaluation model are also involved. Which are separately described below.
Firstly, training the N-layer deep neural network DNN.
First, the terminal device acquires training sample data (which may also be referred to herein as target sleep apnea sample data). The training sample data includes physiological data of a preset number of users and disease condition diagnosis result data (referred to as disease condition diagnosis data in the present application) correspondingly given by a doctor. The preset number of users includes one or more groups of users, and the number of each group of users is not limited.
In a specific implementation, the terminal device may acquire physiological data of a preset number of users in the first unit time, and then perform feature extraction on the physiological data, so as to acquire physiological feature data of the preset number of users in the first unit time (i.e., the sleep apnea data referred to above in this application). For the feature extraction, reference may be made to the related explanation in the foregoing step S102, which is not repeated here.
In an optional embodiment, the terminal device may further perform a marking process on the physiological feature data, that is, mark the sleep apnea data to obtain corresponding sleep apnea marker data, where the sleep apnea marker data carries/includes a sleep apnea marker, and the sleep apnea marker is used to indicate whether the user has sleep apnea. Sleep apnea label data is then obtained from the sleep apnea marker data.
Specifically, the terminal device may perform a marking process on the sleep apnea data (i.e., the above physiological feature data) according to a third unit time, so as to obtain a plurality of continuous sleep apnea marker data. The sleep apnea marker or the sleep apnea marker data is used to indicate/reflect whether the user has sleep apnea within a third unit time. The third unit time is set autonomously by the user side or the terminal device side, and the second unit time is greater than the third unit time; for example, the second unit time may be measured in minutes (min) and the third unit time in seconds (s).
The terminal device then divides the plurality of continuous sleep apnea marker data according to the second unit time, thereby obtaining a plurality of sleep apnea sample data of the second unit time and the sleep apnea labels of the sleep apnea sample data. The sleep apnea label of the sleep apnea sample data is used to indicate whether the user has sleep apnea within the second unit time. Each sleep apnea sample data may comprise a plurality of sleep apnea marker data; for example, 1 min of sleep apnea sample data comprises 60 pieces of 1-second sleep apnea marker data. Further, when the terminal device detects that the number of consecutive target sleep apnea marker data occurring within the second unit time exceeds a first threshold, the corresponding sleep apnea sample data or sleep apnea label of that second unit time is used to indicate that the user has sleep apnea within the second unit time; otherwise, it is used to indicate that the user does not have sleep apnea within the second unit time. The target sleep apnea marker data indicates that the user experienced sleep apnea within the third unit time. The first threshold is set by the user side or the terminal device side.
In alternative embodiments, the sleep apnea marker or the sleep apnea tag may be represented in the form of a preset string, a preset numerical value, or the like. For example, the sleep apnea flag may indicate that the user has not experienced sleep apnea within a third unit of time with a "0"; accordingly, the user is represented by "1" as the occurrence of sleep apnea within the third unit time.
In an optional embodiment, the first threshold may be set by a user side or a terminal device side in a self-defined manner, which is not limited in this application.
The following is a detailed description by way of an example. Taking the third unit time as one second and the second unit time as one minute, the terminal device marks the sleep apnea data of the second unit time in units of the third unit time (seconds), obtaining a plurality of continuous sleep apnea marker data; for example, "1" indicates that the user experienced sleep apnea within the current second, and "0" indicates that the user did not. Dividing the data by 60 seconds (1 minute) as the second unit time, the terminal device can then obtain the sleep apnea sample data of each second unit time by statistics. Each sleep apnea sample data may comprise a plurality of sleep apnea marker data; here, one minute of sleep apnea sample data comprises 60 sleep apnea marker data. Further, the terminal device may count whether the number of consecutive "1"s occurring within one minute (i.e., within one second unit time) exceeds a first threshold (e.g., 40). If so, the terminal device may consider that the user experienced sleep apnea within that minute and may mark the sleep apnea sample data of that minute as sleep apnea data, that is, the sleep apnea label of the sleep apnea sample data of that second unit time indicates that the user has sleep apnea within the second unit time; otherwise, the user is considered not to have experienced sleep apnea within that minute, and the sleep apnea sample data of that minute is marked as normal data, that is, the sleep apnea label indicates that the user does not have sleep apnea within the second unit time.
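Purely as a sketch of the labeling rule just described (the threshold of 40 and the helper name are illustrative):

```python
def label_minute(second_marks, threshold=40):
    """Label one minute of per-second 0/1 sleep apnea marks.

    Returns 1 (sleep apnea occurred within this minute) if the longest run of
    consecutive 1s exceeds `threshold`, otherwise 0. The threshold is the example
    value from the text and is adjustable during model training.
    """
    longest = current = 0
    for mark in second_marks:
        current = current + 1 if mark == 1 else 0
        longest = max(longest, current)
    return 1 if longest > threshold else 0

# e.g. a minute containing 45 consecutive apnea-marked seconds is labelled as apnea
minute = [0] * 10 + [1] * 45 + [0] * 5
assert label_minute(minute) == 1
```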
It should be appreciated that the first threshold is adjustable during model training. The first threshold value can be adjusted according to the sleep apnea sample data of the real user, so that the model is better and more accurate.
Secondly, the terminal device needs to create a preset neural network, and the preset neural network design comprises N layers of neural networks.
In an optional embodiment, after the feature extraction, the terminal device may further perform a preprocessing on the physiological feature data (i.e., the sleep apnea data) to obtain processed physiological feature data, so that the terminal device performs subsequent related processing, such as a labeling process, by using the processed physiological feature data.
The preprocessing comprises any one or a combination of the following: data deduplication, abnormal data processing, normalization, format conversion, and the like. Taking abnormal data processing as an example, because the acquired signal may be discontinuous, the data obtained after feature extraction may contain anomalies; the terminal device may therefore remove, from the training sample data, data whose heart rate exceeds 100 beats/minute and data whose heart rate is 0 beats/minute. As another example, for normalization, because the physiological data are of various kinds and are not on a uniform scale, they need to be processed into a uniform format, which is beneficial to model optimization.
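A minimal sketch of this kind of cleaning and normalization is shown below, assuming the extracted features are arranged one row per sample; the outlier rule (heart rate of 0 or above 100 beats/minute) follows the example in the text, while the min-max normalization is only one possible choice.

```python
import numpy as np

def preprocess(samples, heart_rates):
    """Drop samples with implausible heart rates and min-max normalize each feature column.

    samples: (num_samples, num_features) array of extracted feature vectors.
    heart_rates: heart rate (beats/minute) associated with each sample.
    """
    samples = np.asarray(samples, dtype=float)
    heart_rates = np.asarray(heart_rates, dtype=float)
    keep = (heart_rates > 0) & (heart_rates <= 100)       # remove 0 bpm and > 100 bpm samples
    kept = samples[keep]
    col_min, col_max = kept.min(axis=0), kept.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    return (kept - col_min) / span                         # every feature scaled to [0, 1]
```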
In an optional embodiment, after the marking is completed, the terminal device may further perform context association on the sleep apnea sample data.
Specifically, the sleep apnea data of each user have a certain correlation, that is, the sleep apnea sample data of the user at any given time is related to the sleep apnea sample data of the preceding period and of the following period. Therefore, when the terminal device evaluates the sleep apnea sample data of the i-th time period, it may associate it with the sleep apnea sample data of the m time periods before and the n time periods after the i-th time period, where i, m and n are positive integers. That is, in the present application, the target sleep apnea data of each first unit time may specifically include the sleep apnea data from the (i-n)-th second unit time to the (i+m)-th second unit time.
Assuming that the format of the input data supported by the pattern recognition model is a column vector, the terminal device may combine the L x 1 column vector Dn of the i-th time period with the sleep apnea sample data of the preceding m time periods and the following n time periods into an L x (m + n + 1) matrix, and then convert the matrix into a one-dimensional column vector, which is finally used by the terminal device as the sleep apnea sample data characterizing the i-th time period (i.e., the sleep apnea sample data of the i-th first unit time). The conversion of matrix or vector data is not detailed in the present invention; to ensure the consistency of matrix/vector dimensions during data processing, zero padding may be adopted, i.e., zeros are added in front of or behind a group of data. The sleep apnea sample data here may be the sleep apnea sample data obtained after marking and preprocessing. It can be understood that, because the occurrence times of apnea events are contextually correlated, performing this context association processing on the apnea sample data makes the model obtained from the apnea sample data more accurate.
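A sketch of this context association and flattening is given below, assuming each period's sample data is an L-dimensional column and using zero padding at the edges of the recording as suggested above; the names and the exact window convention are illustrative.

```python
import numpy as np

def context_vector(samples, i, m, n, feature_dim):
    """Build the context-associated input vector for the i-th period.

    samples: sequence indexed by period, each entry a (feature_dim,)-shaped column
    of sleep apnea sample data. The m periods before and n periods after period i
    are stacked into an L x (m + n + 1) matrix and flattened column by column;
    periods outside the recording are zero-padded to keep dimensions consistent.
    """
    columns = []
    for j in range(i - m, i + n + 1):
        if 0 <= j < len(samples):
            columns.append(np.asarray(samples[j], dtype=float))
        else:
            columns.append(np.zeros(feature_dim))        # zero padding at the boundaries
    matrix = np.stack(columns, axis=1)                   # shape (L, m + n + 1)
    return matrix.reshape(-1, order="F")                 # head-to-tail concatenation of the columns
```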
It should be noted that the i-th time period and the m and n time periods before and after it use the same measurement unit as the second unit time, for example minutes; this is described in detail below by way of an example.
It should be noted that the marking, preprocessing and context-association processes described above may be placed inside the pattern recognition model for processing, or may be placed outside the pattern recognition model. It can be understood that, when they are placed inside the pattern recognition model, the sleep apnea sample data input into the pattern recognition model is sleep apnea sample data that has not yet been marked, preprocessed or context-associated.
It should be understood that, in the data processing process, a sleep apnea tag carried in the sleep apnea sample data (specifically, the sleep apnea sample data is obtained after the terminal device finishes tagging) does not change with an intermediate processing process of data (such as preprocessing, context association processing, and the like).
And finally, the terminal equipment trains the preset neural network by using the target sleep apnea sample data, so as to obtain the trained N-layer deep neural network.
It should be understood that the target sleep apnea sample data may be sleep apnea data extracted from the above features, may also be labeled sleep apnea data, or may also be context-dependent sleep apnea data, and the like, which is not limited in this application.
Taking the target sleep apnea sample data as the data after context association as an example, the terminal device inputs the context-associated target sleep apnea sample data into the preset neural network for training, thereby obtaining the trained N-layer deep neural network. The training of the N-layer deep neural network is not described in detail in this application.
Secondly, training the sleep apnea evaluation model.
First, the terminal device acquires training sample data (which may also be referred to herein as first sleep apnea sample data). The training sample data includes physiological data of a preset number of users and disease condition diagnosis result data (referred to as disease condition diagnosis data in the present application) correspondingly given by a doctor. The preset number of users includes one or more groups of users, and the number of each group of users is not limited.
For the description of the first sleep apnea sample data, reference may be made to the aforementioned description of the target sleep apnea sample data, and details are not repeated here. In this application, the first sleep apnea sample data may be context-dependent first sleep apnea sample data.
The first sleep apnea sample data and the target sleep apnea sample data may also be sleep apnea sample data of the same group of users, or may also be sleep apnea sample data of different groups of users. That is, the first sleep apnea sample data and the target sleep apnea sample data may be the same or different, and the application is not limited thereto.
Secondly, the terminal equipment inputs the first sleep apnea sample data into the trained N-layer neural network, so as to obtain output data of an intermediate layer of the N-layer neural network. Further, the output data may be taken as a second sleep apnea sample data.
In an alternative embodiment, the intermediate layer is any one or more layers from the first layer to the N-1 th layer in the N-layer neural network, and the application is not limited thereto. Preferably, to realize deep extraction of feature data and guarantee accuracy of model training, the intermediate layer may be an N-1 th layer in the N layers of DNNs.
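As an illustration of reusing an intermediate layer (for example the (N-1)-th) as a feature extractor, the following Keras sketch builds a small fully connected network together with a submodel that emits the (N-1)-layer activations; the architecture, layer sizes and input dimension are assumptions for illustration only, not the patent's network.

```python
import tensorflow as tf

N_FEATURES = 19 * (3 + 3 + 1)   # assumed L x (m + n + 1) flattened input, with L = 19 and m = n = 3

inputs = tf.keras.Input(shape=(N_FEATURES,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
deep = tf.keras.layers.Dense(32, activation="relu", name="layer_n_minus_1")(x)  # layer N-1
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(deep)                  # layer N: apnea / no apnea
dnn = tf.keras.Model(inputs, outputs)
dnn.compile(optimizer="adam", loss="binary_crossentropy")
# ... dnn.fit(first_sample_data, labels, ...) would be called here ...

# Submodel whose output is the (N-1)-th layer: its activations serve as the
# "second sleep apnea sample data" fed to the evaluation model alongside the raw features.
feature_extractor = tf.keras.Model(inputs, deep)
```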
It should be understood that, during data processing, the sleep apnea tag carried in the first sleep apnea sample data (specifically, the tag generated after the terminal device finishes tagging the first sleep apnea sample data) does not change with intermediate processing of the data (such as preprocessing, context association, and the like). That is, the sleep apnea tag carried in the first sleep apnea sample data is the same as the sleep apnea tag carried in the second sleep apnea sample data, which is not described in detail here.
Then, the terminal device needs to create a pattern recognition model to be trained. The pattern recognition model may be used to assess the sleep apnea condition of the user and may be a mathematical model related to a time series. The mathematical model includes, but is not limited to, a decision tree algorithm model, a Support Vector Machine (SVM) algorithm model, a Random Forest (RF) algorithm model, a Logistic Regression (LR) model, a Conditional Random Field (CRF) model, a Boosting Tree algorithm model, a neural network algorithm model, a k-Nearest Neighbor (KNN) algorithm model, a gradient descent algorithm model, and the like.
Finally, the terminal device can train the pattern recognition model by using the first sleep apnea sample data and the second sleep apnea sample data, thereby obtaining the trained sleep apnea evaluation model.
Specifically, the terminal device may input the first sleep apnea sample data and the second sleep apnea sample data to the pattern recognition model at the same time for model training to obtain the sleep apnea evaluation model.
The sleep apnea evaluation model can be referred to the related description of the pattern recognition model, and is not described herein.
To aid understanding, take the pattern recognition model being a Random Forest (RF) algorithm model as an example, with ECG data of 40 users obtained as training sample data, the training sample data including 10 sets of physiological data records for each of healthy, severe, moderate and mild users. Referring to the related description in the foregoing embodiments, the terminal device may perform feature extraction and marking on the 40 sets of ECG data respectively, so that a plurality of sleep apnea sample data in units of the second unit time (minutes) (i.e., the first sleep apnea sample data of the first unit time in this application) carry a sleep apnea label. Assuming that the terminal device extracts 19-dimensional feature data from each set of ECG data during feature extraction, each sleep apnea sample data may be represented as a 19 x 1 column vector. Optionally, the terminal device may preprocess each sleep apnea sample data, for example screening out abnormal data and normalizing. Further, the terminal device may also perform context association on the sleep apnea sample data, for example associating the data of the m minutes before and the n minutes after the i-th minute, that is, associating the first sleep apnea sample data from the (i-m)-th minute to the (i+n)-th minute, to obtain an L x (m + n + 1) matrix. The matrix is then spliced head to tail and converted into a one-dimensional column vector of length L x (m + n + 1), where L is 19; i, m and n may be positive integers autonomously set by the user side or the terminal device side.
Then, the terminal device may input the first sleep apnea sample data of the first unit time into a N-layer deep neural network DNN, thereby obtaining output data of an intermediate layer of the DNN as second sleep apnea sample data. For example, the output data (i.e., the output result) of the (N-1) th layer is used as the second sleep apnea sample data.
Then, the terminal device creates an RF model and trains it by inputting the first sleep apnea sample data and the second sleep apnea sample data at the same time, thereby obtaining the trained RF model. Specifically, the number T of RF decision trees and the decision threshold of each decision tree are set, and T new bootstrap sample sets are randomly drawn with replacement by using the bootstrap resampling technique, thereby constructing T classification trees. Each classification tree in the RF is a binary tree, and node splitting and growth (or stopping) proceed from the top down. Further, there are M_all variables in total, for example M_all = L x (m + n + 1) + k, where k is the dimension/length of the second sleep apnea sample data (similar to the first sleep apnea sample data described above, and not detailed here). At each node of each tree, the terminal device randomly draws M_try variables, selects the variable with the strongest splitting capability among them, and judges whether the node should be split according to the Gini criterion: when the result of the Gini criterion is smaller than a preset threshold, the node is not split; otherwise, splitting continues. According to this principle, the RF model can be trained with the first sleep apnea sample data and the second sleep apnea sample data of the 40 users, thereby obtaining an RF model composed of T decision trees. The Gini criterion and the construction of RF models are not described in further detail in the present invention.
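For illustration, a scikit-learn sketch of this training step is shown below; `first_samples`, `labels` and `feature_extractor` are assumed to come from the earlier sketches (context-associated feature vectors, minute-level labels and the (N-1)-layer submodel), and the tree count and per-split variable rule are illustrative stand-ins for T and M_try.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Deep features from the (N-1)-th DNN layer become the "second sleep apnea sample data".
second_samples = feature_extractor.predict(first_samples)           # shape (num_samples, k)
combined = np.concatenate([first_samples, second_samples], axis=1)  # M_all = L*(m+n+1) + k variables

rf = RandomForestClassifier(
    n_estimators=100,      # T classification trees
    criterion="gini",      # Gini splitting criterion
    max_features="sqrt",   # a random subset of variables considered per split (M_try)
    bootstrap=True,        # bootstrap resampling with replacement
    random_state=0,
)
rf.fit(combined, labels)   # labels: 0/1 sleep apnea labels from the marking rule above
```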
Accordingly, in step S104, the terminal device inputs the first target sleep apnea data into the N-layer deep neural network DNN, so as to obtain output data of the intermediate layer of the DNN, and uses the output data as second target sleep apnea data. The intermediate layer may be any one or more of the first layer to the N-1 st layer of the N-layer DNN, which is not described in detail herein.
In step S106, the terminal device may input the first target sleep apnea data and the second target sleep apnea data into the sleep apnea assessment model at the same time and perform disease diagnosis and assessment with the model, thereby obtaining a sleep apnea assessment result. The sleep apnea assessment result is used to indicate whether the user has sleep apnea within the second unit time. This is not described in detail here.
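Continuing the illustrative sketches above (all names reused from them are assumptions, not the patent's interfaces), evaluating one window of target data could look as follows:

```python
import numpy as np

# target_samples: per-minute 19-dimensional feature columns for the user being evaluated
# current_period: index i of the minute (second unit time) under evaluation
first_target = context_vector(target_samples, i=current_period, m=3, n=3, feature_dim=19)
second_target = feature_extractor.predict(first_target.reshape(1, -1))            # deep features
model_input = np.concatenate([first_target.reshape(1, -1), second_target], axis=1)
apnea_occurred = bool(rf.predict(model_input)[0])   # the sleep apnea evaluation result
```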
In an alternative embodiment, the terminal device may obtain a plurality of sleep apnea evaluation results, that is, whether the user has sleep apnea in a plurality of second unit times, by using the sleep apnea evaluation principle. Further, the terminal device may determine a sleep apnea condition level to which the user belongs according to the plurality of sleep apnea evaluation results. The plurality of second unit times may be a plurality of continuous second unit times, for example, within a few consecutive minutes, and the plurality of sleep apnea evaluation results may be sleep apnea evaluation results calculated by the terminal device according to sleep apnea data per minute by using the sleep apnea evaluation model.
In an alternative embodiment, the sleep apnea disorder level is used to reflect whether the user has a sleep apnea disorder and the state of the condition in which the user has a sleep apnea disorder. For example, the sleep apnea disorders may be ranked according to the health status of the user as follows: health grade, severity grade, moderate grade, mild grade, etc., and the present application is not limited thereto.
Specifically, in an embodiment, the terminal device may determine, according to the plurality of sleep apnea evaluation results, the number of sleep apnea evaluation results corresponding to occurrence of sleep apnea of the user; the level of sleep apnea disorders to which the user belongs is then determined according to the threshold interval in which the number lies. For example, if the number is in a first preset threshold interval (greater than 10), the level of the sleep apnea disorders to which the user belongs is determined to be a severe level, and the like.
In yet another embodiment, the terminal device may calculate the frequency of sleep apnea for the user according to the plurality of sleep apnea evaluation results, for example the number of times Y that the user has sleep apnea within a fourth unit time (e.g., an hour), and then determine the sleep apnea condition level to which the user belongs according to the threshold interval in which the frequency lies. For example, if the terminal device, using the sleep apnea evaluation model, counts that the number Y of times the user has sleep apnea within each hour is within a first threshold interval (e.g., Y is greater than or equal to 30), it may determine that the user has a sleep apnea condition at a severe level. Accordingly, if Y is within a second threshold interval (e.g., Y is greater than 15 and less than 30), the user may be determined to have a sleep apnea condition at a moderate level. If Y is within a third threshold interval (e.g., Y is greater than 5 and less than 15), the user may be determined to have a sleep apnea condition at a mild level. If Y is within a fourth threshold interval (e.g., Y is less than 5), it may be determined that the user does not suffer from a sleep apnea condition and is at a healthy level.
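A sketch of this grading rule (boundary cases such as Y = 5 or Y = 15 are not specified in the text, so their handling here is an assumption):

```python
def sleep_apnea_level(apneas_per_hour):
    """Map the hourly count Y of apnea evaluation results to a condition level."""
    if apneas_per_hour >= 30:
        return "severe"
    if apneas_per_hour > 15:
        return "moderate"
    if apneas_per_hour > 5:
        return "mild"
    return "healthy"

assert sleep_apnea_level(32) == "severe"
assert sleep_apnea_level(3) == "healthy"
```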
In an optional embodiment, the terminal device may be an internet device such as a user device, a smartphone (e.g., an Android phone or an iOS phone), a personal computer, a tablet computer, a palmtop computer, a Mobile Internet Device (MID) or a wearable smart device, and the embodiment of the present invention is not limited thereto.
It should be noted that the present application combines the N-layer deep neural network DNN with the sleep apnea evaluation model to evaluate the user's sleep apnea condition because the sleep apnea evaluation model (such as an RF model) is an effective pattern recognition/classification model with low time complexity, high recognition efficiency and high accuracy, but it evaluates each feature separately, so the relevance among multiple sets of feature data is easily ignored, as is the deep abstract feature information hidden in each set of feature data. In addition, the features of the user's physiological data (such as physiological characteristic data, ECG data and the like) are not independent, so analyzing the user's physiological data with the sleep apnea evaluation model alone gives poor results. The DNN can compensate for these drawbacks of the sleep apnea evaluation model: it is a classification model related to time series, and its special network structure can be used to extract deep features from the data, so the extracted feature data is a deep abstraction of the original data and, combined with the original data, can make up for some shortcomings of the sleep apnea evaluation model in decision-making. This is not described in further detail here.
Fig. 2 is a schematic flow chart of a method for evaluating disease symptoms according to an embodiment of the present invention. The method as shown in fig. 2 may comprise the following implementation steps:
step S202, the terminal device obtains sleep apnea data of a preset number of users.
Step S204, the terminal device performs a labeling process on the sleep apnea data based on a third unit time, so as to obtain the sleep apnea labeling data including one or more sleep apnea labels, wherein the second unit time includes a plurality of third unit times;
step S206, the terminal equipment obtains the preset number of first sleep apnea sample data and the sleep apnea labels of the first sleep apnea sample data according to the sleep apnea marking data and the second unit time;
wherein, in the case that the number of the continuous sleep apnea marks occurring in the second unit time exceeds a preset threshold, the sleep apnea label is used for indicating that the user corresponding to the sleep apnea sample data has sleep apnea occurring in the second unit time; otherwise, the sleep apnea label is used for indicating that the user corresponding to the sleep apnea sample data does not have sleep apnea within the second unit time.
That is, the terminal device may obtain first sleep apnea sample data of a first unit time, where the first sleep apnea sample data includes a sleep apnea label for indicating whether a user corresponding to the sleep apnea sample data has sleep apnea within the second unit time.
Step S208, the terminal device inputs the first sleep apnea sample data into a Deep Neural Network (DNN) of N layers to obtain output data of a middle layer of the DNN as second sleep apnea sample data.
Step S210, the terminal device takes the first sleep apnea sample data and the second sleep apnea sample data as input at the same time, and trains a pattern recognition model to obtain the sleep apnea evaluation model.
Step S212, the terminal device acquires first target sleep apnea data in first unit time.
In an alternative embodiment, the first target sleep apnea data comprises sleep apnea data for a second unit of time, the first unit of time comprises the second unit of time and the first unit of time is greater than the second unit of time.
In an alternative embodiment, the first target sleep apnea data includes sleep apnea data from the (i-n)-th second unit time to the (i+m)-th second unit time, where i is a positive integer, n is a positive integer, and m is a positive integer.
Step S214, the terminal device inputs the first target sleep apnea data into N layers of Deep Neural Networks (DNN), and obtains intermediate layer output data of the DNN as second target sleep apnea data, wherein N is a positive integer.
Step S216, the terminal device inputs the first target sleep apnea data and the second target sleep apnea data into a sleep apnea evaluation model at the same time to obtain a sleep apnea evaluation result, and the sleep apnea evaluation result is used for indicating whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time.
Step S218, the terminal equipment determines the sleep apnea symptom grade of the user according to a plurality of sleep apnea evaluation results; wherein the sleep apnea disorder level comprises any one of: health grade, severe grade, moderate grade, mild grade.
For parts which are not shown and not described in the embodiment of the present invention, reference may be made to the description related to the embodiment described in fig. 1, which is not described herein again.
The embodiment of the invention also provides a terminal device, which includes units configured to execute the method of any of the foregoing embodiments. Specifically, referring to fig. 3, it is a schematic block diagram of a terminal device according to an embodiment of the present invention. The terminal device 300 of the present embodiment includes: an acquisition unit 302 and a processing unit 304; wherein:
the acquiring unit 302 is configured to acquire first target sleep apnea data of a first unit time;
the obtaining unit 302 is further configured to input the first target sleep apnea data into a N-layer deep neural network DNN, and obtain middle layer output data of the DNN as second target sleep apnea data, where N is a positive integer;
the processing unit 304 is configured to input the first target sleep apnea data and the second target sleep apnea data into a sleep apnea assessment model at the same time, so as to obtain a sleep apnea assessment result, where the sleep apnea assessment result is used to indicate whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time.
In an alternative embodiment, the intermediate layer of DNN is the N-1 st layer of DNN.
In an alternative embodiment, the first target sleep apnea data comprises sleep apnea data for a second unit of time, the first unit of time comprises the second unit of time and the first unit of time is greater than the second unit of time.
In an alternative embodiment, the first target sleep apnea data includes sleep apnea data from the (i-n)-th second unit time to the (i+m)-th second unit time, where i is a positive integer, n is a positive integer, and m is a positive integer.
In an alternative embodiment, the obtaining unit 302 is further configured to obtain the sleep apnea assessment model.
In an alternative embodiment,
the obtaining unit 302 is configured to obtain a preset number of first sleep apnea sample data of the first unit time;
the obtaining unit 302 is further configured to input the first sleep apnea sample data into a N-layer deep neural network DNN, and obtain middle layer output data of the DNN as second sleep apnea sample data;
the processing unit 304 is configured to train a preset recognition model by taking the first sleep apnea sample data and the second sleep apnea sample data as input at the same time, so as to obtain the sleep apnea assessment model.
In an alternative embodiment,
the acquiring unit 302 is configured to acquire sleep apnea data of a preset number of users;
the processing unit 304 is further configured to perform a marking process on the sleep apnea data based on a third unit time, so as to obtain sleep apnea marking data comprising one or more sleep apnea marks, wherein the second unit time comprises a plurality of the third unit times;
the processing unit 304 is further configured to obtain the preset number of first sleep apnea sample data according to the sleep apnea marking data and the second unit time, where the first sleep apnea sample data includes a sleep apnea label; under the condition that the number of the continuous sleep apnea marks occurring in the second unit time exceeds a preset threshold, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data has sleep apnea in the second unit time; otherwise, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data does not have sleep apnea within the second unit time.
In an alternative embodiment,
the processing unit 304 is further configured to determine a sleep apnea condition level to which the user belongs according to a plurality of the sleep apnea assessment results; wherein the sleep apnea disorder level comprises any one of: health grade, severe grade, moderate grade, mild grade.
Details that are not shown or not described in the embodiments of the present invention may specifically refer to the related explanations in the embodiments of the method described in fig. 1-fig. 2, and are not described herein again.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention. The terminal device 300 of the present embodiment includes: at least one processor 601, a communication interface 602, a user interface 603 and a memory 604, wherein the processor 601, the communication interface 602, the user interface 603 and the memory 604 can be connected by a bus or other means, and the embodiment of the present invention is exemplified by being connected by the bus 605. Wherein,
processor 601 may be a general-purpose processor, such as a Central Processing Unit (CPU).
The communication interface 602 may be a wired interface (e.g., an Ethernet interface) or a wireless interface (e.g., a cellular network interface or a wireless local area network interface) for communicating with other electronic devices or websites.
The user interface 603 may specifically be a touch panel, including a touch screen, for detecting operation instructions on the touch panel; the user interface 603 may also be a physical button or a mouse. The user interface 603 may further be a display screen for outputting and displaying images or data.
The memory 604 may include a volatile memory, such as a Random Access Memory (RAM); the memory may also include a non-volatile memory, such as a Read-Only Memory (ROM), a Flash Memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); the memory 604 may also comprise a combination of the above types of memory. The memory 604 is used for storing a set of program codes, and the processor 601 is used for calling the program codes stored in the memory 604 and performing the following operations:
acquiring first target sleep apnea data of a first unit time;
inputting the first target sleep apnea data into an N-layer deep neural network, and acquiring intermediate layer output data of the deep neural network as second target sleep apnea data, where N is a positive integer;
and simultaneously inputting the first target sleep apnea data and the second target sleep apnea data into a sleep apnea evaluation model to obtain a sleep apnea evaluation result, wherein the sleep apnea evaluation result is used for indicating whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time.
In some possible embodiments, the intermediate layer of the deep neural network is the (N-1)-th layer of the deep neural network.
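As a hedged, self-contained sketch of this step (with random weights standing in for a trained network, and layer sizes chosen only for illustration), the intermediate layer output can be obtained by keeping every layer's activations during the forward pass:

import numpy as np

def dnn_forward(x, weights, biases):
    """Forward pass through an N-layer DNN, returning the activations of every
    layer so that the (N-1)-th layer output can be reused as features."""
    activations, h = [], x
    for k, (w, b) in enumerate(zip(weights, biases)):
        h = h @ w + b
        # ReLU on hidden layers, sigmoid on the N-th (output) layer.
        h = np.maximum(h, 0.0) if k < len(weights) - 1 else 1.0 / (1.0 + np.exp(-h))
        activations.append(h)
    return activations

rng = np.random.default_rng(0)
sizes = [300, 64, 32, 16, 1]  # input dimension followed by N = 4 layers
weights = [0.05 * rng.normal(size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

first_target = rng.random((1, 300))            # first unit time of apnea data
acts = dnn_forward(first_target, weights, biases)
second_target = acts[-2]                       # (N-1)-th layer output
fused = np.concatenate([first_target, second_target], axis=1)
# 'fused' is what would be passed to the trained sleep apnea assessment model.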
In some possible embodiments, the first target sleep apnea data comprises sleep apnea data for a second unit of time, the first unit of time comprises the second unit of time and the first unit of time is greater than the second unit of time.
In some possible embodiments, the first target sleep apnea data includes sleep apnea data from the (i-n)-th second unit time to the (i+m)-th second unit time, where i is a positive integer, n is a positive integer, and m is a positive integer.
In some possible embodiments, before the first target sleep apnea data and the second target sleep apnea data are input into the sleep apnea assessment model, the processor 601 is further configured to:
obtaining the sleep apnea assessment model.
In some possible embodiments, the obtaining the sleep apnea assessment model comprises:
acquiring a preset number of first sleep apnea sample data of the first unit time;
inputting the first sleep apnea sample data into an N-layer deep neural network, and acquiring intermediate layer output data of the deep neural network as second sleep apnea sample data;
and training a preset recognition model by simultaneously inputting the first sleep apnea sample data and the second sleep apnea sample data, so as to obtain the sleep apnea evaluation model.
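A minimal training sketch under stated assumptions follows; the disclosure does not specify the preset recognition model, so ordinary logistic regression from scikit-learn is used purely as a stand-in, and the synthetic arrays merely mimic the shapes produced by the earlier sketches.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
first_samples = rng.random((200, 300))          # raw first-unit-time windows
second_samples = rng.random((200, 16))          # matching DNN intermediate-layer outputs
labels = rng.integers(0, 2, size=200)           # labels produced by the marking step

# "Inputting both kinds of sample data at the same time" is realised here as
# feature concatenation before fitting the stand-in recognition model.
X = np.concatenate([first_samples, second_samples], axis=1)
assessment_model = LogisticRegression(max_iter=1000).fit(X, labels)

print(assessment_model.predict(X[:5]))          # per-window sleep apnea assessment results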
In some possible embodiments, said obtaining a preset number of first sleep apnea sample data for said first unit of time comprises:
acquiring sleep apnea data of a preset number of users;
marking the sleep apnea data based on a third unit time, thereby obtaining sleep apnea marking data comprising one or more sleep apnea marks, wherein the second unit time comprises a plurality of the third unit times;
obtaining the preset number of first sleep apnea sample data according to the sleep apnea marking data and the second unit time, wherein the first sleep apnea sample data comprises a sleep apnea label; under the condition that the number of the continuous sleep apnea marks in the second unit time exceeds a preset threshold, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data has sleep apnea in the second unit time; otherwise, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data does not have sleep apnea within the second unit time.
In some possible embodiments, after the first target sleep apnea data and the second target sleep apnea data are simultaneously input to a sleep apnea evaluation model to obtain a sleep apnea evaluation result, the processor 601 is further configured to:
determining a sleep apnea condition level to which the user belongs according to a plurality of the sleep apnea evaluation results;
wherein the sleep apnea condition level comprises any one of: a health level, a severe level, a moderate level, or a mild level.
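To make the grading step concrete, here is a small Python sketch; the disclosure does not state how the four levels are delimited, so the events-per-hour cut-offs below are assumptions loosely modelled on common apnea-hypopnea index ranges.

def sleep_apnea_level(results, hours):
    """Map per-window sleep apnea assessment results (0/1) collected over one
    night to a condition level; the cut-offs are illustrative assumptions."""
    events_per_hour = sum(results) / max(hours, 1e-6)
    if events_per_hour >= 30:
        return "severe"
    if events_per_hour >= 15:
        return "moderate"
    if events_per_hour >= 5:
        return "mild"
    return "health"

print(sleep_apnea_level([1, 0, 0, 1] * 20, hours=8))  # 40 events / 8 h -> 'mild'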
In a further embodiment of the invention, a computer-readable storage medium is provided, which stores a computer program comprising program instructions that, when executed by a processor, implement all or part of the steps of the method embodiments described above.
The computer readable storage medium may be an internal storage unit of the terminal according to any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the part of the technical solution of the present invention that essentially contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A method of condition assessment, the method comprising:
acquiring first target sleep apnea data of a first unit time;
inputting the first target sleep apnea data into an N-layer deep neural network (DNN), and acquiring intermediate layer output data of the DNN as second target sleep apnea data, wherein N is a positive integer;
and simultaneously inputting the first target sleep apnea data and the second target sleep apnea data into a sleep apnea evaluation model to obtain a sleep apnea evaluation result, wherein the sleep apnea evaluation result is used for indicating whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time.
2. The method of claim 1, wherein the intermediate layer of the DNN is the (N-1)-th layer of the DNN.
3. The method of claim 2, wherein the first target sleep apnea data comprises sleep apnea data for a second unit of time, wherein the first unit of time comprises the second unit of time and wherein the first unit of time is greater than the second unit of time.
4. The method of claim 3, wherein the first target sleep apnea data comprises sleep apnea data from the (i-n)-th second unit time to the (i+m)-th second unit time, wherein i is a positive integer, n is a positive integer, and m is a positive integer.
5. The method of any of claims 1-4, wherein prior to simultaneously inputting the first target sleep apnea data and the second target sleep apnea data into a sleep apnea assessment model, the method further comprises:
obtaining the sleep apnea assessment model.
6. The method of claim 5, wherein the obtaining the sleep apnea assessment model comprises:
acquiring a preset number of first sleep apnea sample data of the first unit time;
inputting the first sleep apnea sample data into the DNN, and acquiring intermediate layer output data of the DNN as second sleep apnea sample data;
and training a preset recognition model by simultaneously inputting the first sleep apnea sample data and the second sleep apnea sample data, to obtain the sleep apnea evaluation model.
7. The method of claim 6, wherein said obtaining a preset number of first sleep apnea sample data for said first unit of time comprises:
acquiring sleep apnea data of a preset number of users;
marking the sleep apnea data based on a third unit time, thereby obtaining sleep apnea marking data comprising one or more sleep apnea marks, wherein the second unit time comprises a plurality of the third unit times;
obtaining the preset number of first sleep apnea sample data according to the sleep apnea marking data and the second unit time, wherein the first sleep apnea sample data comprises a sleep apnea label; under the condition that the number of the continuous sleep apnea marks in the second unit time exceeds a preset threshold, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data has sleep apnea in the second unit time; otherwise, the sleep apnea label is used for indicating that the user corresponding to the first sleep apnea sample data does not have sleep apnea within the second unit time.
8. The method of claim 7, wherein after inputting the first target sleep apnea data and the second target sleep apnea data into a sleep apnea assessment model simultaneously to obtain a sleep apnea assessment result, the method further comprises:
determining a sleep apnea condition level to which the user belongs according to a plurality of the sleep apnea evaluation results;
wherein the sleep apnea condition level comprises any one of: a health level, a severe level, a moderate level, or a mild level.
9. A terminal device, comprising an acquisition unit and a processing unit, wherein,
the acquisition unit is used for acquiring first target sleep apnea data of a first unit time;
the acquisition unit is further configured to input the first target sleep apnea data into an N-layer deep neural network (DNN), and acquire intermediate layer output data of the DNN as second target sleep apnea data, where N is a positive integer;
the processing unit is configured to input the first target sleep apnea data and the second target sleep apnea data to a sleep apnea assessment model at the same time, so as to obtain a sleep apnea assessment result, where the sleep apnea assessment result is used to indicate whether a user corresponding to the first target sleep apnea data has sleep apnea within the first unit time.
10. A terminal device, comprising: a processor, a memory, a communication interface, and a bus; the processor, the memory and the communication interface are connected through the bus and complete mutual communication; the memory stores executable program code; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory to perform the method of any one of claims 1-8.
11. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-8.
CN201711468158.1A 2017-12-27 2017-12-27 Illness appraisal procedure, terminal device and computer-readable medium Pending CN108198617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711468158.1A CN108198617A (en) 2017-12-27 2017-12-27 Illness appraisal procedure, terminal device and computer-readable medium

Publications (1)

Publication Number Publication Date
CN108198617A true CN108198617A (en) 2018-06-22

Family ID: 62585540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711468158.1A Pending CN108198617A (en) 2017-12-27 2017-12-27 Illness appraisal procedure, terminal device and computer-readable medium

Country Status (1)

Country Link
CN (1) CN108198617A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106361277A (en) * 2016-08-26 2017-02-01 中山大学 Sleep apnea syndrome assessment method based on electrocardiogram signals
CN107491638A (en) * 2017-07-28 2017-12-19 深圳和而泰智能控制股份有限公司 A kind of ICU user's prognosis method and terminal device based on deep learning model

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109540393A (en) * 2018-12-11 2019-03-29 东风马勒热系统有限公司 A kind of plugging device for intercooler sealing propertytest
CN110151169A (en) * 2019-07-04 2019-08-23 中山大学 A kind of sleep state method for identifying and classifying based on electrocardiogram (ECG) data
CN114795133A (en) * 2022-06-29 2022-07-29 华南师范大学 Sleep apnea detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108091391A (en) Illness appraisal procedure, terminal device and computer-readable medium
CN107595243B (en) Disease evaluation method and terminal equipment
Çınar et al. Classification of normal sinus rhythm, abnormal arrhythmia and congestive heart failure ECG signals using LSTM and hybrid CNN-SVM deep neural networks
JP2022523741A (en) ECG processing system for depiction and classification
US20190328251A1 (en) Arrhythmia detection method, arrhythmia detection device and arrhythmia detection system
US20230335289A1 (en) Systems and methods for generating health risk assessments
CN108198617A (en) Illness appraisal procedure, terminal device and computer-readable medium
Poddar et al. Automated diagnosis of coronary artery diseased patients by heart rate variability analysis using linear and non-linear methods
WO2015185927A1 (en) Monitoring adherence to healthcare guidelines
CN108305688A (en) Illness appraisal procedure, terminal device and computer-readable medium
Zhang et al. A simple self-supervised ECG representation learning method via manipulated temporal–spatial reverse detection
Chen et al. Edge2Analysis: a novel AIoT platform for atrial fibrillation recognition and detection
CN114190897B (en) Training method of sleep stage model, sleep stage method and device
Liu et al. Beatquency domain and machine learning improve prediction of cardiovascular death after acute coronary syndrome
Saini et al. K-nearest neighbour-based algorithm for P-and T-waves detection and delineation
CN109674474A (en) Sleep apnea recognition methods, equipment and computer-readable medium
CN108182974B (en) Disease evaluation method, terminal device and computer readable medium
EP4041073A1 (en) Systems and methods for electrocardiogram diagnosis using deep neural networks and rule-based systems
EP3605553A1 (en) Systems and methods for unobtrusive digital health assessment
Brohet Fragmentation of the QRS complex: the latest electrocardiographic craze?
CN109770904A (en) Monitoring method, device, computer equipment and the storage medium of apnea
CN117292829A (en) Graded diagnosis and treatment information system for coronary heart disease
Das et al. Interpretation of EKG with image recognition and convolutional neural networks
Singh et al. A new approach for identification of heartbeats in multimodal physiological signals
WO2021098461A1 (en) Heart state evaluation method and apparatus, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180622)