Disclosure of Invention
Embodiments of the present application provide a training method and apparatus for a sleep state recognition model, a sleep state recognition method, an electronic device, and a computer-readable storage medium, which can solve the problem that monitoring a sleep state through physiological data collected by a wearable device has low accuracy.
In a first aspect, an embodiment of the present application provides a training method for a sleep state recognition model, including:
acquiring physiological data collected by a wearable device, wherein the physiological data is non-electroencephalogram data;
acquiring electroencephalogram data collected by an electroencephalogram monitoring device, wherein the electroencephalogram data and the physiological data are collected for the same target object in the same time period;
performing sleep state recognition on the electroencephalogram data to obtain a first sleep state recognition result;
performing sleep state class labeling on the physiological data by using the first sleep state recognition result to obtain labeled physiological data, wherein the labeled physiological data includes a sleep state class label;
and training a to-be-trained sleep state recognition model by using the labeled physiological data, or by using a first physiological parameter feature and the labeled physiological data, to obtain a trained sleep state recognition model, wherein the first physiological parameter feature is a feature extracted from the physiological data.
As can be seen from the above, in the embodiments of the present application, the sleep state recognition result based on the electroencephalogram data is used to label the sleep state class of the non-electroencephalogram physiological data, so that the sleep state class label of the physiological data is more accurate; accordingly, the sleep state recognition model trained on the labeled physiological data achieves higher recognition accuracy on the physiological data collected by the wearable device.
In some possible implementations of the first aspect, training the to-be-trained sleep state recognition model by using the labeled physiological data, or by using the first physiological parameter feature and the labeled physiological data, to obtain the trained sleep state recognition model includes:
inputting a second physiological parameter feature extracted from the labeled physiological data, or inputting the first physiological parameter feature and the labeled physiological data, into the to-be-trained sleep state recognition model, and obtaining a second sleep state recognition result output by the to-be-trained sleep state recognition model;
adjusting network parameters of the to-be-trained sleep state recognition model according to the second sleep state recognition result and the sleep state class label;
and obtaining the trained sleep state recognition model after multiple rounds of iterative training.
In some possible implementations of the first aspect, training the to-be-trained sleep state recognition model by using the labeled physiological data, or by using the first physiological parameter feature and the labeled physiological data, to obtain the trained sleep state recognition model includes:
selecting a first target feature from the first physiological parameter features, or selecting a second target feature from second physiological parameter features, wherein the second physiological parameter features are features extracted from the labeled physiological data;
inputting the second target feature, or the first target feature and the labeled physiological data, into the to-be-trained sleep state recognition model, and obtaining a third sleep state recognition result output by the to-be-trained sleep state recognition model;
adjusting network parameters of the to-be-trained sleep state recognition model according to the third sleep state recognition result and the sleep state class label, and obtaining a target sleep state recognition model after multiple rounds of iterative training;
testing the sleep state classification accuracy of the target sleep state recognition model by using a test data set;
if the sleep state classification accuracy meets a preset index requirement, determining the target sleep state recognition model as the trained sleep state recognition model;
and if the sleep state classification accuracy does not meet the preset index requirement, returning to the step of selecting the first target feature from the first physiological parameter features or selecting the second target feature from the second physiological parameter features, until the sleep state classification accuracy meets the preset index requirement.
In this implementation, feature selection and parameter optimization are performed based on the sleep state classification accuracy, which reduces the number of features, and thus the computational overhead, while maintaining the model's classification accuracy.
In some possible implementations of the first aspect, selecting the first target feature from the first physiological parameter features or selecting the second target feature from the second physiological parameter features includes:
selecting the first target feature from the first physiological parameter features by using a multi-objective genetic algorithm, or selecting the second target feature from the second physiological parameter features by using a multi-objective genetic algorithm.
In some possible implementations of the first aspect, performing sleep state recognition on the electroencephalogram data to obtain the first sleep state recognition result includes:
performing feature extraction on the electroencephalogram data to obtain electroencephalogram features;
determining the dominant brain wave type in the electroencephalogram data according to the electroencephalogram features;
and obtaining the first sleep state recognition result according to the dominant brain wave type.
In this implementation, the sleep state recognition result is obtained from the dominant brain wave type, which makes the sleep state recognition more accurate and further improves the label accuracy of the physiological data.
In a second aspect, an embodiment of the present application provides a sleep state identification method, including:
acquiring physiological data collected by a wearable device, wherein the physiological data is non-electroencephalogram data;
extracting target features from the physiological data;
inputting the target features into a trained sleep state recognition model to obtain a sleep state recognition result output by the sleep state recognition model;
wherein the sleep state recognition model is a model obtained by training using any one of the training methods of the first aspect.
In a third aspect, an embodiment of the present application provides a training apparatus for a sleep state recognition model, including:
a physiological data acquisition module, configured to acquire physiological data collected by a wearable device, wherein the physiological data is non-electroencephalogram data;
an electroencephalogram data acquisition module, configured to acquire electroencephalogram data collected by an electroencephalogram monitoring device, wherein the electroencephalogram data and the physiological data are collected for the same target object in the same time period;
a recognition module, configured to perform sleep state recognition on the electroencephalogram data to obtain a first sleep state recognition result;
a labeling module, configured to perform sleep state class labeling on the physiological data by using the first sleep state recognition result to obtain labeled physiological data, wherein the labeled physiological data includes a sleep state class label;
and a training module, configured to train a to-be-trained sleep state recognition model by using the labeled physiological data, or by using a first physiological parameter feature and the labeled physiological data, to obtain a trained sleep state recognition model, wherein the first physiological parameter feature is a feature extracted from the physiological data.
In some possible implementations of the third aspect, the training module is specifically configured to:
selecting a first target feature from the first physiological parameter features, or selecting a second target feature from second physiological parameter features, wherein the second physiological parameter features are features extracted from the labeled physiological data;
inputting the second target feature, or the first target feature and the labeled physiological data, into the to-be-trained sleep state recognition model, and obtaining a third sleep state recognition result output by the to-be-trained sleep state recognition model;
adjusting network parameters of the to-be-trained sleep state recognition model according to the third sleep state recognition result and the sleep state class label, and obtaining a target sleep state recognition model after multiple rounds of iterative training;
testing the sleep state classification accuracy of the target sleep state recognition model by using a test data set;
if the sleep state classification accuracy meets a preset index requirement, determining the target sleep state recognition model as the trained sleep state recognition model;
and if the sleep state classification accuracy does not meet the preset index requirement, returning to the step of selecting the first target feature from the first physiological parameter features or selecting the second target feature from the second physiological parameter features, until the sleep state classification accuracy meets the preset index requirement.
In a fourth aspect, embodiments of the present application provide an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of the first aspect or the second aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method according to any one of the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which, when run on an electronic device, causes the electronic device to perform the method of any one of the first aspect or the second aspect.
It can be understood that the beneficial effects of the second to sixth aspects can be found in the description of the first aspect and are not repeated here.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Before the technical solutions provided in the embodiments of the present application are described, related background relevant to the embodiments is introduced below.
A brain wave is an electrical oscillation generated by the activity of nerve cells in the human brain. This oscillation is called a brain wave because it appears as a wave on scientific instruments.
Brain waves can be divided into four major categories by frequency: beta waves (conscious), alpha waves (bridge consciousness), theta waves (subconscious), and delta waves (unconscious). The combination of these forms of consciousness shapes a person's internal and external behavior, emotions, and learning performance.
In general, when a person is under stress, the brain mainly produces beta waves; when the body is relaxed, the brain mainly produces alpha waves; when a person feels drowsy, the brain mainly produces theta waves; and when a person enters deep sleep, the brain mainly produces delta waves.
Electroencephalogram data is signal data obtained by collecting, at the scalp, the weak bioelectric signals generated by the human brain with an electroencephalogram acquisition device, and then amplifying and recording the collected signals.
The technical solutions provided in the embodiments of the present application will be described in detail below.
At present, wearable devices such as smart bands, smart watches, and smart glasses can collect physiological data other than electroencephalogram data and perform sleep state monitoring based on these physiological data to obtain a sleep state monitoring result.
However, wearable devices such as smart bands and smart watches judge whether a user is asleep through external indicators such as whether the user turns over or whether the heart rate and blood pressure are stable, none of which is very accurate. A user is not necessarily sound asleep simply because their heart rate and blood pressure are stable or they do not turn over frequently. In addition, these external indicators cannot finely classify the sleep state into light sleep and deep sleep.
In other words, monitoring the sleep state through the existing non-electroencephalogram physiological data collected by a wearable device has low accuracy.
In the embodiments of the present application, the sleep state recognition result of the electroencephalogram data is used to label the sleep state class of the non-electroencephalogram physiological data collected by the wearable device, and the labeled physiological data is used to train the sleep state recognition model. Because the sleep state class of the physiological data is labeled using the sleep state recognition result of the electroencephalogram data, the sleep state class label of the physiological data is more accurate, and the trained sleep state recognition model achieves higher recognition accuracy.
It can be understood that human sleep is controlled by a nerve center, and the sleep center is located at the end of the brain stem, so brain waves can accurately represent the sleep state of the human body. That is, compared with existing methods that judge the sleep state through external indicators such as whether the user turns over or whether the heart rate and blood pressure are stable, labeling the sleep state class of the physiological data based on the sleep state recognition result of the electroencephalogram data enables the model to recognize the sleep state more accurately.
Referring to fig. 1, which is a schematic flowchart of a training method for a sleep state recognition model according to an embodiment of the present application, the method may be applied to electronic devices such as computers, workstations, and servers, which is not limited herein. The method includes the following steps:
Step S101, acquiring physiological data collected by the wearable device, wherein the physiological data is non-electroencephalogram data.
It should be noted that the wearable device is a wearable smart device, which may include but is not limited to: smart watches, smart bands, smart glasses, and smart earphones.
The wearable device can collect the physiological data through sensors. The physiological data refers to physiological data other than electroencephalogram data, and may include but is not limited to: heart rate, electrocardiogram, electrodermal activity, and various biomarkers in human epidermal interstitial fluid.
The heart rate refers to the number of heartbeats per minute of a normal person in a resting state, also called the resting heart rate, generally 60-100 beats per minute. An electrocardiogram (ECG) is the synthesis of the action potentials generated by cardiac muscle cells with each heartbeat. Electrodermal activity is a physiological index of emotion, reflecting changes in the electrical conductance of the skin when the body is stimulated.
Various biomarkers in human epidermal interstitial fluid may include, but are not limited to: glucose, lactic acid, pH, cholesterol, and the like.
Optionally, three kinds of physiological data of the user may be collected: heart rate, electrocardiogram, and electrodermal activity. Of course, other types of physiological data may also be collected.
Step S102, acquiring electroencephalogram data collected by an electroencephalogram monitoring device, wherein the electroencephalogram data and the physiological data are collected for the same target object in the same time period.
In a specific application, after the user falls asleep, the wearable device and the electroencephalogram monitoring device monitor the same user at the same time to collect the electroencephalogram data and the non-electroencephalogram physiological data. For example, a user wears a head ring (i.e., an electroencephalogram monitoring device) and a smart band while sleeping for a period of time (e.g., one hour) to collect electroencephalogram data and non-electroencephalogram physiological data during that period.
The electroencephalogram monitoring device is a device for monitoring the user's electroencephalogram; it may be a wearable device, such as a smart head ring or headband for collecting electroencephalogram data, or it may be a medical electroencephalogram monitoring device. When the electroencephalogram monitoring device is a wearable device, the wearable device in step S101 is the wearable device for collecting physiological data other than electroencephalogram data, and the electroencephalogram monitoring device is the wearable device for collecting electroencephalogram data.
Step S103, performing sleep state recognition on the electroencephalogram data to obtain a first sleep state recognition result.
In a specific application, after the electroencephalogram data is obtained, electroencephalogram features can be extracted from it through preprocessing and feature extraction, and the first sleep state recognition result is obtained according to these features.
Preprocessing of the electroencephalogram data may include, but is not limited to, the following steps: removing power-frequency interference (e.g., 50 Hz or 60 Hz), removing electro-ocular interference by Independent Component Analysis (ICA), band-pass filtering, extreme-value processing, and wavelet filtering.
After the preprocessing step, electroencephalogram features are extracted from the preprocessed data. For example, features may be calculated every 8 s: the signals of five bands (alpha, beta, gamma, theta, sigma) over the past 8 s are extracted, and for each band signal the maximum, minimum, mean, variance, rate of change (positive when rising, negative when falling), first-order difference mean, second-order difference mean, normalized first-order difference mean, normalized second-order difference mean, range, sum of squared successive differences, and the like are calculated.
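For concreteness, the per-band statistics listed above might be computed as in the following Python sketch. This is a non-limiting illustration: the band signals are assumed to have been obtained already (e.g., by band-pass filtering), and normalizing the difference means by the standard deviation is an assumption, since the embodiment does not specify its normalization.

```python
import numpy as np

def band_statistics(x):
    """Statistical features of one 8 s band signal, as listed above."""
    x = np.asarray(x, dtype=float)
    d1 = np.diff(x)                      # first-order difference
    d2 = np.diff(x, n=2)                 # second-order difference
    std = np.std(x) + 1e-12              # guard against division by zero
    return {
        "max": np.max(x),
        "min": np.min(x),
        "mean": np.mean(x),
        "var": np.var(x),
        # rate of change: positive when rising, negative when falling
        "rate_of_change": (x[-1] - x[0]) / len(x),
        "mean_abs_d1": np.mean(np.abs(d1)),
        "mean_abs_d2": np.mean(np.abs(d2)),
        "norm_mean_abs_d1": np.mean(np.abs(d1)) / std,
        "norm_mean_abs_d2": np.mean(np.abs(d2)) / std,
        "range": np.max(x) - np.min(x),
        "sum_sq_diff": np.sum(d1 ** 2),  # sum of squared successive differences
    }
```

The same statistics can be reused later for the non-electroencephalogram physiological data, which the embodiment describes with an identical feature list.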
In some embodiments, the dominant brain wave type may be determined according to the extracted electroencephalogram features, and the first sleep state recognition result may be obtained based on the dominant brain wave type. The first sleep state recognition result characterizes a sleep state, which may be represented by a descriptive state class as shown in Table 1 below, or by a number describing a sleep level; this is not limited herein.
TABLE 1

Dominant brain wave type | First sleep state recognition result
Beta wave                | Awake
Alpha wave               | Light sleep
Theta wave               | Moderate sleep
Delta wave               | Deep sleep
The dominant brain wave type may be defined as the brain wave whose average power (or normalized average power) is highest over a certain period of time (e.g., 2 minutes). Of course, the definition of the dominant brain wave type may be determined according to practical experience or specific circumstances, and is not limited herein.
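As a non-limiting sketch, determining the dominant brain wave type and mapping it to the first sleep state recognition result of Table 1 could look like the following; the band names and state strings are illustrative choices, not part of the embodiment.

```python
# Mapping from dominant band to sleep state, following Table 1.
STATE_BY_BAND = {
    "beta": "awake",
    "alpha": "light sleep",
    "theta": "moderate sleep",
    "delta": "deep sleep",
}

def dominant_band(avg_power):
    """avg_power: dict mapping band name to its average power over the
    window (e.g. 2 minutes); returns the band with the highest value."""
    return max(avg_power, key=avg_power.get)

def first_sleep_state(avg_power):
    """Map the dominant band to a sleep state per Table 1."""
    return STATE_BY_BAND[dominant_band(avg_power)]
```

For example, a window whose theta power dominates would be recognized as moderate sleep.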
In other embodiments, the sleep state recognition result of the electroencephalogram data may also be obtained by using an existing sleep monitoring method or sleep state monitoring model, which is not limited herein.
Step S104, performing sleep state class labeling on the physiological data by using the first sleep state recognition result to obtain labeled physiological data, wherein the labeled physiological data includes a sleep state class label.
According to the sleep state recognition result of the electroencephalogram data, a sleep state class label is attached to the non-electroencephalogram physiological data of the same user in the same time period, the label characterizing the sleep state. For example, if the first sleep state recognition result of the electroencephalogram data shows that the user is in a light sleep state in time period A, a light sleep label is attached to the user's physiological data in time period A; if the result shows that the user is in a deep sleep state in time period B, a deep sleep label is attached to the user's physiological data in time period B. After this labeling process, physiological data including sleep state class labels is obtained.
In a specific application, the labeling can be automatic, i.e., the sleep state class label is attached to the corresponding physiological data automatically according to the electroencephalogram features; alternatively, the labeling can be manual, i.e., the sleep state class label is attached to the corresponding physiological data manually according to the first sleep state recognition result.
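The per-period labeling of step S104 can be sketched as follows. This is a hypothetical illustration: the period identifiers and state strings are assumptions introduced for the example only.

```python
def label_physiological_data(physio_segments, eeg_states):
    """Attach the EEG-derived sleep state of each time period to the
    physiological data collected in the same period.
    physio_segments: dict mapping period id -> physiological samples
    eeg_states:      dict mapping period id -> sleep state from EEG
    Returns a list of labeled samples (data plus sleep state class label)."""
    return [
        {"period": period, "data": data, "label": eeg_states[period]}
        for period, data in physio_segments.items()
        if period in eeg_states  # only periods covered by both devices
    ]
```

The resulting records correspond to the labeled physiological data used for training in step S105.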
Step S105, training the to-be-trained sleep state recognition model by using the labeled physiological data, or by using the first physiological parameter feature and the labeled physiological data, to obtain the trained sleep state recognition model, wherein the first physiological parameter feature is a feature extracted from the physiological data.
In a specific application, before feature extraction is performed on the physiological data or the labeled physiological data, a preprocessing step can be performed first.
The preprocessing step may include, but is not limited to, signal amplification and denoising. Relatively clean physiological data can be obtained through preprocessing.
Since the physiological data may be of different types, the physiological parameter features extracted from the physiological data or the labeled physiological data may differ accordingly.
For example, for electrocardiogram data, the physiological parameter features may cover four levels: linear, nonlinear, time-domain, and frequency-domain, and may include the following quantities:
SDNN: the standard deviation of the RR intervals of all sinus beats;
NN50: the number of pairs of adjacent NN intervals differing by more than 50 ms;
PNN50: the number of pairs of adjacent NN intervals differing by more than 50 ms as a percentage of the total number of sinus beats;
SDSD: the standard deviation of the differences between adjacent RR intervals;
RR_MEAN: the mean of the RR intervals;
ECG: the minimum, maximum, mean, and variance after baseline-drift removal;
Wavelet: using a db6 wavelet with 3-level decomposition, the maximum, minimum, median, and standard deviation of the 3 levels of high-frequency detail coefficients and the 1 level of low-frequency approximation coefficients;
VLF (very low frequency), LF (low frequency), and HF (high frequency) power.
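The time-domain quantities above might be computed from a sequence of RR intervals as in this non-limiting sketch; PNN50 is computed as a percentage of the total beat count, following the definition given above.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV features from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                        # successive RR differences
    nn50 = int(np.sum(np.abs(diff) > 50))     # pairs differing by > 50 ms
    return {
        "SDNN": np.std(rr),                   # std of all RR intervals
        "NN50": nn50,
        "PNN50": 100.0 * nn50 / len(rr),      # percentage of total beats
        "SDSD": np.std(diff),                 # std of successive differences
        "RR_MEAN": np.mean(rr),
    }
```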
For heart rate data, electrodermal data, and biomarkers, feature extraction is performed mainly on statistical features. In this case, the physiological parameter features may include the maximum, minimum, mean, variance, rate of change (positive when rising, negative when falling), first-order difference mean, second-order difference mean, normalized first-order difference mean, normalized second-order difference mean, range, sum of squared successive differences, and the like.
In the specific feature extraction process, within the same time period as the electroencephalogram data, the physiological data is cut into smaller time units (e.g., 2-minute segments), and all averaging is performed within these smaller units.
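The windowing step can be illustrated as follows; this is a sketch, and the sampling rate and the 2-minute window length are examples rather than fixed parameters of the embodiment.

```python
def split_windows(signal, fs, window_s=120):
    """Cut a physiological signal into fixed-length windows.
    signal: sample sequence; fs: sampling rate in Hz;
    window_s: window length in seconds (e.g. 120 s = 2 minutes).
    Trailing samples that do not fill a whole window are dropped."""
    n = int(fs * window_s)
    return [signal[i * n:(i + 1) * n] for i in range(len(signal) // n)]
```

Each window then yields one feature vector (and, after step S104, one sleep state class label).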
It should be noted that the sleep state recognition model is a machine learning model, which may be, but is not limited to: a k-nearest neighbor model, perceptron, naive Bayes model, decision tree, logistic regression model, support vector machine, AdaBoost, Bayesian network, or neural network model.
As can be seen from the above, in the embodiments of the present application, the sleep state recognition result based on the electroencephalogram data is used to label the sleep state class of the non-electroencephalogram physiological data, so that the sleep state class label of the physiological data is more accurate; accordingly, the sleep state recognition model trained on the labeled physiological data achieves higher recognition accuracy on the physiological data collected by the wearable device.
Based on the above embodiment, in the model training process, either the second physiological parameter feature extracted from the labeled physiological data may be used as the model input, or the labeled physiological data together with the first physiological parameter feature extracted from the unlabeled physiological data may be used as the model input. These two cases are described separately below.
In some embodiments, after the second physiological parameter feature is extracted from the labeled physiological data, it is input into the to-be-trained sleep state recognition model, and a second sleep state recognition result output by the model is obtained; a loss value is calculated according to the second sleep state recognition result and the sleep state class label, and back propagation is performed according to the loss value to adjust the network parameters of the to-be-trained sleep state recognition model; the second physiological parameter feature is then input into the model with the adjusted network parameters, the output recognition result is obtained, and the loss is calculated and back-propagated again; after multiple rounds of iterative training, when the loss value stabilizes, the trained sleep state recognition model is obtained.
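As a minimal, non-limiting sketch of this iterative loss-and-backpropagation loop, the following trains a softmax classifier by gradient descent on labeled features; the model choice and hyperparameters are illustrative, since the embodiment permits any of the machine learning models listed above.

```python
import numpy as np

def train_sleep_model(X, y, n_classes, epochs=200, lr=0.1):
    """X: (n_samples, n_features) physiological parameter features;
    y: (n_samples,) integer sleep state class labels.
    Returns trained weights W and biases b."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):                        # iterative training
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        grad = (p - onehot) / len(X)               # cross-entropy loss gradient
        W -= lr * (X.T @ grad)                     # network parameter adjustment
        b -= lr * grad.sum(axis=0)
    return W, b
```

Training stops here after a fixed number of epochs; in practice one would monitor the loss until it stabilizes, as the embodiment describes.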
For example, referring to the schematic block diagram of a sleep state recognition model training process shown in fig. 2: physiological data collected by the wearable device and electroencephalogram data collected by the electroencephalogram monitoring device in the same time period are acquired; a sleep state recognition result is obtained from the electroencephalogram data, and the physiological data is labeled with sleep state class labels using this result to obtain labeled physiological data; features are extracted from the labeled physiological data and used to train the machine learning model; when training is completed, the trained model and the output features are obtained.
The machine learning model here is the sleep state recognition model. In this process, all the extracted physiological parameter features are used for model training, and after training is completed, all the physiological parameter features are output.
In other embodiments, the first physiological parameter feature is extracted from unlabeled physiological data; inputting the first physiological parameter characteristics and the labeled physiological data into a sleep state recognition model to be trained, and obtaining a second sleep state recognition result output by the sleep state recognition model to be trained; calculating a loss value according to the second sleep state identification result and the sleep state class label, and performing back propagation according to the loss value so as to adjust network parameters of the sleep state identification model to be trained; after repeated iterative training, the trained sleep state recognition model can be obtained until the loss value tends to be stable.
Exemplarily, referring to another schematic block diagram of a sleep state recognition model training process shown in fig. 3: physiological data acquired by a wearable device is acquired, and electroencephalogram data acquired by an electroencephalogram monitoring device in the same time period is acquired; a sleep state recognition result is obtained from the electroencephalogram data, and the physiological data is labeled with sleep state class labels using this result to obtain labeled physiological data; features are extracted from the unlabeled physiological data, and the extracted physiological parameter features together with the labeled physiological data are used to train a machine learning model; when training is completed, the trained model and the output features are obtained.
In the embodiments corresponding to fig. 2 and fig. 3, all the extracted physiological parameter features are input to the sleep state recognition model and participate in the model training process. However, because all extracted physiological parameter features are applied to model training, the number of features is large and the calculation overhead is correspondingly large.
In order to reduce the number of features and the calculation overhead while maintaining model accuracy, the embodiment of the application may perform feature selection and parameter optimization guided by the sleep state classification accuracy, so as to improve the sleep state classification accuracy while reducing the number of features.
In some embodiments, if the second physiological parameter feature extracted from the labeled physiological data is used as the model input, a second target feature is selected from the second physiological parameter feature. The second target feature refers to a partial feature set selected from the second physiological parameter feature, and this partial feature set, rather than all the extracted second physiological parameter features, is used as the input of the model.
In specific application, the feature selection mode may be random selection, or feature selection may be performed by using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The steps of NSGA-II may comprise: first, solving the Pareto solutions for the M physiological parameter features to obtain the Pareto front sets F1, F2, ..., Fk; putting all physiological parameter features of F1 into the population N; if N is not full, continuing with F2, and so on, until Fk can no longer fit into the space of N remaining after F1, F2, ..., F(k-1) have been put in, at which point Fk is handled separately: for the Pareto solutions in Fk, the crowding distance Lk[i] of each solution in Fk is computed, the solutions in Fk are sorted in descending order of Lk[i] and put into N until N is full, finally obtaining a physiological parameter feature set. The physiological parameter feature set is the selected feature subset.
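The NSGA-II selection step described above (fill the population from the Pareto fronts F1, F2, ..., Fk, then truncate the last front that does not fit by descending crowding distance) can be sketched as follows. The objective values, representing hypothetical (classification error, feature count) pairs for candidate feature subsets, are made up for illustration.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Group candidate indices into Pareto fronts F1, F2, ..., Fk."""
    fronts, remaining = [], set(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def crowding_distance(front, objs):
    """Crowding distance Lk[i] of each solution in one front."""
    dist = {i: 0.0 for i in front}
    for m in range(len(objs[0])):
        ordered = sorted(front, key=lambda i: objs[i][m])
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")   # boundary solutions
        span = objs[ordered[-1]][m] - objs[ordered[0]][m] or 1.0
        for lo, i, hi in zip(ordered, ordered[1:], ordered[2:]):
            dist[i] += (objs[hi][m] - objs[lo][m]) / span
    return dist

def select(objs, n):
    """Fill a population of size n front by front; truncate the last front
    by descending crowding distance, as in the text above."""
    chosen = []
    for front in non_dominated_sort(objs):
        if len(chosen) + len(front) <= n:
            chosen += front
        else:
            d = crowding_distance(front, objs)
            chosen += sorted(front, key=lambda i: -d[i])[:n - len(chosen)]
            break
    return chosen

# Hypothetical (error rate, feature count) scores for six feature subsets;
# both objectives are minimized.
objs = [(0.10, 8), (0.12, 5), (0.30, 2), (0.11, 9), (0.10, 8.5), (0.40, 1)]
picked = select(objs, 4)
```

This is the selection operator only; a full NSGA-II run would wrap it in crossover/mutation generations, which the text leaves to standard practice.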
After the second target feature is selected, the second target feature is input to the sleep state recognition model to be trained, and a third sleep state recognition result output by the model is obtained; a loss value is calculated according to the third sleep state recognition result and the sleep state class label, and back propagation is performed according to the loss value to adjust the network parameters of the model. After multiple rounds of iterative training, when the loss value tends to be stable, a target sleep state recognition model is obtained.
After the target sleep state identification model is obtained, the test data set is used to test the sleep state classification accuracy of the target sleep state identification model. The test data set is pre-constructed, including physiological data and corresponding sleep state class labels.
If the sleep state classification accuracy meets the preset index requirement, the target sleep state recognition model is determined as the trained sleep state recognition model, and the selected second target feature is output. The preset index requirement may be a threshold: if the sleep state classification accuracy is greater than the threshold, the requirement is considered to be met; otherwise it is considered not to be met.
If the sleep state classification accuracy does not meet the preset index requirement, the process returns to the step of selecting the second target feature from the second physiological parameter features, that is, to the feature selection step: another partial feature set is selected and input to the sleep state recognition model. These steps are repeated until the sleep state classification accuracy meets the preset index requirement.
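The overall select-features / train / test-accuracy loop can be sketched as below. The synthetic data, the deliberately simple nearest-centroid classifier, and the 0.9 threshold are all illustrative assumptions standing in for the real model and preset index requirement; random selection is shown here as an exhaustive sweep over subsets.

```python
import numpy as np
from itertools import combinations

# Synthetic physiological data: one informative feature (index 1) correlated
# with the sleep state class label, three uninformative noise features.
rng = np.random.default_rng(1)
n = 300
labels = rng.integers(0, 2, size=n)                     # sleep state class labels
informative = labels * 2.0 + rng.normal(0, 0.3, n)
noise = rng.normal(size=(n, 3))
X = np.column_stack([noise[:, 0], informative, noise[:, 1:]])  # 4 features

def train_and_score(cols):
    """Train a nearest-centroid classifier on the chosen columns and return
    its sleep state classification accuracy (the 'test' of the loop)."""
    Xs = X[:, list(cols)]
    c0, c1 = Xs[labels == 0].mean(axis=0), Xs[labels == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return np.mean(pred == (labels == 1))

THRESHOLD = 0.9                                         # preset index requirement
selected, accuracy = None, 0.0
for k in range(1, X.shape[1] + 1):                      # feature-selection loop
    for cols in combinations(range(X.shape[1]), k):     # candidate partial feature sets
        acc = train_and_score(cols)
        if acc >= THRESHOLD:                            # requirement met: output
            selected, accuracy = cols, acc              # model input = this subset
            break
    if selected is not None:
        break
```

With this construction the loop settles on the single informative feature, mirroring the goal of keeping accuracy while shrinking the feature set.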
Exemplarily, referring to still another schematic block diagram of the sleep state recognition model training process shown in fig. 4: physiological data acquired by the wearable device is acquired, and electroencephalogram data acquired by the electroencephalogram monitoring device within the same time period is acquired; a sleep state recognition result is obtained from the electroencephalogram data, and the physiological data is labeled with sleep state class labels using this result to obtain labeled physiological data; features are extracted from the labeled physiological data, and the extracted physiological parameter features are used to train a machine learning model; whether the sleep state classification accuracy meets the index requirement is then determined. If so, the model and the features are output, that is, the current sleep state recognition model is taken as the trained model, and the physiological parameter feature set input to the current model is taken as the input of the model; if not, the process returns to the feature selection step, features are again selected randomly or by using the multi-objective genetic algorithm, and the selected physiological parameter feature set is used for model training. These steps are repeated until the sleep state classification accuracy meets the index requirement.
In other embodiments, if the labeled physiological data and the first physiological parameter characteristic extracted from the unlabeled physiological data are used as model inputs, the first target characteristic is selected from the first physiological parameter characteristic. The first target feature is a partial feature set of the first physiological parameter feature.
The feature selection may be performed randomly or by using a multi-objective genetic algorithm, which is not described herein again.
After the first target feature is selected, the first target feature and the labeled physiological data are input to the sleep state recognition model to be trained, and a third sleep state recognition result output by the model is obtained; a loss value is calculated according to the third sleep state recognition result and the sleep state class label, and back propagation is performed according to the loss value to adjust the network parameters of the model. After multiple rounds of iterative training, when the loss value tends to be stable, a target sleep state recognition model is obtained.
After the target sleep state recognition model is obtained, the test data set is used to test the sleep state classification accuracy of the target sleep state recognition model.
If the sleep state classification accuracy meets the preset index requirement, the target sleep state recognition model is determined as the trained sleep state recognition model, and the selected first target feature is output.
If the sleep state classification accuracy does not meet the preset index requirement, the process returns to the step of selecting the first target feature from the first physiological parameter features, that is, to the feature selection step: another partial feature set is selected and input to the sleep state recognition model. These steps are repeated until the sleep state classification accuracy meets the preset index requirement.
Exemplarily, referring to still another schematic block diagram of the sleep state recognition model training process shown in fig. 5: physiological data acquired by a wearable device is acquired, and electroencephalogram data acquired by an electroencephalogram monitoring device within the same time period is acquired; a sleep state recognition result is obtained from the electroencephalogram data, and the physiological data is labeled with sleep state class labels using this result to obtain labeled physiological data; features are extracted from the unlabeled physiological data, and the extracted physiological parameter features together with the labeled physiological data are used to train a machine learning model; whether the sleep state classification accuracy meets the index requirement is then determined. If so, the model and the features are output, that is, the current sleep state recognition model is taken as the trained model, and the physiological parameter feature set input to the current model is taken as the input of the model; if not, the process returns to the feature selection step, features are again selected randomly or by using the multi-objective genetic algorithm, and the selected physiological parameter feature set and the labeled physiological data are used for model training. These steps are repeated until the sleep state classification accuracy meets the index requirement.
It should be noted that, if the feature selection mode is random selection, then after various feature parameter combinations have been tested, the combination with the highest accuracy, or the first combination whose accuracy exceeds the threshold, is selected as the final model input.
By the above-mentioned model training method, after the trained sleep state recognition model is obtained, the sleep state recognition model can be used for sleep state recognition.
Referring to fig. 6, a schematic block diagram of a flow of a sleep state identification method provided in an embodiment of the present application is shown, where the method may be applied to wearable devices, such as a smart band and a smart watch; the method can also be applied to non-wearable devices, such as mobile phones, computers and other terminals, and is not limited herein. The method may comprise the steps of:
step S601, acquiring physiological data acquired by the wearable device, wherein the physiological data is non-electroencephalogram data.
It can be understood that the user wears the wearable device while sleeping, and the wearable device acquires physiological data other than electroencephalogram data.
Step S602, extracting target features from the physiological data.
It should be noted that the target feature may refer to all the features extracted from the physiological data as described above, or to the partial feature set selected from all the features by the feature selection step. That is, the input of the model is determined by the model training phase: if the model input in the training phase was all the features extracted from the physiological data, the target feature is all the features; if the model input in the training phase was the partial feature set obtained through the feature selection step, the target feature is that partial feature set.
Of course, the target feature may also refer to all the physiological parameter features mentioned in the training stage, and after all the physiological parameter features are extracted, a part of the physiological parameter features are selected as the target feature.
And S603, inputting the target characteristics to the trained sleep state recognition model, and obtaining a sleep state recognition result output by the sleep state recognition model.
The sleep state recognition model is a model obtained by training using any one of the above-mentioned training methods, and is not described herein again.
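Steps S601 to S603 might look as follows in outline. The feature definitions (mean heart rate, a heart-rate-variability proxy, movement intensity) and the hand-set rule standing in for the trained model are purely illustrative assumptions, not the application's actual implementation.

```python
import numpy as np

def extract_target_features(heart_rate, motion):
    """Step S602: derive a small feature vector from raw non-EEG
    physiological data (hypothetical feature choices)."""
    return np.array([
        heart_rate.mean(),        # mean heart rate
        heart_rate.std(),         # heart-rate variability proxy
        motion.mean(),            # average movement intensity
    ])

def sleep_state_model(features):
    """Step S603 stand-in: a hand-set rule that a real trained sleep state
    recognition model would replace."""
    mean_hr, hr_var, motion_level = features
    if mean_hr < 60 and motion_level < 0.1:
        return "deep_sleep"
    if mean_hr < 70:
        return "light_sleep"
    return "awake"

# Step S601: one epoch of synthetic wearable data (a quiet, low-heart-rate
# interval, so the rule above should report deep sleep).
rng = np.random.default_rng(2)
heart_rate = rng.normal(55, 2, size=60)          # bpm samples
motion = np.abs(rng.normal(0, 0.02, size=60))    # accelerometer magnitude

features = extract_target_features(heart_rate, motion)
state = sleep_state_model(features)
```

In deployment the same `extract_target_features` logic would have to match whatever feature set (all features or the selected partial set) the model was trained with, as the preceding paragraphs require.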
Therefore, because the sleep state recognition result of the electroencephalogram data is used to label the physiological data that is not electroencephalogram data, the trained model achieves higher sleep state recognition accuracy.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 7 shows a block diagram of a training apparatus for a sleep state recognition model provided in an embodiment of the present application, which corresponds to the training method for a sleep state recognition model described in the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 7, the apparatus includes:
a physiological data acquisition module 71, configured to acquire physiological data acquired by the wearable device, where the physiological data is non-electroencephalogram data;
the electroencephalogram data acquisition module 72 is used for acquiring electroencephalogram data acquired by an electroencephalogram monitoring device, wherein the electroencephalogram data and the physiological data are data acquired for the same target object in the same time period;
the identification module 73 is used for carrying out sleep state identification on the electroencephalogram data to obtain a first sleep state identification result;
a labeling module 74, configured to perform sleep state class labeling on the physiological data using the first sleep state identification result, to obtain labeled physiological data, where the labeled physiological data includes a sleep state class label;
the training module 75 is configured to train the sleep state recognition model to be trained by using the labeled physiological data or the first physiological parameter feature and the labeled physiological data to obtain a trained sleep state recognition model, where the first physiological parameter feature is a feature extracted from the physiological data.
In some possible implementations, the training module 75 is specifically configured to:
selecting a first target feature from the first physiological parameter features or selecting a second target feature from the second physiological parameter features, wherein the second physiological parameter features are features extracted from the labeled physiological data;
inputting the second target characteristic or the first target characteristic and the labeled physiological data into the sleep state recognition model to be trained, and obtaining a third sleep state recognition result output by the sleep state recognition model to be trained;
according to the third sleep state identification result and the sleep state class label, network parameter adjustment is carried out on the sleep state identification model to be trained, and a target sleep state identification model is obtained after iterative training is carried out for multiple times;
testing the sleep state classification accuracy of the target sleep state identification model by using the test data set;
if the sleep state classification accuracy rate meets the preset index requirement, determining the target sleep state recognition model as a trained sleep state recognition model;
and if the sleep state classification accuracy does not meet the preset index requirement, returning to the step of selecting the first target feature from the first physiological parameter features or selecting the second target feature from the second physiological parameter features until the sleep state classification accuracy meets the preset index requirement.
In some possible implementations, the training module 75 is specifically configured to:
inputting a second physiological parameter characteristic extracted from the labeled physiological data, or inputting the first physiological parameter characteristic and the labeled physiological data into the sleep state recognition model to be trained, and obtaining a second sleep state recognition result output by the sleep state recognition model to be trained;
according to the second sleep state identification result and the sleep state class label, network parameter adjustment is carried out on the sleep state identification model to be trained;
after iterative training is carried out for a plurality of times, a trained sleep state recognition model is obtained.
In some possible implementations, the training module 75 is specifically configured to:
and selecting the first target feature from the first physiological parameter features, or the second target feature from the second physiological parameter features, either randomly or by using a multi-objective genetic algorithm.
In some possible implementations, the identifying module 73 is specifically configured to:
performing feature extraction on the electroencephalogram data to obtain electroencephalogram features;
determining dominant brain wave types in the brain wave data according to the brain wave characteristics;
and obtaining a first sleep state identification result according to the dominant brain wave type.
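A simplified sketch of these three steps of the recognition module 73 is given below: band powers computed from the FFT serve as the electroencephalogram features, the dominant brain wave type is the band with the highest power, and a wave-to-state mapping yields the first sleep state recognition result. The band edges follow standard EEG conventions, but the specific wave-to-state mapping here is an assumption for illustration, not necessarily the application's rule.

```python
import numpy as np

FS = 100                      # sampling rate in Hz (hypothetical)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
STATE_OF = {"delta": "deep_sleep", "theta": "light_sleep",
            "alpha": "drowsy", "beta": "awake"}   # illustrative mapping

def dominant_wave(eeg):
    """Feature extraction: power per band from the FFT; the dominant brain
    wave type is the band with the highest total power."""
    freqs = np.fft.rfftfreq(len(eeg), d=1 / FS)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
    return max(band_power, key=band_power.get)

def recognize_sleep_state(eeg):
    """First sleep state recognition result from the dominant wave type."""
    return STATE_OF[dominant_wave(eeg)]

# Synthetic 10-second EEG epoch dominated by a 2 Hz (delta-band) rhythm,
# with a weaker 10 Hz (alpha-band) component.
t = np.arange(0, 10, 1 / FS)
eeg = 3.0 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
```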
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the method embodiment in the embodiment of the present application, which may be referred to in the method embodiment section specifically, and are not described herein again.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 8, the electronic apparatus 8 of this embodiment includes: at least one processor 80 (only one shown in fig. 8), a memory 81, a computer program 82 being stored in the memory 81 and being executable on the at least one processor 80, the processor 80 implementing the steps of any of the method embodiments described above when executing the computer program 82.
The electronic device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of the electronic device 8 and does not constitute a limitation of the electronic device 8, which may include more or fewer components than those shown, combine some components, or use different components, such as an input-output device, a network access device, etc.
In an embodiment, the electronic device may be integrated with a gesture recognition module, and the gesture recognition module may be specifically an infrared gesture recognition module. In one embodiment, the electronic device is integrated with two gesture recognition modules, both of which are communicatively coupled to the processor 80. It can be understood that the principles of hand-waving gesture recognition and infrared gesture recognition are well known to those skilled in the art, and are not described in detail herein.
The processor 80 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 81 may in some embodiments be an internal storage unit of the electronic device 8, such as a hard disk or a memory of the electronic device 8. The memory 81 may also be an external storage device of the electronic device 8 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 8. In one embodiment, the memory 81 may also include both an internal storage unit and an external storage device of the electronic device 8. The memory 81 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides an electronic device, including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on an electronic device, enables the electronic device to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by instructing relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments described above may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application, and they should be construed as being included in the present application.