WO2023024748A1 - Sleep quality assessment method, in-bed state monitoring method and device therefor - Google Patents

Sleep quality assessment method, in-bed state monitoring method and device therefor

Info

Publication number
WO2023024748A1
WO2023024748A1 (PCT/CN2022/105838; CN2022105838W)
Authority
WO
WIPO (PCT)
Prior art keywords
sleep
processed
time period
preset time
bed state
Prior art date
Application number
PCT/CN2022/105838
Other languages
English (en)
French (fr)
Inventor
朱国康
张翼
郝得宁
戴晓伟
汪孔桥
Original Assignee
安徽华米健康科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202110987315.XA (published as CN115732087A)
Priority claimed from CN202111082791.3A (published as CN115804566A)
Application filed by 安徽华米健康科技有限公司
Publication of WO2023024748A1
Priority to US18/419,199 (published as US20240156397A1)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1116: Determining posture transitions
    • A61B 5/1118: Determining activity level
    • A61B 5/48: Other medical applications
    • A61B 5/4806: Sleep evaluation
    • A61B 5/4812: Detecting sleep stages or cycles
    • A61B 5/4815: Sleep quality
    • A61B 5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B 5/6801: Arrangements specially adapted to be attached to or worn on the body surface
    • A61B 5/6802: Sensor mounted on worn items
    • A61B 5/681: Wristwatch-type devices
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device
    • A61B 5/7271: Specific aspects of physiological measurement analysis
    • A61B 5/7278: Artificial waveform generation or derivation, e.g. synthesising signals from measured signals
    • A61B 2560/00: Constructional details of operational features of apparatus; Accessories for medical measuring apparatus
    • A61B 2560/04: Constructional details of apparatus
    • A61B 2560/0431: Portable apparatus, e.g. comprising a handle or case
    • A61B 2562/00: Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B 2562/02: Details of sensors specially adapted for in-vivo measurements
    • A61B 2562/0219: Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches

Definitions

  • the present disclosure relates to the technical fields of artificial intelligence and deep learning, and in particular to a sleep quality assessment method, a bed state monitoring method and a device thereof.
  • the present disclosure provides a sleep quality assessment method, a bed state monitoring method and a device thereof.
  • a sleep quality assessment method, including: determining sleep data of a subject to be processed; extracting sleep features according to a preset reference core sleep period and the sleep data of the subject; and assessing the sleep quality of the subject according to the sleep features.
  • a wearable-device-based in-bed state monitoring method, including: acquiring an acceleration signal output by the wearable device within a preset time period; determining motion features of a subject to be processed within the preset time period according to the acceleration signal; determining posture features of the subject within the preset time period according to the acceleration signal; and determining the in-bed state of the subject within the preset time period according to the posture features and the motion features.
  • a sleep quality assessment device, including: a determining module configured to determine sleep data of a subject to be processed; an extraction module configured to extract sleep features according to a preset reference core sleep period and the sleep data; and an evaluation module configured to evaluate the sleep quality of the subject according to the sleep features.
  • a wearable-device-based in-bed state monitoring device, including: a first acquisition module configured to acquire the acceleration signal output by the sensor in the wearable device within a preset time period; a first determining module configured to determine motion features of a subject to be processed within the preset time period according to the acceleration signal; a second determining module configured to determine posture features of the subject within the preset time period according to the acceleration signal; and a third determining module configured to determine the in-bed state of the subject within the preset time period according to the posture features and the motion features.
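The in-bed state determination from motion and posture features can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not taken from the patent: the motion feature is the standard deviation of the acceleration magnitude over the window, the posture feature is the direction of the averaged gravity vector, and a simple threshold rule combines the two; the function name `bed_state` and all thresholds are hypothetical.

```python
import numpy as np

def bed_state(acc, motion_thresh=0.05, z_thresh=0.6):
    """Illustrative in-bed detection from a window of 3-axis acceleration.

    acc: (N, 3) array of accelerometer samples, in units of g.
    motion_thresh and z_thresh are illustrative values, not from the patent.
    """
    mag = np.linalg.norm(acc, axis=1)
    motion_feature = mag.std()                  # motion feature: activity level in the window
    gravity = acc.mean(axis=0)
    gravity /= np.linalg.norm(gravity)          # posture feature: gravity direction
    lying_posture = abs(gravity[2]) < z_thresh  # small z-component taken as a lying cue
    return "in_bed" if motion_feature < motion_thresh and lying_posture else "out_of_bed"
```

A real implementation would smooth decisions over consecutive windows and learn the thresholds from labeled data rather than fixing them by hand.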
  • an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method described in the embodiment of the first aspect of the present disclosure.
  • a wearable device, including: an acceleration sensor; a wearable accessory; at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method described in the embodiment of the second aspect of the present disclosure.
  • a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make a computer execute the method described in the embodiment of the first aspect of the present disclosure, or the method described in the embodiment of the second aspect of the present disclosure.
  • a computer program product including a computer program; when the computer program is executed by a processor, the method described in the embodiment of the first aspect of the present disclosure, or the method described in the embodiment of the second aspect of the present disclosure, is implemented.
  • FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure
  • FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure
  • FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure
  • FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of tree model training according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram according to a fifth embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram according to a sixth embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram according to a seventh embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram according to an eighth embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram according to a ninth embodiment of the present disclosure.
  • FIG. 11 is a block diagram of an electronic device used to implement sleep quality assessment according to an embodiment of the present disclosure.
  • When wearable devices capture physiological data, direct contact with the human body is required, which brings inconvenience and a psychological burden to the subject, interferes with the subject's sleep process and sleep habits, and ultimately affects the accuracy of the assessment of the subject's sleep quality.
  • When analyzing sleep quality from physiological data captured by wearable devices, existing evaluation models mainly rely on total-score synthesis rules for various sleep features that are manually set according to the domain knowledge of professionals, such as piecewise linear weighting. Such rules treat different sleep indicators coarsely and cannot accurately evaluate the user's sleep quality.
  • the present disclosure proposes a sleep quality assessment method, a bed state monitoring method and a device thereof.
  • FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure.
  • the sleep quality assessment method of the embodiment of the present disclosure can be applied to the sleep quality assessment device of the embodiment of the present disclosure, and the device can be configured in an electronic device.
  • the electronic device may be a mobile terminal, for example, a mobile phone, a tablet computer, a personal digital assistant, and other hardware devices with various operating systems.
  • the sleep quality assessment method may include the following steps:
  • Step 101 determine sleep data of an object to be processed.
  • The sleep data of the subject to be processed may be determined from the vital sign data of the subject.
  • Specifically, the sleep quality assessment device may collect the vital sign data of the subject to be processed and determine the sleep data of the subject according to the vital sign data.
  • the physical sign data may include: pulse, respiratory rate, and heart rate, etc.
  • the sleep data may include: sleep duration, deep sleep duration, and sleep interruption times, etc.
  • Step 102 extracting sleep features according to the preset reference core sleep period and sleep data of the object to be processed.
  • the reference core sleep period may include: an individual core sleep period of the object to be processed within a preset time period, and/or a group core sleep period within the area to which the object to be processed belongs within a preset time period.
  • For example, the individual core sleep period of the subject to be processed within the preset time period may be 22:00 to 06:00, determined from the subject's own sleep records over 180 days; the group core sleep period within the area to which the subject belongs may likewise be 22:00 to 06:00, determined from the sleep records of the population in that area over 180 days.
  • The extracted sleep features differ depending on the reference core sleep period of the subject to be processed.
  • When the reference core sleep period includes the individual core sleep period of the subject to be processed within the preset time period, the first sleep feature of the subject within the individual core sleep period can be determined according to the individual core sleep period and the sleep data, and the first sleep feature is used as the extracted sleep feature.
  • The first sleep feature may include at least one of the following: the sleep duration of the subject within the individual core sleep period; the ratio of that sleep duration to the subject's total sleep duration; the deep sleep duration of the subject within the individual core sleep period; the number of deep sleeps of the subject within the individual core sleep period; the ratio of deep sleep duration to light sleep duration within the individual core sleep period; the awake duration of the subject within the individual core sleep period; the number of awakenings of the subject within the individual core sleep period; and the ratio of awake duration to sleep duration within the individual core sleep period.
  • When the reference core sleep period includes the group core sleep period within the area to which the subject to be processed belongs within the preset time period, the second sleep feature of the subject within the group core sleep period can be determined according to the group core sleep period and the sleep data, and the second sleep feature is used as the extracted sleep feature.
  • The second sleep feature may include at least one of the following: the sleep duration of the subject within the group core sleep period; the ratio of that sleep duration to the subject's total sleep duration; the deep sleep duration of the subject within the group core sleep period; the number of deep sleeps of the subject within the group core sleep period; the ratio of deep sleep duration to light sleep duration within the group core sleep period; the awake duration of the subject within the group core sleep period; the number of awakenings of the subject within the group core sleep period; and the ratio of awake duration to sleep duration within the group core sleep period.
  • When the reference core sleep period includes both the individual core sleep period of the subject to be processed within the preset time period and the group core sleep period within the area to which the subject belongs within the preset time period, the first sleep feature of the subject within the individual core sleep period can be determined according to the individual core sleep period and the sleep data, the second sleep feature of the subject within the group core sleep period can be determined according to the group core sleep period and the sleep data, and the sleep features are then determined from the first sleep feature and the second sleep feature.
  • Step 103 evaluating the sleep quality of the subject to be processed according to the sleep characteristics.
  • the sleep quality evaluation process can be performed on the object to be processed.
  • In this method, the sleep data of the subject to be processed is determined, sleep features are extracted according to the preset reference core sleep period and the sleep data, and the sleep quality of the subject is evaluated according to those features. Because individual factors of the subject are taken into account when extracting the sleep features, the sleep quality of the subject can be evaluated more accurately.
  • FIG. 2 is a schematic diagram according to the second embodiment of the present disclosure.
  • The sleep data of the subject to be processed can be obtained from the vital sign data of the subject, and the embodiment shown in FIG. 2 may include the following steps:
  • Step 201, when the in-bed state of the subject to be processed within a preset time period is not the out-of-bed state, acquire the vital sign data of the subject.
  • In the embodiment of the present disclosure, the in-bed state of the subject to be processed within the preset time period can be determined first; the way of determining this in-bed state is described in the following embodiments.
  • For example, the sleep quality assessment device can monitor and collect the vital sign data of the subject to be processed, such as pulse, respiratory rate, and heart rate.
  • Step 202, perform sleep recognition on the vital sign data and acquire the sleep data of the subject to be processed.
  • Sleep recognition can be performed on the vital sign data to obtain the sleep data of the subject. For example, when the human body is in a sleeping state, the respiratory rate drops; when such a drop is detected, it can be determined that the subject is in a sleeping state, and the duration of the sleeping state can be used as the sleep duration of the subject. For another example, sleep recognition can be performed on the heart rate to determine the deep sleep duration and the number of sleep interruptions of the subject.
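The respiratory-rate cue described above can be sketched as a toy rule: a minute is marked as sleep when the rate falls below a fixed fraction of a waking baseline. The baseline window, the 0.85 fraction, and the function name are all illustrative assumptions, not the recognition method actually claimed.

```python
def sleep_minutes(resp_rate, drop=0.85, baseline_window=10):
    """Toy sleep recognition from per-minute respiratory-rate samples.

    A minute counts as asleep when its rate drops below `drop` times the
    waking baseline (mean of the first `baseline_window` minutes).
    """
    window = resp_rate[:baseline_window]
    baseline = sum(window) / len(window)
    return [i for i, rate in enumerate(resp_rate) if rate < drop * baseline]
```

For a subject breathing 16 times per minute while awake and 12 while asleep, the rule flags exactly the later minutes, and their count gives the sleep duration.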
  • Step 203 extracting sleep features according to the preset reference core sleep period and sleep data of the object to be processed.
  • Step 204 Evaluate the sleep quality of the subject to be processed according to the sleep characteristics.
  • steps 203-204 may be implemented in any one of the embodiments of the present disclosure, which is not limited in the embodiments of the present disclosure, and will not be repeated here.
  • the sleep data of the subject to be processed can be accurately obtained.
  • FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure.
  • When the reference core sleep period includes both the individual core sleep period of the subject to be processed within the preset time period and the group core sleep period within the area to which the subject belongs within the preset time period, the first sleep feature of the subject within the individual core sleep period can be determined according to the individual core sleep period and the sleep data, the second sleep feature of the subject within the group core sleep period can be determined according to the group core sleep period and the sleep data, and the sleep features are then determined from the two.
  • the embodiment shown in Figure 3 may include the following steps:
  • Step 301 determine sleep data of an object to be processed.
  • Step 302 Determine the first sleep feature of the subject to be processed in the individual core sleep period according to the sleep data and the individual core sleep period.
  • The first sleep feature may include at least one of the following: the sleep duration of the subject within the individual core sleep period; the ratio of that sleep duration to the subject's total sleep duration; the deep sleep duration of the subject within the individual core sleep period; the number of deep sleeps within the individual core sleep period; the ratio of deep sleep duration to light sleep duration within the individual core sleep period; the awake duration within the individual core sleep period; the number of awakenings within the individual core sleep period; and the ratio of awake duration to sleep duration within the individual core sleep period.
  • the sleep duration of the object to be processed within the individual core sleep period may be the overlapping duration of the sleep period of the object to be processed and the individual core sleep period in the sleep data.
  • For example, if the sleep data shows that the subject falls asleep at 23:00 and wakes up at 8:00 the next day, the sleep period of the subject is 23:00 to 8:00; with an individual core sleep period of 22:00 to 06:00, the overlapping duration of the sleep period and the individual core sleep period is the 7 hours from 23:00 to 06:00. The ratio of the sleep duration to the total sleep duration is the proportion that the subject's sleep within the individual core sleep period takes of the total sleep: for example, if the sleep duration within the individual core sleep period is 23:00 to 06:00 and the total sleep duration is 23:00 to 07:00, the ratio is 7/8. The deep sleep duration within the individual core sleep period is the total time the subject spends in the deep sleep state within that period; the number of deep sleeps within the individual core sleep period is the total number of times the subject enters the deep sleep state within that period; the ratio of deep sleep duration to light sleep duration is computed analogously within the individual core sleep period.
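The overlap computations in the example above can be written directly. To keep intervals that cross midnight monotonic, hours after midnight are expressed as hour + 24 (06:00 the next day becomes 30); the helper name `overlap_hours` is illustrative.

```python
def overlap_hours(a_start, a_end, b_start, b_end):
    """Length of the overlap of two intervals on an absolute hour axis."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

# Sleep period 23:00-08:00 vs. individual core sleep period 22:00-06:00:
core_sleep_duration = overlap_hours(23, 32, 22, 30)   # 7 hours (23:00-06:00)

# Ratio example: sleep within the core period 23:00-06:00, total sleep 23:00-07:00.
ratio = overlap_hours(23, 31, 22, 30) / (31 - 23)     # 7/8
```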
  • Step 303 according to the sleep data and the group core sleep period, determine the second sleep characteristics of the subject to be processed in the group core sleep period.
  • The second sleep feature may include at least one of the following: the sleep duration of the subject within the group core sleep period; the ratio of that sleep duration to the subject's total sleep duration; the deep sleep duration of the subject within the group core sleep period; the number of deep sleeps within the group core sleep period; the ratio of deep sleep duration to light sleep duration within the group core sleep period; the awake duration within the group core sleep period; the number of awakenings within the group core sleep period; and the ratio of awake duration to sleep duration within the group core sleep period.
  • the sleep duration of the object to be processed in the core sleep period of the group may be the overlapping duration of the sleep period of the object to be processed and the core sleep period of the group in the sleep data.
  • For example, if the sleep data shows that the subject falls asleep at 23:00 and wakes up at 8:00 the next day, the sleep period of the subject is 23:00 to 8:00; with a group core sleep period of 22:00 to 06:00, the overlapping duration of the sleep period and the group core sleep period is the 7 hours from 23:00 to 06:00. The ratio of the sleep duration to the total sleep duration is the proportion that the subject's sleep within the group core sleep period takes of the total sleep: for example, if the sleep duration within the group core sleep period is 23:00 to 06:00 and the total sleep duration is 23:00 to 07:00, the ratio is 7/8. The deep sleep duration within the group core sleep period is the total time the subject spends in the deep sleep state within that period; the number of deep sleeps within the group core sleep period is the total number of times the subject enters the deep sleep state within that period; the ratio of deep sleep duration to light sleep duration is computed analogously.
  • Step 304 Determine the sleep feature according to the first sleep feature and the second sleep feature.
  • The first sleep feature and the second sleep feature may be spliced (concatenated) to determine the sleep feature.
  • For example, the sleep features include: the sleep duration of the subject within the individual core sleep period; the ratio of that sleep duration to the subject's total sleep duration; the deep sleep duration within the individual core sleep period; the number of deep sleeps within the individual core sleep period; the ratio of deep sleep duration to light sleep duration within the individual core sleep period; the awake duration within the individual core sleep period; the number of awakenings within the individual core sleep period; the ratio of awake duration to sleep duration within the individual core sleep period; the sleep duration of the subject within the group core sleep period; the ratio of that sleep duration to the subject's total sleep duration; the deep sleep duration within the group core sleep period; the number of deep sleeps within the group core sleep period; the ratio of deep sleep duration to light sleep duration within the group core sleep period; the awake duration within the group core sleep period; the number of awakenings within the group core sleep period; and the ratio of awake duration to sleep duration within the group core sleep period.
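The "splicing" in step 304 amounts to concatenating the eight individual-period features and the eight group-period features into one sixteen-dimensional vector. The numeric values below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical 8-dimensional feature vectors (values are placeholders).
first_sleep_feature = np.array([7.0, 7 / 8, 1.5, 3, 0.25, 0.3, 2, 0.05])
second_sleep_feature = np.array([7.0, 7 / 8, 1.4, 3, 0.24, 0.4, 2, 0.06])

# Splicing = concatenation into a single 16-dimensional sleep feature.
sleep_feature = np.concatenate([first_sleep_feature, second_sleep_feature])
```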
  • Step 305 evaluating the sleep quality of the subject to be processed according to the sleep characteristics.
  • Steps 301 and 305 may be implemented in any of the manners in the embodiments of the present disclosure; this is not limited in the embodiments of the present disclosure and will not be repeated here.
  • In this way, the first sleep feature of the subject within the individual core sleep period is determined according to the sleep data and the individual core sleep period; the second sleep feature of the subject within the group core sleep period is determined according to the sleep data and the group core sleep period; and the sleep feature is determined from the first sleep feature and the second sleep feature.
  • FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure.
  • a quality evaluation model may be used to evaluate the sleep quality of the subject to be processed according to the sleep characteristics and attribute information of the subject to be processed.
  • the embodiment shown in Figure 4 may include the following steps:
  • Step 401 determine sleep data of an object to be processed.
  • Step 402 extracting sleep features according to the preset reference core sleep period and sleep data of the object to be processed.
  • Step 403 acquiring a sleep quality assessment model.
  • the performance of the neural network model is more prominent when the input-output relationship is complex and there are enough labeled training samples.
  • However, a recurrent neural network is a black-box model, and it is impossible to explain why the pre-trained network makes a specific decision; a tree model, by contrast, has good interpretability. Therefore, the present disclosure uses the pre-trained neural network as a teacher model to train the tree model.
  • The training data includes a preset number of sample sleep features. According to the pre-trained neural network model and the sample sleep features, the sample sleep discomfort symptoms corresponding to the sample sleep features are determined; the initial tree model is then trained with the sample sleep features and the corresponding sample sleep discomfort symptoms to obtain a trained tree model, and the trained tree model is used as the sleep quality assessment model.
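The teacher/student setup described here can be sketched end to end: a stand-in teacher scores a discomfort symptom from the features, and an interpretable student (a one-split regression tree, the simplest possible "tree model") is fitted to reproduce the teacher's outputs. The teacher function, the stump student, and all data below are illustrative assumptions; the patent's teacher is a pre-trained recurrent network.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))            # hypothetical sample sleep features

def teacher_predict(features):
    """Stand-in for the pre-trained neural-network teacher (illustrative)."""
    return (features[:, 0] > 0).astype(float)

soft_labels = teacher_predict(X)         # sample sleep discomfort symptoms

def fit_stump(X, y):
    """Fit a one-split regression tree to the teacher's outputs by
    minimizing squared error over a few candidate thresholds."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, left.mean(), right.mean())
    _, j, t, lo, hi = best
    return lambda x: lo if x[j] <= t else hi

student = fit_stump(X, soft_labels)      # the distilled, interpretable model
```

A production version would use a full decision tree as the student; the point is only that the tree is trained on the teacher's outputs rather than directly on the questionnaire labels.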
  • Specifically, the sample sleep features and attribute information can be used as the input of the neural network model, and the model is trained so that its output matches the sleep discomfort symptoms obtained by questionnaire survey; the output of the pre-trained neural network is then used as the sample sleep discomfort symptoms corresponding to the sample sleep features. The sleep discomfort symptoms obtained by the questionnaire survey may include at least one of the following: waking easily, waking early, insomnia, dreaminess, tiredness after waking, getting up to use the bathroom at night, difficulty falling asleep at night, headache and dizziness after waking, tiredness and lethargy during the day, difficulty falling asleep again after waking, difficulty falling asleep again after waking early, frequent snoring, and being awakened by snoring during sleep.
  • the attribute information may include: at least one of gender, age, height and weight.
  • the sleep discomfort symptoms output by the neural network (recurrent neural network) model and the sleep discomfort symptoms obtained by the questionnaire survey can be used to construct a loss function, which can be specifically expressed as the following formula:
  • Y = {y_i}, Z = {z_i};
  • y_i and z_i represent the sleep discomfort symptoms obtained by the questionnaire survey and the sleep discomfort symptoms output by the neural network, respectively.
  • sample sleep characteristics and attribute information are used as the input of the tree model, and the tree model is trained so that its output matches the sample sleep discomfort symptoms that the pre-trained neural network model outputs for those sample sleep characteristics.
  • Loss = L2(Y, Z) + (1 − Corr(Y, Z))²;
  • Corr(Y, Z) = Σ(y_i − μ_y)(z_i − μ_z)/(σ_y·σ_z);
  • μ_y and μ_z are the means of Y and Z, respectively, and σ_y and σ_z are the standard deviations of Y and Z, respectively.
  • the trained tree model is used as the sleep quality assessment model.
  • Step 404 input the sleep characteristics and attribute information into the sleep quality assessment model to obtain sleep discomfort symptoms.
  • sleep characteristics and attribute information can be input into the sleep quality assessment model, and the sleep quality assessment model can output sleep discomfort symptoms.
  • Step 405 determining sleep discomfort symptoms as the sleep quality evaluation result of the object to be processed.
  • sleep discomfort symptoms are used as the sleep quality evaluation results of the subject to be processed.
  • steps 401-402 may be implemented in any of the embodiments of the present disclosure, which is not limited in the embodiments of the present disclosure, and will not be repeated here.
  • the sleep quality of the object to be processed can be evaluated by using the sleep quality assessment model, so that it can be evaluated more accurately.
  • the sleep quality evaluation method of the embodiment of the present disclosure determines the sleep data of the object to be processed, extracts sleep characteristics according to the preset reference core sleep period and the sleep data of the object to be processed, and evaluates the sleep quality of the object to be processed according to those sleep characteristics. Because the sleep characteristics are extracted with reference to the object's own core sleep period, individual factors of the object to be processed are taken into account when extracting the sleep characteristics, and the sleep quality of the object to be processed can be assessed more accurately.
  • the present disclosure also proposes a method for monitoring bed occupancy based on wearable devices.
  • FIG. 6 is a schematic diagram according to the fifth embodiment of the present disclosure.
  • the bed occupancy monitoring method based on the wearable device of the embodiment of the present disclosure can be performed by the bed occupancy monitoring apparatus based on the wearable device of the embodiment of the present disclosure, which can be configured in an electronic device.
  • the electronic device may be a mobile terminal, for example, a mobile phone, a tablet computer, a personal digital assistant, and other hardware devices with various operating systems.
  • the bed state monitoring method based on the wearable device may include the following steps:
  • Step 601 acquiring an acceleration signal output by a wearable device within a preset time period.
  • the present disclosure utilizes a wearable device that is portable and convenient to use in various daily scenarios to monitor the user's bed state.
  • the sensors in the wearable device can intuitively reflect the user's behavior.
  • the acceleration signal output by the sensor can reflect the movement of the user's limb where the wearable device is located.
  • the acceleration signal may be output by the sensor in the wearable device, which may be a single-axis acceleration sensor, a biaxial acceleration sensor or a three-axis acceleration sensor, which is not limited in the present disclosure.
  • the present disclosure can choose a three-axis acceleration sensor, that is, collect acceleration signals on the three axes of the spatial coordinate system, so that the results determined based on the acceleration signals are more reliable.
  • the following embodiments of the present disclosure are described by taking the acceleration signal as an example of a three-axis acceleration signal.
  • the preset time period may be a preset time period of any length.
  • the acceleration signal within a preset time period can be acquired. That is, the acceleration signal may be a sequence including a series of acceleration values.
  • Step 602 according to the acceleration signal, determine the motion characteristics of the object to be processed within a preset time period.
  • the acceleration signal may be analyzed first to determine the motion characteristics of the object to be processed within the preset time period corresponding to the acceleration signal. For example, if each acceleration value in the acceleration signal is less than a threshold, it can be determined that the motion characteristic of the object to be processed within the preset time period is: no motion; or, if each acceleration value in the acceleration signal is greater than the threshold, it may be determined that the motion characteristic of the object to be processed within the preset time period is: motion, etc., which is not limited in the present disclosure.
  • alternatively, the preset time period can be divided into multiple time windows based on a specified time length; then, according to the acceleration at each moment in each time window, the average absolute deviation corresponding to each time window is determined, and the motion characteristics within the preset time period are determined based on the average absolute deviations of the time windows. That is, the above step 602 may include:
  • determining the type label corresponding to each time window, wherein the type label is used to represent the activity state of the object to be processed in the corresponding time window;
  • according to the type labels, the motion characteristics of the object to be processed within the preset time period are determined.
  • the specified time length can be preset, or can be determined by the wearable device according to the current time. For example, when the user has just started sleeping, or in the early morning, there may be frequent movements such as turning over, while during deep sleep the probability of movement is low; therefore, the corresponding specified time length can be set longer in the middle of the night and shorter in the early morning or when the user has just started sleeping.
  • the device when determining the average absolute deviation corresponding to each time window, the device needs to combine the three-axis acceleration measurement values obtained by the sensor to calculate the resultant acceleration.
  • the three-axis acceleration measurement values corresponding to each moment of each time window are respectively denoted acc_x, acc_y and acc_z.
  • based on acc_x, acc_y and acc_z, the device can calculate the resultant acceleration gacc at each moment of the time window by using the following formula:
  • gacc = √(acc_x² + acc_y² + acc_z²);
  • the device can also calculate a resultant acceleration from the three-axis acceleration measurement values of the current time window as a whole, and this resultant acceleration can be used as the window acceleration of the current time window.
  • the device can calculate the mean mean_gacc of the resultant acceleration of each time window according to the number n of moments contained in the time window and the resultant acceleration at each moment in the time window, the formula being as follows:
  • mean_gacc = (1/n)·Σ gacc_i, where gacc_i is the resultant acceleration corresponding to the i-th moment, and i is a positive integer.
  • the device can then calculate the mean absolute deviation MAD corresponding to each time window according to the mean mean_gacc of the resultant acceleration in the time window and the resultant acceleration gacc_i at each moment, the formula being as follows:
  • MAD = (1/n)·Σ |gacc_i − mean_gacc|;
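The per-window pipeline above (per-moment resultant acceleration, its mean over the window, then the mean absolute deviation MAD) can be sketched as follows; the function name and the sample layout, a list of (acc_x, acc_y, acc_z) tuples, are illustrative assumptions.

```python
import math

def window_mad(acc_xyz):
    # Resultant acceleration gacc at each moment of the window.
    gacc = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc_xyz]
    # Mean of the resultant acceleration over the n moments of the window.
    mean_gacc = sum(gacc) / len(gacc)
    # Mean absolute deviation MAD of the window.
    return sum(abs(g - mean_gacc) for g in gacc) / len(gacc)

# A perfectly still window (constant 1 g on the z-axis) has zero deviation.
print(window_mad([(0.0, 0.0, 1.0)] * 4))  # 0.0
```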
  • the device may determine the type label corresponding to each time window according to the size relationship between the average absolute deviation corresponding to each time window and the activity threshold.
  • the activity threshold may be preset in the wearable device, or may be automatically generated by the wearable device according to the historical motion information of the object to be processed, which is not limited in the present disclosure.
  • the type label in the present disclosure may be "low activity level”, “medium activity level”, “high activity level”, etc., which is not limited in the present disclosure.
  • the number of activity thresholds can be determined according to the number of type labels. For example, if there are two type labels for the time windows, "low activity" and "high activity", the device can set one activity threshold. According to the relationship between the average absolute deviation of each time window and the activity threshold, the device can mark a time window whose average absolute deviation is lower than or equal to the activity threshold as "low activity", and mark a time window whose average absolute deviation is higher than the activity threshold as "high activity", which is not limited in the present disclosure.
  • if there are three type labels, the device can set two activity thresholds, where the smaller activity threshold is recorded as A and the larger activity threshold is recorded as B.
  • a time window whose average absolute deviation is lower than or equal to activity threshold A can be marked as "low activity", a time window whose average absolute deviation is higher than activity threshold B can be marked as "high activity", and a time window whose average absolute deviation is higher than activity threshold A and lower than or equal to activity threshold B can be marked as "medium activity", which is not limited in this disclosure.
  • different feature values may be used to represent the different type labels. For example, the feature value corresponding to "low activity" is 0, the feature value corresponding to "medium activity" is 1, and the feature value corresponding to "high activity" is 2.
  • the feature values corresponding to the time windows can then be fused, for example by arrangement in order or by weighted summation, to determine the motion characteristics of the preset time period.
  • for example, if the preset time period is divided into 5 time windows and the feature values corresponding to the 5 time windows are 0, 0, 1, 0, 0, then the motion feature corresponding to the preset time period can be the feature vector formed by arranging these feature values in order, namely [0, 0, 1, 0, 0].
  • alternatively, weighted summation may be performed on the multiple feature values to determine the motion feature corresponding to the preset time period. The present disclosure does not limit this.
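The labeling and fusion steps above can be sketched as follows, assuming two illustrative activity thresholds A < B (the threshold values are made up for the example) and the 0/1/2 feature-value encoding:

```python
def label_windows(mads, thr_a, thr_b):
    # Map each window's MAD to a feature value: 0 = low activity
    # (MAD <= A), 2 = high activity (MAD > B), 1 = medium otherwise.
    # thr_a and thr_b are illustrative, not values from the disclosure.
    features = []
    for mad in mads:
        if mad <= thr_a:
            features.append(0)
        elif mad > thr_b:
            features.append(2)
        else:
            features.append(1)
    return features

# Five windows arranged in order form the motion feature vector.
print(label_windows([0.01, 0.02, 0.08, 0.01, 0.02], thr_a=0.05, thr_b=0.15))
# [0, 0, 1, 0, 0]
```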
  • Step 603 according to the acceleration signal, determine the posture characteristics of the object to be processed within a preset time period.
  • the acceleration signal may be analyzed first to determine the posture characteristics of the object to be processed within the preset time period corresponding to the acceleration signal. For example, if each acceleration value in the acceleration signal is less than a threshold, it can be determined that the posture characteristic of the object to be processed within the preset time period is: lying posture; or, if each acceleration value in the acceleration signal is greater than the threshold, it can be determined that the posture characteristic of the object to be processed within the preset time period is: sitting posture or standing posture, etc., which is not limited in the present disclosure.
  • the preset time period can also be divided into multiple time windows based on the specified time length, and then the window acceleration corresponding to each time window can be determined , and then according to the window acceleration corresponding to each time window, determine the posture characteristics in the preset time period.
  • the above step 603 may include: dividing the preset time period into multiple time windows based on the specified time length; determining the window acceleration corresponding to each time window according to the acceleration at each moment in the time window; when the window acceleration corresponding to any time window is within a specified range, determining the acceleration vector corresponding to that time window; determining the distance value between each acceleration vector and a specified spherical area; and, according to the multiple distance values in the preset time period, determining the posture characteristics of the object to be processed within the preset time period.
  • the device can determine the acceleration vector corresponding to any time window.
  • the specified range in the present disclosure may be 1g ⁇ 0.1g, 1g ⁇ 0.2g, 1g ⁇ 0.5g, etc., which is not limited.
  • the device can first obtain the window acceleration of the time window, and then normalize the three-axis acceleration based on the window acceleration, so as to obtain the unit vectors of the current window in the three axes.
  • the measured acceleration values of the three axes are acc_x, acc_y and acc_z, and each is normalized based on the window acceleration gacc, giving the three components u_x, u_y and u_z of the acceleration vector on the three-dimensional Cartesian coordinate system, as follows:
  • u_x = acc_x/gacc, u_y = acc_y/gacc, u_z = acc_z/gacc;
  • the angle u_long between the acceleration vector and the x-axis and the angle u_la between the acceleration vector and the z-axis are then obtained from u_x, u_y and u_z, wherein, in this disclosure, u_long is used as the longitude and u_la is used as the latitude in the calculation.
  • the device can determine the spherical distance Dis between the vector of the current time window and a marked area point according to the spherical distance formula, as follows:
  • Dis = acos(cos(u_la)·cos(i_la)·cos(u_long − i_long) + sin(u_la)·sin(i_la));
  • the longitude i_long is the angle between the x-axis and the vector formed by connecting the marked area point with the origin of the spherical coordinate system, and the latitude i_la is the angle between that vector and the z-axis; likewise, the longitude u_long is the angle between the acceleration vector and the x-axis, and the latitude u_la is the angle between the acceleration vector and the z-axis.
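A rough sketch of the distance computation, under the interpretive assumption that the latitude u_la is measured as the elevation above the x-y plane so that the standard great-circle formula above applies (the disclosure phrases it as the angle with the z-axis); angles are in radians, and the marked region point (i_long, i_la) is illustrative.

```python
import math

def posture_distance(acc_x, acc_y, acc_z, i_long, i_la):
    # Normalize the three-axis measurements by the resultant acceleration.
    gacc = math.sqrt(acc_x ** 2 + acc_y ** 2 + acc_z ** 2)
    u_x, u_y, u_z = acc_x / gacc, acc_y / gacc, acc_z / gacc
    u_long = math.atan2(u_y, u_x)  # longitude: angle with the x-axis
    u_la = math.asin(u_z)          # latitude: elevation above the x-y plane
    # Spherical distance to the marked region point (i_long, i_la);
    # the argument is clamped to [-1, 1] to guard against float round-off.
    arg = (math.cos(u_la) * math.cos(i_la) * math.cos(u_long - i_long)
           + math.sin(u_la) * math.sin(i_la))
    return math.acos(max(-1.0, min(1.0, arg)))

# Acceleration along +z coincides with a region point at the pole.
print(posture_distance(0.0, 0.0, 1.0, i_long=0.0, i_la=math.pi / 2))  # ~0.0
```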
  • the area point is any point in the designated spherical area, or a designated point, which is not limited in the present disclosure.
  • for example, if the preset time period contains 5 time windows, the device can directly sum the 5 distance values, or perform a weighted sum according to weights, and then use the final summed value as the posture feature, which is not limited in the present disclosure.
  • alternatively, the window accelerations corresponding to the five time windows may be directly vector-added to obtain a sum vector, and the present disclosure may use the spherical distance value between the sum vector and any point in the designated spherical area as the posture feature, which is not limited in the present disclosure.
  • Step 604 according to the posture feature and the motion feature, determine the bed occupancy state of the subject to be processed within a preset time period.
  • the device can determine the bed state of the object to be processed within the preset time period according to the user's posture characteristics and motion characteristics, respectively; the state can be "out of bed", "suspected out of bed", "not out of bed", etc., which is not limited in the present disclosure. Since the fused features include both posture features and motion features, the device can determine the bed state of the object to be processed within the preset time period from two angles, so that sleep time can be computed more accurately; this not only expands the use of wearable devices, but also makes the determined bed state more accurate and reliable.
  • the wearable device first acquires the acceleration signal output by the sensor within the preset time period, then determines the motion characteristics of the object to be processed within the preset time period according to the acceleration signal, determines the posture characteristics of the object to be processed within the preset time period according to the acceleration signal, and finally determines the bed state of the object to be processed within the preset time period according to the posture characteristics and the motion characteristics.
  • the bed occupancy state of the subject to be processed is determined by fusing the motion features and posture features, which improves the accuracy and reliability of the monitoring results.
  • FIG. 7 is a schematic diagram according to a sixth embodiment of the present disclosure.
  • the bed state monitoring method based on the wearable device may include the following steps:
  • Step 701 acquiring an acceleration signal output by a wearable device within a preset time period.
  • Step 702 according to the acceleration signal, determine the motion characteristics of the object to be processed within a preset time period.
  • when determining the motion characteristics of the object to be processed within the preset time period according to the acceleration signal, the device needs to divide the preset time period into multiple time windows based on the specified time length and determine the type label corresponding to each time window; for the specific implementation process, refer to step 602 above.
  • the device can determine the number of windows of each type within the preset time period according to the type labels of the time windows in the preset time period, and then determine the motion characteristics of the preset time period based on the number of windows of each type.
  • the device may respectively record the numbers of windows corresponding to the three types of labels as count_low, count_mid and count_high.
  • the numbers of labels of the various types corresponding to the multiple time windows included in the preset time period can be used to characterize the motion characteristics of the preset time period, for example [count_low, count_mid, count_high], which is not limited in this disclosure.
  • the device may also determine a time window type sequence within the preset time period according to the type labels of each time window within the preset time period.
  • for example, the device can determine the type labels of the 5 time windows one by one to obtain the time window type sequence within the preset time period. If the type labels of the five time windows are, in chronological order, "low activity", "low activity", "low activity", "high activity" and "high activity", and the "low activity" type label is marked as M and the "high activity" type label as N, then the time window type sequence can be "M, M, M, N, N".
  • from this sequence, the device can determine the number of windows of each type within the preset time period; for example, from "M, M, M, N, N" the device can learn that there are 3 time windows of the "low activity" type and 2 time windows of the "high activity" type, which is not limited in this disclosure.
  • alternatively, the device can determine a feature vector, such as [1, 1, 1, 2, 2], from the time window type sequence, and use the feature vector as the motion feature, which is not limited in the present disclosure.
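The window-count feature [count_low, count_mid, count_high] can be derived from the per-window type labels as follows; the label strings are placeholders for the three types of tags:

```python
def count_features(labels):
    # Number of windows carrying each type of label in the preset time period.
    return [labels.count("low"), labels.count("mid"), labels.count("high")]

# "M, M, M, N, N" from the text: three low-activity and two high-activity windows.
print(count_features(["low", "low", "low", "high", "high"]))  # [3, 0, 2]
```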
  • after the device determines the type labels corresponding to the time windows of the object to be processed according to the acceleration signal, it can determine the activity change points within the preset time period through the type labels corresponding to the time windows.
  • the activity change points can be used to determine time-interval motion features, and accordingly, the present disclosure can also update the motion characteristics based on them.
  • step 702 may also include:
  • according to the time interval between each time window and the preceding activity change point, the motion characteristics of the object to be processed within the preset time period are updated.
  • the activity change point can be the intermediate time point at which the time windows change from one type of label to another. For example, if a preset time period is divided into 5 time windows, the type labels corresponding to the first three time windows are all "low activity", and the type labels corresponding to the last two time windows are all "medium activity", then the device can take the time point between the "low activity" time windows and the "medium activity" time windows as an activity change point.
  • further, the time point at which the time windows change from "low activity" to "medium activity" can be taken as a "medium activity change point", and the time point at which "medium activity" changes to "high activity" as a "high activity change point", which is not limited in this disclosure.
  • the device can determine the time interval between each time window and the preceding activity change point, such as the time interval CP_high from the current window to the last "high activity change point", the time interval CP_mid from the last "medium activity change point", or the time interval from the last adjacent activity change point of any type, which is not limited in this disclosure. It can be understood that the device can use these time intervals as motion features for determining the bed state of the object to be processed.
  • the device may then update the motion characteristics of the object to be processed within the preset time period according to the time interval corresponding to each time window. It can be understood that, after obtaining the time interval between each time window of the current preset time period and the adjacent preceding activity change point, the present disclosure also takes this "time interval" as a motion feature, so the device may re-determine or supplement the motion characteristics of the object to be processed within the preset time period, which is not limited in the present disclosure.
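One plausible reading of the time-interval feature (windows elapsed since the label last changed to a given type, i.e., since the preceding activity change point of that type) can be sketched as follows, using window indices as the time unit; this is an interpretive sketch, not the disclosure's exact definition.

```python
def intervals_since_change(labels, target):
    # For each window, the number of windows elapsed since the label last
    # changed *to* `target` (an "activity change point" of that type);
    # None until the first such change has occurred.
    out, last = [], None
    for i, lab in enumerate(labels):
        if i > 0 and lab == target and labels[i - 1] != target:
            last = i
        out.append(None if last is None else i - last)
    return out

# The change to "mid" activity occurs at window index 3.
print(intervals_since_change(["low", "low", "low", "mid", "mid"], "mid"))
# [None, None, None, 0, 1]
```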
  • Step 703 according to the acceleration signal, determine the posture characteristics of the object to be processed within a preset time period.
  • if the window acceleration corresponding to each time window is within the specified range, the present disclosure can determine the posture feature according to the specific implementation of step 603 above; if the window acceleration corresponding to any time window is not within the specified range, the device can determine the posture characteristics of the object to be processed within the preset time period according to the window accelerations corresponding to the remaining time windows in the preset time period.
  • in this case, the device may determine the window accelerations corresponding to the remaining time windows. For example, if there are currently 5 time windows and the window acceleration of the second time window is determined to exceed the specified range, the device can use the remaining 4 time windows: the window accelerations of those 4 time windows are vector-summed to obtain a sum vector, and further, referring to step 603 above, the spherical distance value between the sum vector and any point in the designated spherical area may be calculated as the posture feature, which is not limited in the present disclosure.
  • alternatively, the present disclosure may directly sum, or weight and sum, the spherical distance values between the window accelerations of the remaining four time windows and any point in the designated spherical area, and the device may then use the final summed value as the posture feature, which is not limited in the present disclosure.
  • Step 704 according to the posture characteristics, determine the first bed state of the object to be processed within the preset time period.
  • the device may compare the feature value corresponding to the posture feature in the preset time period with the preset threshold value through a threshold value comparison method.
  • the preset threshold may be one or more, which is not limited. If there is currently only one preset threshold, then if the current feature value is higher than the threshold, the first bed state of the object to be processed within the preset time period can be determined as the "out-of-bed state"; if the current feature value is lower than the threshold, the first bed state of the object to be processed within the preset time period may be determined as the "in-bed state", which is not limited in the present disclosure.
  • Step 705 according to the motion characteristics, determine the second bed state of the object to be processed within the preset time period.
  • the device may input each motion feature into a pre-trained decision tree model to obtain an output result of the second bed state.
  • the motion features can be the number of time windows of each type of label within the preset time period, the time interval between each time window in the preset time period and the adjacent preceding activity change point, etc., which is not limited.
  • template matching can also be used: for example, a template library is pre-established for the bed states, wherein the template library contains feature vectors of various samples; the device inputs the motion feature vector of the object to be processed, obtains its degree of matching with the feature vectors of the various samples in the template library, and thereby determines whether the second bed state is "out of bed", "suspected out of bed" or "not out of bed", which is not limited in the present disclosure.
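A minimal nearest-neighbour sketch of the template-matching idea: the template library maps each bed state to sample feature vectors, and the label of the closest sample is taken as the second bed state. The matching criterion (squared Euclidean distance) is an assumption; the disclosure does not fix one.

```python
def match_template(feature, templates):
    # templates: {bed-state label: [sample feature vectors]}.
    # Returns the label of the closest sample by squared Euclidean distance.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_label, best_d = None, float("inf")
    for label, samples in templates.items():
        for s in samples:
            d = dist2(feature, s)
            if d < best_d:
                best_label, best_d = label, d
    return best_label

templates = {
    "not out of bed": [[0, 0, 0, 0, 0]],
    "out of bed": [[2, 2, 2, 2, 2]],
}
print(match_template([0, 0, 1, 0, 0], templates))  # not out of bed
```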
  • Step 706 Determine the bed occupancy state of the object to be processed within the preset time period according to the first bed occupancy state and the second bed occupancy state within the preset time period.
  • if the first bed state and the second bed state are the same, the device may determine that the bed state of the object to be processed within the preset time period is the first bed state. For example, if both are "out of bed", the device can use the first bed state "out of bed" as the bed state of the object to be processed within the preset time period; and, since the first bed state is the same as the second bed state, the device can equally use the second bed state as the bed state of the object to be processed within the preset time period, which is not limited.
  • if the first bed state is different from the second bed state, the device can determine that the bed state of the object to be processed within the preset time period is the not-out-of-bed state.
  • the out-of-bed state may be "out of bed" or "suspected out of bed", which is not limited.
  • alternatively, when the first bed state and the second bed state differ, the device may also determine that the bed state of the object to be processed within the preset time period is the out-of-bed state, which is not limited in the present disclosure.
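One plausible fusion rule consistent with the text can be sketched as follows: identical first and second bed states are kept as-is, and conflicting states fall back to the not-out-of-bed state (the text also permits the opposite fallback). The state strings are placeholders.

```python
def fuse_bed_states(first, second):
    # If the posture-based and motion-based states agree, use either;
    # on disagreement, fall back to "not out of bed" (one of the two
    # fallbacks the text allows; the opposite choice is equally valid).
    if first == second:
        return first
    return "not out of bed"

print(fuse_bed_states("out of bed", "out of bed"))      # out of bed
print(fuse_bed_states("out of bed", "not out of bed"))  # not out of bed
```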
  • the wearable device first acquires the acceleration signal output by the sensor within the preset time period, then determines the motion characteristics of the object to be processed within the preset time period according to the acceleration signal and determines the posture characteristics of the object to be processed within the preset time period according to the acceleration signal; it then determines the first bed state of the object to be processed within the preset time period according to the posture characteristics and the second bed state according to the motion characteristics, and finally determines the bed state of the object to be processed within the preset time period according to the first bed state and the second bed state. Therefore, the bed state within the preset time period is determined through the motion features and the posture features respectively, which improves the accuracy and reliability of the monitoring result.
  • Fig. 8 is a schematic diagram according to a seventh embodiment of the present disclosure.
  • the bed state monitoring method based on the wearable device may include the following steps:
  • Step 801 acquiring an acceleration signal output by a wearable device within a preset time period.
  • Step 802 in the case that the duration of the preset time period is greater than the time threshold, divide the preset time period into multiple time segments based on the time threshold.
  • the present disclosure divides the preset time period into multiple time segments based on the time threshold, wherein the multiple time segments may be equally divided or not equally divided, which is not limited in the present disclosure.
  • Step 803 according to the acceleration signal, determine the motion characteristics of the object to be processed in each time segment.
  • the acceleration signal may be analyzed first to determine the motion characteristics of the object to be processed in each time segment corresponding to the acceleration signal. For example, if each acceleration value in the acceleration signal of any time segment is less than a threshold, it can be determined that the motion characteristic of the object to be processed in that time segment is: no motion; or, if each acceleration value in the acceleration signal of any time segment is greater than the threshold, it can be determined that the motion characteristic of the object to be processed in that time segment is: motion, etc., which is not limited in the present disclosure.
  • Step 804 according to the acceleration signal, determine the posture characteristics of the object to be processed in each time segment.
  • the distance value between each window acceleration and the specified spherical area can also be determined according to the window acceleration corresponding to each time segment, and the posture characteristics can then be determined based on the distance values between the window accelerations corresponding to the time segments and the specified spherical area.
  • for example, if the distance between the window acceleration and the specified spherical area is less than or equal to a threshold, it can be determined that the posture characteristic of the object to be processed in that time segment is: lying posture; if the distance between the window acceleration and the specified spherical area is greater than the threshold, it can be determined that the posture characteristic of the object to be processed in that time segment is: sitting posture or standing posture, etc., which is not limited in the present disclosure.
  • Step 805 according to the posture feature corresponding to each time segment, determine the third bed state of the object to be processed in each time segment.
• The device may determine the acceleration at each moment in each time segment to obtain the acceleration vector at each moment, then calculate the distance value between each acceleration vector and the specified spherical region, sum or weighted-sum the distance values over all moments, and use the resulting value as the posture feature.
• The device can compare the feature value corresponding to the posture feature in the preset time period with a preset threshold to determine the third in-bed state. For details, refer to step 704; the present disclosure does not repeat them here.
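• The posture-feature computation of steps 804 and 805 can be sketched as follows. The sphere centre (taken here as the gravity direction a lying wrist might produce), the radius, and the sample vectors are illustrative assumptions rather than values fixed by the present disclosure.

```python
import math

# Assumed spherical region in acceleration space (units of g).
SPHERE_CENTER = (0.0, 0.0, 1.0)  # assumption: gravity direction when lying
SPHERE_RADIUS = 0.2              # assumption: region radius

def distance_to_sphere(acc_vec):
    """Distance from one acceleration vector to the spherical region;
    vectors inside the region contribute zero."""
    return max(0.0, math.dist(acc_vec, SPHERE_CENTER) - SPHERE_RADIUS)

def posture_feature(acc_vectors, weights=None):
    """Sum (or weighted sum) of the per-moment distance values, as in
    step 805."""
    if weights is None:
        weights = [1.0] * len(acc_vectors)
    return sum(w * distance_to_sphere(v) for w, v in zip(weights, acc_vectors))

# A small feature value means the vectors stay near the region
# (lying posture); a large value suggests sitting or standing.
lying = posture_feature([(0.0, 0.05, 0.95), (0.0, 0.0, 1.0)])
upright = posture_feature([(1.0, 0.0, 0.1), (0.9, 0.1, 0.0)])
```

• The threshold comparison of step 704 would then be applied to the resulting feature value to decide the third in-bed state.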
• Step 806: determine, according to the motion feature corresponding to each time segment, the fourth in-bed state of the object to be processed in each time segment.
• For the specific process by which the device determines the in-bed state of the object to be processed according to the motion feature corresponding to each time segment, reference may be made to step 205 above for the preset time period, which is not described in detail here.
• Step 807: determine, according to the third in-bed state and the fourth in-bed state corresponding to each time segment in the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period.
• The present disclosure may include other types of in-bed states. When the time interval between two identical in-bed states is short, the other in-bed states lying between them may also be converted into the same in-bed state as those two.
• In that case, the device can determine that the in-bed states corresponding to the time segments between the i-th time segment and the (i+m)-th time segment are all out-of-bed states, where i and m are both positive integers; that is, the two time segments and each time segment between them are regarded as out-of-bed.
• For example, suppose the preset time period is divided into six time segments with serial numbers 1, 2, 3, 4, 5, and 6. Based on the acceleration information corresponding to each time segment, it is determined that both time segment 4 and time segment 6 are in the out-of-bed state, and the difference between the two segments, 1, is less than the specified value 2. The device can therefore regard time segment 4, time segment 6, and time segment 5 between them as all being in the out-of-bed state, which is not limited in the present disclosure.
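• The six-segment example above can be reproduced with a short sketch. The state labels, the gap limit of 2, and the reading of the "difference" as the number of segments between the two out-of-bed segments are illustrative interpretations of the rule, not wording fixed by the present disclosure.

```python
def merge_out_of_bed(states, max_gap=2):
    """If two out-of-bed segments are separated by fewer than `max_gap`
    other segments, mark every segment between them out-of-bed too."""
    states = list(states)
    out_idx = [i for i, s in enumerate(states) if s == "out"]
    for a, b in zip(out_idx, out_idx[1:]):
        gap = b - a - 1  # number of segments between the two
        if 0 < gap < max_gap:
            for k in range(a + 1, b):
                states[k] = "out"
    return states

# Segments 1..6: segments 4 and 6 are out of bed, only segment 5 lies
# between them, so segment 5 is merged into the out-of-bed state.
print(merge_out_of_bed(["in", "in", "in", "out", "in", "out"]))
# -> ['in', 'in', 'in', 'out', 'out', 'out']
```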
• When the object to be processed has just transitioned from the out-of-bed state to the in-bed state, or from the in-bed state to the out-of-bed state, the segment may be identified as a "suspected out-of-bed state". Since sleep quality is evaluated based on the in-bed state, in order to further improve the accuracy of the sleep quality determined by the wearable device, the present disclosure may uniformly convert any "suspected out-of-bed state" adjacent to an "out-of-bed state" into the "out-of-bed state".
• Specifically, when at least one in-bed state corresponding to the j-th time segment is the out-of-bed state and the in-bed state corresponding to another time segment adjacent to the j-th time segment is the suspected out-of-bed state, the device may determine that the in-bed state corresponding to that adjacent time segment is the out-of-bed state, where j is a positive integer; that is, the adjacent time segments are regarded as "out-of-bed states".
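• The adjacency rule can be sketched as below. Applying the rule repeatedly, so that a chain of suspected segments next to a confirmed out-of-bed segment collapses entirely, is an assumption of this sketch; the disclosure does not state whether a single pass or repeated passes are intended.

```python
def resolve_suspected(states):
    """Re-label any 'suspected' segment adjacent to an 'out' segment as
    'out', repeating until no neighbour changes."""
    states = list(states)
    changed = True
    while changed:
        changed = False
        for j, s in enumerate(states):
            if s != "out":
                continue
            for k in (j - 1, j + 1):  # the two adjacent segments
                if 0 <= k < len(states) and states[k] == "suspected":
                    states[k] = "out"
                    changed = True
    return states

print(resolve_suspected(["in", "suspected", "out", "suspected", "in"]))
# -> ['in', 'out', 'out', 'out', 'in']
```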
• In summary, the preset time period is first divided into multiple time segments based on a time threshold; then, according to the acceleration signal, the motion feature and posture feature of the object to be processed in each time segment are determined; the third in-bed state of the object to be processed in each time segment is determined according to the posture feature corresponding to that segment, and the fourth in-bed state is determined according to the motion feature corresponding to that segment; finally, the in-bed state of the object to be processed in each time segment of the preset time period is determined according to the third in-bed state and the fourth in-bed state corresponding to each segment. By determining the in-bed state of the object to be processed from the motion features and posture features of each time segment, the accuracy and reliability of in-bed state monitoring are improved.
  • the above embodiment of the wearable device-based bed occupancy monitoring method can be executed alone, or can be executed in combination with the embodiment of the sleep quality assessment method.
  • FIG. 9 is a schematic diagram according to an eighth embodiment of the present disclosure.
  • the sleep quality assessment device 900 includes: a determination module 910 , an extraction module 920 and an evaluation module 930 .
• The determination module 910 is configured to determine the sleep data of the object to be processed; the extraction module 920 is configured to extract sleep features according to a preset reference core sleep period of the object to be processed and the sleep data; the evaluation module 930 is configured to evaluate the sleep quality of the object to be processed according to the sleep features.
• The determining module 910 is specifically configured to: acquire the physical sign data of the object to be processed when the object to be processed is not in an out-of-bed state within a preset time period; and perform sleep recognition on the physical sign data to obtain the sleep data of the object to be processed.
• The reference core sleep period includes: the individual core sleep period of the object to be processed within a preset time period, and/or the group core sleep period, within the preset time period, of the region to which the object to be processed belongs.
• The reference core sleep period includes the individual core sleep period and the group core sleep period; the extraction module 920 is specifically configured to: determine, according to the sleep data and the individual core sleep period, the first sleep feature of the object to be processed in the individual core sleep period; determine, according to the sleep data and the group core sleep period, the second sleep feature of the object to be processed in the group core sleep period; and determine the sleep features according to the first sleep feature and the second sleep feature.
  • the sleep quality evaluation apparatus 900 further includes: a first acquiring module.
  • the first acquisition module is configured to acquire the attribute information of the object to be processed;
  • the evaluation module 930 is specifically configured to: perform sleep quality evaluation processing on the object to be processed according to the sleep characteristics and the attribute information.
• The evaluation module 930 is further configured to: obtain a sleep quality evaluation model; input the sleep features and the attribute information into the sleep quality evaluation model to obtain sleep discomfort symptoms; and determine the sleep discomfort symptoms as the sleep quality evaluation result of the object to be processed.
• The sleep quality evaluation model is a tree model. The evaluation module 930 is further configured to: acquire training data, wherein the training data includes a preset number of sample sleep features; determine, by means of a pre-trained neural network model and the preset number of sample sleep features, the sample sleep discomfort symptoms corresponding to the sample sleep features; train an initial tree model using the sample sleep features and the corresponding sample sleep discomfort symptoms to obtain a trained tree model; and use the trained tree model as the sleep quality evaluation model.
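• The training flow above (label the sample sleep features with a neural network model, then fit a tree model to those labels) can be sketched with stand-ins: a simple rule-based `teacher` replaces the neural network, and a one-split decision stump replaces the full tree. The feature values and the teacher rule are invented purely for illustration.

```python
def teacher(feature):
    """Stand-in for the pre-trained neural network that produces the
    sample sleep discomfort symptoms (rule is an assumption)."""
    return "discomfort" if feature < 0.3 else "normal"

def fit_stump(features):
    """Fit a one-split stump ('feature < t -> discomfort') to the
    teacher's pseudo-labels and return the learned threshold t."""
    labels = [teacher(f) for f in features]
    best = None
    for t in sorted(set(features)):
        # count disagreements between the stump and the pseudo-labels
        errs = sum((f < t) != (lbl == "discomfort")
                   for f, lbl in zip(features, labels))
        if best is None or errs < best[1]:
            best = (t, errs)
    return best[0]

threshold = fit_stump([0.1, 0.2, 0.4, 0.6, 0.8])
print(threshold)  # -> 0.4, separating the teacher's two classes
```

• A production system would replace the stump with a full tree learner; the point of the sketch is only the distillation-style flow of teacher labels into a tree model.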
  • the sleep quality assessment apparatus 900 further includes: a second acquiring module, a first determining module, a second determining module and a third determining module.
• The second acquisition module is configured to acquire the acceleration signal output by the wearable device within a preset time period; the first determination module is configured to determine, according to the acceleration signal, the motion feature of the object to be processed within the preset time period; the second determination module is configured to determine, according to the acceleration signal, the posture feature of the object to be processed within the preset time period; and the third determination module is configured to determine, according to the posture feature and the motion feature, the in-bed state of the object to be processed within the preset time period.
  • the first determining module includes: a dividing unit, a first determining unit, a second determining unit, and a third determining unit.
• The division unit is configured to divide the preset time period into a plurality of time windows based on a specified time length; the first determination unit is configured to determine, according to the acceleration at each moment in each time window, the average absolute deviation corresponding to each window; the second determination unit is configured to determine the type label corresponding to each time window according to the magnitude relationship between the average absolute deviation corresponding to that window and an activity threshold, wherein the type label is used to characterize the activity state of the object to be processed within the corresponding time window; and the third determination unit is configured to determine, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period.
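• The window-labelling chain of these units can be sketched as follows: split the period into fixed-length windows, compute each window's average absolute deviation, and label the window against an activity threshold. The window length, threshold, and label names are illustrative assumptions.

```python
def label_windows(acc, window_len=25, threshold=0.02):
    """Label each fixed-length window 'active' or 'rest' according to
    whether its average absolute deviation exceeds the threshold."""
    labels = []
    for start in range(0, len(acc) - window_len + 1, window_len):
        window = acc[start:start + window_len]
        mean = sum(window) / len(window)
        aad = sum(abs(a - mean) for a in window) / len(window)
        labels.append("active" if aad > threshold else "rest")
    return labels

# A still window followed by an alternating high-amplitude window.
samples = [0.0] * 25 + [0.5 if i % 2 == 0 else -0.5 for i in range(25)]
print(label_windows(samples))  # -> ['rest', 'active']
```

• The resulting label sequence (or the count of windows of each type) then forms the motion feature that the third determination unit works from.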
• The first determination module is further configured to: determine, according to the type label corresponding to each time window, the moments corresponding to activity change points within the preset time period; determine the time interval between each time window and the adjacent preceding activity change point; and update, according to the time interval corresponding to each time window, the motion feature of the object to be processed within the preset time period.
• The third determining unit is specifically configured to: determine a time-window type sequence within the preset time period according to the type labels of the time windows within the preset time period; and/or determine, according to the type label of each time window in the preset time period, the number of windows of each type in the preset time period.
• The acceleration signal includes the acceleration at each moment, and the second determination module is specifically configured to: divide the preset time period into multiple time windows based on a specified time length; determine, according to the acceleration at each moment in each time window, the window acceleration corresponding to each time window; when the window acceleration corresponding to any time window is within a specified range, determine the acceleration vector corresponding to that time window; determine the distance value between each acceleration vector and the specified spherical region; and determine, according to the multiple distance values within the preset time period, the posture feature of the object to be processed within the preset time period.
• The second determination module is further configured to: if the window acceleration corresponding to any time window is not within the specified range, determine the posture feature of the object to be processed within the preset time period according to the window accelerations corresponding to the other time windows in the preset time period.
• The third determining module includes: a fourth determining unit, configured to determine, according to the posture feature, the first in-bed state of the object to be processed within the preset time period; a fifth determining unit, configured to determine, according to the motion feature, the second in-bed state of the object to be processed within the preset time period; and a sixth determining unit, configured to determine, when the first in-bed state is the same as the second in-bed state, that the in-bed state of the object to be processed within the preset time period is the first in-bed state.
• The sixth determining unit is further configured to: when the first in-bed state is different from the second in-bed state and either of them is the out-of-bed state, determine that the in-bed state of the object to be processed within the preset time period is the out-of-bed state.
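• The fusion rule of the fourth to sixth determining units can be sketched as follows: agreement keeps the shared state, and disagreement with either result being out-of-bed yields out-of-bed. The fallback for other disagreements, and the state names, are assumptions of the sketch.

```python
def fuse_states(posture_state, motion_state):
    """Fuse the posture-based and motion-based in-bed results."""
    if posture_state == motion_state:
        return posture_state            # agreement: keep that state
    if "out" in (posture_state, motion_state):
        return "out"                    # any out-of-bed vote wins
    return "suspected"                  # assumed fallback; not specified above

print(fuse_states("in", "in"))   # -> 'in'
print(fuse_states("in", "out"))  # -> 'out'
```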
• The third determination module includes: a second division unit, configured to divide the preset time period into multiple time segments based on a time threshold; a seventh determination unit, configured to determine, according to the posture feature corresponding to each time segment, the third in-bed state of the object to be processed in each time segment; an eighth determining unit, configured to determine, according to the motion feature corresponding to each time segment, the fourth in-bed state of the object to be processed in each time segment; and a ninth determining unit, configured to determine, according to the third in-bed state and the fourth in-bed state corresponding to each time segment in the preset time period, the in-bed state of the object to be processed in each time segment within the preset time period.
• The ninth determination unit is specifically configured to: when at least one in-bed state corresponding to the i-th time segment is the out-of-bed state, at least one in-bed state corresponding to the (i+m)-th time segment is the out-of-bed state, and m is less than a specified value, determine that the in-bed states corresponding to the time segments between the i-th time segment and the (i+m)-th time segment are all out-of-bed states, where i and m are both positive integers.
• The ninth determining unit is further configured to: when at least one in-bed state corresponding to the j-th time segment is the out-of-bed state and at least one in-bed state corresponding to another time segment adjacent to the j-th time segment is the suspected out-of-bed state, determine that the in-bed state corresponding to that adjacent time segment is the out-of-bed state, where j is a positive integer.
• The sleep quality assessment device of the embodiments of the present disclosure determines the sleep data of the object to be processed, extracts sleep features according to the preset reference core sleep period of the object to be processed and the sleep data, and evaluates the sleep quality of the object to be processed according to the sleep features. The device can thus assess sleep quality based on sleep features extracted from the preset reference core sleep period and the sleep data of the object to be processed; because individual factors of the object to be processed are taken into account when extracting the sleep features, its sleep quality can be assessed more accurately.
  • FIG. 10 is a schematic diagram according to a ninth embodiment of the present disclosure.
  • the wearable device-based bed occupancy monitoring device 1000 includes: an acquisition module 1010 , a first determination module 1020 , a second determination module 1030 and a third determination module 1040 .
• The obtaining module 1010 is configured to acquire the acceleration signal output by the wearable device within a preset time period; the first determination module 1020 is configured to determine, according to the acceleration signal, the motion feature of the object to be processed within the preset time period; the second determination module 1030 is configured to determine, according to the acceleration signal, the posture feature of the object to be processed within the preset time period; and the third determination module 1040 is configured to determine, according to the posture feature and the motion feature, the in-bed state of the object to be processed within the preset time period.
• The acceleration signal includes the acceleration at each moment, and the first determination module 1020 includes: a division unit, a first determination unit, a second determination unit, and a third determination unit. The division unit is configured to divide the preset time period into a plurality of time windows based on a specified time length; the first determination unit is configured to determine, according to the acceleration at each moment in each time window, the average absolute deviation corresponding to each time window; the second determination unit is configured to determine the type label corresponding to each time window according to the magnitude relationship between the average absolute deviation corresponding to that window and an activity threshold, wherein the type label is used to characterize the activity state of the object to be processed in the corresponding time window; and the third determination unit is configured to determine, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period.
• The first determining module is further configured to: determine, according to the type label corresponding to each time window, the moments corresponding to activity change points within the preset time period; determine the time interval between each time window and the adjacent preceding activity change point; and update, according to the time interval corresponding to each time window, the motion feature of the object to be processed within the preset time period.
• The third determining unit is specifically configured to: determine a time-window type sequence within the preset time period according to the type labels of the time windows within the preset time period; and/or determine, according to the type label of each time window in the preset time period, the number of windows of each type in the preset time period.
• The acceleration signal includes the acceleration at each moment, and the second determination module 1030 is specifically configured to: divide the preset time period into multiple time windows based on a specified time length; determine, according to the acceleration at each moment in each time window, the window acceleration corresponding to each time window; when the window acceleration corresponding to any time window is within a specified range, determine the acceleration vector corresponding to that time window; determine the distance value between each acceleration vector and the specified spherical region; and determine, according to the multiple distance values within the preset time period, the posture feature of the object to be processed within the preset time period.
• The second determining module 1030 is further configured to: if the window acceleration corresponding to any time window is not within the specified range, determine the posture feature of the object to be processed within the preset time period according to the window accelerations corresponding to the remaining time windows in the preset time period.
  • the third determining module includes: a fourth determining unit, a fifth determining unit, and a sixth determining unit.
• The fourth determination unit is configured to determine, according to the posture feature, the first in-bed state of the object to be processed within the preset time period; the fifth determination unit is configured to determine, according to the motion feature, the second in-bed state of the object to be processed within the preset time period; and the sixth determining unit is configured to determine, when the first in-bed state is the same as the second in-bed state, that the in-bed state of the object to be processed within the preset time period is the first in-bed state.
• The sixth determination unit is further configured to: when the first in-bed state is different from the second in-bed state and either of them is the out-of-bed state, determine that the in-bed state of the object to be processed within the preset time period is the out-of-bed state.
  • the third determining module 1040 includes: a second dividing unit, a seventh determining unit, an eighth determining unit, and a ninth determining unit.
• The second division unit is configured to divide the preset time period into multiple time segments based on a time threshold; the seventh determination unit is configured to determine, according to the posture feature corresponding to each time segment, the third in-bed state of the object to be processed in each time segment; the eighth determination unit is configured to determine, according to the motion feature corresponding to each time segment, the fourth in-bed state of the object to be processed in each time segment; and the ninth determination unit is configured to determine, according to the third in-bed state and the fourth in-bed state corresponding to each time segment in the preset time period, the in-bed state of the object to be processed in each time segment within the preset time period.
• The ninth determination unit is specifically configured to: when at least one in-bed state corresponding to the i-th time segment is the out-of-bed state, at least one in-bed state corresponding to the (i+m)-th time segment is the out-of-bed state, and m is less than a specified value, determine that the in-bed states corresponding to the time segments between the i-th time segment and the (i+m)-th time segment are all out-of-bed states, where i and m are both positive integers.
• The ninth determining unit is further configured to: when at least one in-bed state corresponding to the j-th time segment is the out-of-bed state and at least one in-bed state corresponding to another time segment adjacent to the j-th time segment is the suspected out-of-bed state, determine that the in-bed state corresponding to that adjacent time segment is the out-of-bed state, where j is a positive integer.
• The wearable device first obtains the acceleration signal output within a preset time period, then determines, according to the acceleration signal, the motion feature of the object to be processed within the preset time period and the posture feature of the object to be processed within the preset time period, and finally determines, according to the posture feature and the motion feature, the in-bed state of the object to be processed within the preset time period. Because the in-bed state of the object to be processed is determined by fusing the motion feature and the posture feature, the accuracy and reliability of the monitoring results are improved.
• The present disclosure further proposes an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the sleep quality assessment method described in FIGS. 1 to 5 above.
• The present disclosure further proposes a wearable device, including: an acceleration sensor; a wearable accessory; at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the wearable-device-based in-bed state monitoring method described in the embodiments of FIGS. 6 to 8 above.
• The present disclosure further proposes a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the sleep quality assessment method described in the embodiments of FIGS. 1 to 5, or to execute the wearable-device-based in-bed state monitoring method described in the embodiments of FIGS. 6 to 8.
  • the present disclosure proposes a computer program product, including a computer program.
• When the computer program is executed by a processor, the sleep quality assessment method described in the embodiments of FIGS. 1 to 5 is implemented, or the wearable-device-based in-bed state monitoring method described in the embodiments of FIGS. 6 to 8 is implemented.
  • FIG. 11 shows a schematic block diagram of an example electronic device 1100 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
• Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the electronic device includes: one or more processors 1101 , a memory 1102 , and interfaces for connecting various components, including high-speed interfaces and low-speed interfaces.
  • the various components are interconnected using different buses and can be mounted on a common motherboard or otherwise as desired.
  • the processor may process instructions executed within the electronic device, including instructions stored in or on the memory, to display graphical information of a GUI on an external input/output device such as a display device coupled to an interface.
• If desired, multiple processors and/or multiple buses may be used together with multiple memories.
  • multiple electronic devices may be connected, with each device providing some of the necessary operations (eg, as a server array, a set of blade servers, or a multi-processor system).
  • a processor 1101 is taken as an example.
  • the memory 1102 is a non-transitory computer-readable storage medium provided in this application.
• The memory stores instructions executable by at least one processor, so that the at least one processor executes the sleep quality assessment method provided in the present disclosure.
  • the non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to make the computer execute the sleep quality assessment method provided in the present application.
• The memory 1102, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the sleep quality assessment method in the embodiments of the present disclosure (for example, the determination module 910, the extraction module 920, and the evaluation module 930 shown in FIG. 9).
  • the processor 1101 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 1102, that is, implements the sleep quality evaluation method in the above method embodiments.
• The memory 1102 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created through use of the electronic device, and the like.
  • the memory 1102 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 1102 may optionally include memory located remotely relative to the processor 1101, and these remote memories may be connected to the electronic device for training the model through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the electronic device of the sleep quality assessment method may further include: an input device 1103 and an output device 1104 .
  • the processor 1101, the memory 1102, the input device 1103, and the output device 1104 may be connected through a bus or in other ways, and connection through a bus is taken as an example in FIG. 11 .
• The input device 1103 can receive input digital or character information and generate key signal inputs related to user settings and function control of the electronic device for sleep quality assessment; it may be one or more input devices such as a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, mouse buttons, a trackball, or a joystick.
  • the output device 1104 may include a display device, an auxiliary lighting device (eg, LED), a tactile feedback device (eg, a vibration motor), and the like.
  • the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
• Various implementations of the systems and techniques described herein can be implemented in digital electronic circuitry, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
• The terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, magnetic disks, optical disks, memories, or programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • the systems and techniques described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • the systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which a user can interact with embodiments of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises from computer programs running on the respective computers and having a client-server relationship to each other.
  • the server can also be a server of a distributed system, or a server combined with a blockchain.
  • artificial intelligence is a discipline that studies the use of computers to simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, planning, etc.), including both hardware-level technology and software-level technology.
  • Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly cover computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
  • steps may be reordered, added or deleted using the various forms of flow shown above.
  • each step described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed herein can be achieved; no limitation is imposed here.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

一种睡眠质量评估方法、装置、电子设备及存储介质,涉及人工智能、深度学习技术领域,尤其涉及睡眠质量评估方法、装置、电子设备及存储介质。具体实现方案为:确定待处理对象的睡眠数据;根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征;根据睡眠特征,评估待处理对象的睡眠质量。该方法根据待处理对象的参考核心睡眠时段以及待处理对象的睡眠数据提取的睡眠特征对待处理对象进行睡眠质量评估,在提取睡眠特征时考虑了待处理对象的个体因素,可以更准确地评估待处理对象的睡眠质量。

Description

睡眠质量评估方法、在床状态监测方法及其装置
相关申请的交叉引用
本公开要求安徽华米健康科技有限公司于2021年08月26日提交的、发明名称为“基于可穿戴设备的在床状态监测方法、装置及计算机设备”的、中国专利申请号“202110987315.X”的优先权,要求安徽华米健康科技有限公司于2021年09月15日提交的、发明名称为“睡眠质量评估方法、装置、电子设备及存储介质”的、中国专利申请号“202111082791.3”的优先权。
技术领域
本公开涉及人工智能、深度学习技术领域,特别涉及一种睡眠质量评估方法、在床状态监测方法及其装置。
背景技术
睡眠作为一个复杂的生命行为,与人的健康息息相关,睡眠大约占据人类寿命三分之一的时间。然而现代生活节奏普遍加快,生活工作压力不断增加,睡眠缺失愈发常见,严重影响了人们日常生活质量甚至身体健康。因此对睡眠质量进行检测,评估人的睡眠状况,对不良睡眠及时进行引导治疗具有重要意义。
相关技术中,对于睡眠质量的检测,通常使用多导睡眠图(Polysomnography,PSG)评估用户的睡眠质量。但是,多导睡眠图评估采用的睡眠特征大多来自医学领域先验,着重普适性和一般性,缺乏对用户亚群、用户个体的个性化考虑,无法准确地评估用户亚群、用户个体的睡眠质量。比如,对睡眠质量评估时以“00:00前入睡”作为其中一项睡眠指标,该睡眠特征无法适用采用单时区的用户,并且也无法应对因工作性质等原因导致用户个人日常作息常年与大众相反的情况。
发明内容
本公开提供了一种用于睡眠质量评估方法、在床状态监测方法及其装置。
根据本公开的一方面,提供了一种睡眠质量评估方法,包括:确定待处理对象的睡眠数据;根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征;根据睡眠特征,评估所述待处理对象的睡眠质量。
在该技术方案中,通过确定待处理对象的睡眠数据;根据待处理对象的参考核心睡眠时段以及睡眠数据,提取睡眠特征;根据睡眠特征,对待处理对象进行睡眠质量评估处理,该方法根据待处理对象的参考核心睡眠时段以及待处理对象的睡眠数据提取的睡眠特征对待处理对象进行睡眠质量评估,在提取睡眠特征时考虑了待处理对象的个体因素,可以更准确地评估待处理对象的睡眠质量。
根据本公开的另一方面,提供了一种基于可穿戴设备的在床状态监测方法,包括:获取所述可穿戴设备在预设时间段内输出的加速度信号;根据所述加速度信号,确定待处理对象在所述预设时间段内的运动特征;根据所述加速度信号,确定所述待处理对象在所述预设时间段内的姿态特征;根据所述姿态特征及所述运动特征,确定所述待处理对象在所述预设时间段内的在床状态。
根据本公开的另一方面,提供了一种睡眠质量评估装置,包括:确定模块,用于确定待处理对象的睡眠数据;提取模块,用于根据所述待处理对象的预设的参考核心睡眠时段以及所述睡眠数据,提取睡眠特征;评估模块,用于根据所述睡眠特征,评估所述待处理对象的睡眠质量。
根据本公开的另一方面,提供了一种基于可穿戴设备的在床状态监测装置,包括:第一获取模块,用于获取所述可穿戴设备中的传感器在预设时间段内输出的加速度信号;第一确定模块,用于根据所述加速度信号,确定待处理对象在所述预设时间段内的运动特征;第二确定模块,用于根据所述加速度信号,确定所述待处理对象在所述预设时间段内的姿态特征;第三确定模块,用于根据所述姿态特征及所述运动特征,确定所述待处理对象在所述预设时间段内的在床状态。
根据本公开的另一方面,提供了一种电子设备,包括:至少一个处理器;以及与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述 至少一个处理器执行,以使所述至少一个处理器能够执行本公开第一方面实施例所述的方法。
根据本公开的另一方面,提供了一种可穿戴设备,包括:加速度传感器;可穿戴配件;至少一个处理器;以及与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行本公开第二方面实施例所述的方法。
根据本公开的另一方面,提供了一种存储有计算机指令的非瞬时计算机可读存储介质,其中,所述计算机指令用于使所述计算机执行本公开第一方面实施例所述的方法,或者,执行本公开第二方面实施例所述的方法。
根据本公开的另一方面,提供了一种计算机程序产品,包括计算机程序,所述计算机程序在被处理器执行时实现本公开第一方面实施例所述的方法,或者,实现本公开第二方面实施例所述的方法。
应当理解,本部分所描述的内容并非旨在标识本公开的实施例的关键或重要特征,也不用于限制本公开的范围。本公开的其它特征将通过以下的说明书而变得容易理解。
附图说明
附图用于更好地理解本方案,不构成对本公开的限定。其中:
图1是根据本公开第一实施例的示意图;
图2是根据本公开第二实施例的示意图;
图3是根据本公开第三实施例的示意图;
图4是根据本公开第四实施例的示意图;
图5是根据本公开实施例的树模型训练示意图;
图6是根据本公开第五实施例的示意图;
图7是根据本公开第六实施例的示意图;
图8是根据本公开第七实施例的示意图;
图9是根据本公开第八实施例的示意图;
图10是根据本公开第九实施例的示意图;
图11是用来实现本公开实施例的睡眠质量评估的电子设备的框图。
具体实施方式
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。
睡眠作为一个复杂的生命行为,与人的健康息息相关,睡眠大约占据人类寿命三分之一的时间。然而现代生活节奏普遍加快,生活工作压力不断增加,睡眠缺失愈发常见,严重影响了人们日常生活质量甚至身体健康。因此对睡眠质量进行检测,评估人的睡眠状况,对不良睡眠及时进行引导治疗具有重要意义。
相关技术中,对于睡眠质量的检测,通常使用多导睡眠图和穿戴式设备捕获生理学数据方法,但是,多导睡眠图评估采用的睡眠特征大多来自医学领域先验,着重普适性和一般性,缺乏对用户亚群、用户个体的个性化考虑,无法准确地评估用户亚群、用户个体的睡眠质量。比如,对睡眠质量评估时以“00:00前入睡”作为其中一项睡眠特征,该睡眠特征无法适用采用单时区的用户,并且也无法应对因工作性质等原因导致用户个人日常作息常年与大众相反的情况;穿戴式设备捕获生理学数据方法在分析睡眠质量时,需要直接与人体接触,给被测者带来行动不便和心理负担,对被测者睡眠过程造成干扰,影响被测者的睡眠习惯,最终影响对被测者睡眠质量的评估准确度。另外,穿戴式设备捕获生理学数据方法在分析睡眠质量时所采用的评估模型,主要是根据专业人员的领域知识对各种睡眠特征人工设定总分合成规则,比如分段线性加权,这种合成规则对不同睡眠指标的处理较为粗糙,无法准确地评估用户的睡眠质量。
针对上述问题,本公开提出睡眠质量评估方法、在床状态监测方法及其装置。
图1是根据本公开第一实施例的示意图。需要说明的是,本公开实施例的睡眠质量评估方法可应用于本公开实施例的睡眠质量评估装置,该装置可被配置于电子设备中。其中,该电子设备可以是移动终端,例如,手机、平板电脑、个人数字助理等具有各种操作系统的硬件设备。
如图1所示,该睡眠质量评估方法可包括如下步骤:
步骤101,确定待处理对象的睡眠数据。
在本公开实施例中,可通过待处理对象的体征数据确定待处理对象的睡眠数据。比如,睡眠质量评估装置可采集待处理对象的体征数据,根据体征数据确定待处理对象的睡眠数据。其中,体征数据可包括:脉搏、呼吸频率和心率等,睡眠数据可包括:睡眠时长、深睡时长和睡眠中断次数等。
步骤102,根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征。
在本公开实施例中,参考核心睡眠时段可包括:预设时间段内待处理对象的个体核心睡眠时段,和/或,预设时间段内待处理对象所属区域内的群体核心睡眠时段。比如,预设时间段为180天时,待处理对象的个体核心睡眠时段可以为22:00至06:00,待处理对象所属区域内的群体核心睡眠时段也可以为22:00至06:00。
需要说明的是,在根据待处理对象的参考核心睡眠时段以及睡眠数据,提取睡眠特征时,待处理对象的参考核心睡眠时段的不同,提取的睡眠特征也不同。
作为一种示例,参考核心睡眠时段包括:预设时间段内待处理对象的个体核心睡眠时段,可根据预设时间段内待处理对象的个体核心睡眠时段以及睡眠数据确定待处理对象在个体核心睡眠时段内的第一睡眠特征,将第一睡眠特征作为提取到的睡眠特征。其中,需要说明的是,第一睡眠特征可包括以下特征中的至少一个:个体核心睡眠时段内待处理对象的睡眠时长、睡眠时长与待处理对象的总睡眠时长的占比、个体核心睡眠时段内待处理对象的深睡时长、个体核心睡眠时段内待处理对象的深睡次数、个体核心睡眠时段内待处理对象的深睡浅睡时长比例、个体核心睡眠时段内待处理对象的清醒时长、个体核心睡眠时段内待处理对象的清醒次数和个体核心睡眠时段内待处理对象的清醒时长与睡眠时长的占比。
作为另一种示例,参考核心睡眠时段包括:预设时间段内待处理对象所属区域内的群体核心睡眠时段,可根据预设时间段内待处理对象所属区域内的群体核心睡眠时段以及睡眠数据确定待处理对象在群体核心睡眠时段的第二睡眠特征,可将第二睡眠特征作为提取到的睡眠特征。第二睡眠特征可包括以下特征中的至少一个:群体核心睡眠时段内待处理对象的睡眠时长、所述睡眠时长与待处理对象的总睡眠时长的占比、群体核心睡眠时段内待处理对象的深睡时长、群体核心睡眠时段内待处理对象的深睡次数、群体核心睡眠时段内待处理对象的深睡浅睡时长比例、群体核心睡眠时段内待处理对象的清醒时长、群体核心睡眠时段内待处理对象的清醒次数和群体核心睡眠时段所述待处理对象的清醒时长与睡眠时长的占比。
作为另一种示例,参考核心睡眠时段包括:预设时间段内待处理对象的个体核心睡眠时段和预设时间段内待处理对象所属区域内的群体核心睡眠时段,可根据预设时间段内待处理对象的个体核心睡眠时段以及睡眠数据确定待处理对象在个体核心睡眠时段内的第一睡眠特征,以及根据预设时间段内待处理对象所属区域内的群体核心睡眠时段以及睡眠数据确定待处理对象在群体核心睡眠时段的第二睡眠特征,确定睡眠特征。
步骤103,根据睡眠特征,评估待处理对象的睡眠质量。
可选地,可根据睡眠特征与睡眠质量评估模型,对待处理对象进行睡眠质量评估处理。
综上,通过确定待处理对象的睡眠数据;根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征;根据睡眠特征,评估待处理对象的睡眠质量,该方法根据待处理对象的参考核心睡眠时段以及待处理对象的睡眠数据提取的睡眠特征对待处理对象进行睡眠质量评估,在提取睡眠特征时考虑了待处理对象的个体因素,可以更准确地评估待处理对象的睡眠质量。
为了更加准确地获取待处理对象的睡眠数据,如图2所示,图2是根据本公开第二实施例的示意图。在本公开实施例中,可根据待处理对象的体征数据,获取待处理对象的睡眠数据,图2所示实施例可包括如下步骤:
步骤201,在待处理对象在预设时间段内的在床状态为非离床状态时,获取待处理对象的体征数据。
需要说明的是,获取待处理对象的睡眠数据之前,可先确定待处理对象在预设时间段内的在床状态,待处理对象在预设时间段内的在床状态的确定方式,可参见后续实施例的描述。
在本公开实施例中,在待处理对象在预设时间段内的在床状态为非离床状态时,睡眠质量评估装置可监测并采集待处理对象的体征数据,比如,脉搏、呼吸频率和心率等。
步骤202,对体征数据进行睡眠识别,获取待处理对象的睡眠数据。
进一步地,可对体征数据进行睡眠识别,获取待处理对象的睡眠数据。比如,人体处于睡眠状态时,呼吸频率下降,在监测到呼吸频率下降时,可确定待处理对象处于睡眠状态,可将睡眠状态的持续时间作为待处理对象的睡眠时长。又比如,可对心率进行睡眠识别,确定待处理对象的深睡时长和睡眠中断 次数。
步骤203,根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征。
步骤204,根据睡眠特征,评估待处理对象的睡眠质量。
其中,步骤203-204可以分别采用本公开的各实施例中的任一种方式实现,本公开实施例并不对此作出限定,也不再赘述。
综上,通过获取待处理对象的体征数据;对体征数据进行睡眠识别,获取待处理对象的睡眠数据,由此,可以准确地获取待处理对象的睡眠数据。
为了准确地确定待处理对象的睡眠特征,如图3所示,图3是根据本公开第三实施例的示意图。在本公开实施例中,参考核心睡眠时段包括:预设时间段内待处理对象的个体核心睡眠时段和预设时间段内待处理对象所属区域内的群体核心睡眠时段,可根据预设时间段内待处理对象的个体核心睡眠时段以及睡眠数据确定待处理对象在个体核心睡眠时段内的第一睡眠特征,以及根据预设时间段内待处理对象所属区域内的群体核心睡眠时段以及睡眠数据确定待处理对象在群体核心睡眠时段的第二睡眠特征,确定睡眠特征。图3所示实施例可包括如下步骤:
步骤301,确定待处理对象的睡眠数据。
步骤302,根据睡眠数据与个体核心睡眠时段,确定待处理对象在个体核心睡眠时段的第一睡眠特征。
在本公开实施例中,第一睡眠特征可包括以下特征中的至少一个:个体核心睡眠时段内待处理对象的睡眠时长、睡眠时长与待处理对象的总睡眠时长的占比、个体核心睡眠时段内待处理对象的深睡时长、个体核心睡眠时段内待处理对象的深睡次数、个体核心睡眠时段内待处理对象的深睡浅睡时长比例、个体核心睡眠时段内待处理对象的清醒时长、个体核心睡眠时段内待处理对象的清醒次数和个体核心睡眠时段内待处理对象的清醒时长与睡眠时长的占比。
其中,个体核心睡眠时段内待处理对象的睡眠时长可为睡眠数据中待处理对象睡眠时段与个体核心睡眠时段的交叠时长。如,睡眠数据中的待处理对象23:00入睡于次日8:00起床,待处理对象睡眠时段为23:00至8:00,个体核心睡眠时段为22:00至06:00,待处理对象睡眠时段与个体核心睡眠时段的交叠时长为23:00至06:00共7小时;所述睡眠时长与待处理对象的总睡眠时长的占比可为个体核心睡眠时段内待处理对象的睡眠时长与待处理对象的总睡眠时长的占比,如,个体核心睡眠时段内待处理对象的睡眠时长为23:00至06:00,待处理对象的总睡眠时长为23:00至07:00,睡眠时长与待处理对象的总睡眠时长的占比为7/8;个体核心睡眠时段内待处理对象的深睡时长为个体核心睡眠时段内,待处理对象处于深睡状态的总时长;个体核心睡眠时段内待处理对象的深睡次数可为个体核心睡眠时段内,待处理对象进入深睡状态的总次数;个体核心睡眠时段内待处理对象的深睡浅睡时长比例可为个体核心睡眠时段内,待处理对象处于深睡状态的总时长与待处理对象处于浅睡状态的总时长的比值;个体核心睡眠时段内待处理对象的清醒时长为个体核心睡眠时段内,待处理对象进入清醒状态的总时长;个体核心睡眠时段内待处理对象的清醒次数为个体核心睡眠时段内,待处理对象睡眠中断进入清醒状态的总次数;个体核心睡眠时段内待处理对象的清醒时长与睡眠时长的占比为个体核心睡眠时段内待处理对象睡眠进入清醒状态的总时长与待处理对象处于睡眠状态的总时长的比值。
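上述“睡眠时段与核心睡眠时段的交叠时长”的计算可以用一个最小示例来说明(以下代码为示意性实现,函数名与区间表示均为本文假设,并非本公开原文内容):跨越午夜的时间段先展开到连续的小时轴上,再求区间交集。

```python
def overlap_hours(sleep, core):
    # 时间段以 (起始小时, 结束小时) 表示;结束小时小于等于起始小时时视为跨越午夜
    def unroll(start, end):
        return (start, end + 24) if end <= start else (start, end)

    s0, s1 = unroll(*sleep)
    c0, c1 = unroll(*core)
    total = 0.0
    # 核心睡眠时段可能与睡眠时段直接交叠,也可能错开一天后交叠
    for shift in (-24, 0, 24):
        total += max(0.0, min(s1, c1 + shift) - max(s0, c0 + shift))
    return total
```

例如,睡眠时段23:00至8:00与核心睡眠时段22:00至06:00的交叠时长即为7小时,与上文示例一致。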
步骤303,根据睡眠数据与群体核心睡眠时段,确定待处理对象在群体核心睡眠时段内的第二睡眠特征。
同时,第二睡眠特征可包括以下特征中的至少一个:群体核心睡眠时段内待处理对象的睡眠时长、所述睡眠时长与待处理对象的总睡眠时长的占比、群体核心睡眠时段内待处理对象的深睡时长、群体核心睡眠时段内待处理对象的深睡次数、群体核心睡眠时段内待处理对象的深睡浅睡时长比例、群体核心睡眠时段内待处理对象的清醒时长、群体核心睡眠时段内待处理对象的清醒次数和群体核心睡眠时段所述待处理对象的清醒时长与睡眠时长的占比。
其中,群体核心睡眠时段内待处理对象的睡眠时长可为睡眠数据中待处理对象睡眠时段与群体核心睡眠时段的交叠时长。如,睡眠数据中的待处理对象23:00入睡于次日8:00起床,待处理对象睡眠时段为23:00至8:00,群体核心睡眠时段为22:00至06:00,待处理对象睡眠时段与群体核心睡眠时段的交叠时长为23:00至06:00共7小时;所述睡眠时长与待处理对象的总睡眠时长的占比可为群体核心睡眠时段待处理对象的睡眠时长与待处理对象的总睡眠时长的占比,如,群体核心睡眠时段内待处理对象的睡眠时长为23:00至06:00,待处理对象的总睡眠时长为23:00至07:00,睡眠时长与待处理对象的总睡眠时长的占比为7/8;群体核心睡眠时段内待处理对象的深睡时长为群体核心睡眠时段内,待处理对象处于深睡状态的总时长;群体核心睡眠时段内待处理对象的深睡次数可为群体核心睡眠时段内,待处理对象进入深睡状态的总次数;群体核心睡眠时段内待处理对象的深睡浅睡时长比例可为群体核心睡眠时 段内,待处理对象处于深睡状态的总时长与待处理对象处于浅睡状态的总时长的比值;群体核心睡眠时段内待处理对象的清醒时长为群体核心睡眠时段内,待处理对象进入清醒状态的总时长;群体核心睡眠时段内待处理对象的清醒次数为群体核心睡眠时段内,待处理对象睡眠中断进入清醒状态的总次数;群体核心睡眠时段内待处理对象的清醒时长与睡眠时长的占比为群体核心睡眠时段内待处理对象睡眠进入清醒状态的总时长与待处理对象处于睡眠状态的总时长的比值。
步骤304,根据第一睡眠特征以及第二睡眠特征,确定睡眠特征。
可选地,可将第一睡眠特征与第二睡眠特征进行拼接,确定睡眠特征。比如,睡眠特征包括:个体核心睡眠时段内待处理对象的睡眠时长、睡眠时长与待处理对象的总睡眠时长的占比、个体核心睡眠时段内待处理对象的深睡时长、个体核心睡眠时段内待处理对象的深睡次数、个体核心睡眠时段内待处理对象的深睡浅睡时长比例、个体核心睡眠时段内待处理对象的清醒时长、个体核心睡眠时段内待处理对象的清醒次数、个体核心睡眠时段内待处理对象的清醒时长与睡眠时长的占比、群体核心睡眠时段内待处理对象的睡眠时长、所述睡眠时长与待处理对象的总睡眠时长的占比、群体核心睡眠时段内待处理对象的深睡时长、群体核心睡眠时段内待处理对象的深睡次数、群体核心睡眠时段内待处理对象的深睡浅睡时长比例、群体核心睡眠时段内待处理对象的清醒时长、群体核心睡眠时段内待处理对象的清醒次数和群体核心睡眠时段所述待处理对象的清醒时长与睡眠时长的占比。
步骤305,根据睡眠特征,评估待处理对象的睡眠质量。
其中,步骤301、305可以分别采用本公开的各实施例中的任一种方式实现,本公开实施例并不对此作出限定,也不再赘述。
综上,通过根据睡眠数据与个体核心睡眠时段,确定待处理对象在个体核心睡眠时段的第一睡眠特征;根据睡眠数据与群体核心睡眠时段,确定所述待处理对象在群体核心睡眠时段内的第二睡眠特征;根据第一睡眠特征以及第二睡眠特征,确定睡眠特征。由此,在考虑待处理对象的个体因素的同时可以准确地确定待处理对象的睡眠特征。
为了更加准确地评估待处理对象的睡眠质量,如图4所示,图4是根据本公开第四实施例的示意图。在本公开实施例中,可根据待处理对象的睡眠特征和属性信息,采用质量评估模型对待处理对象的睡眠质量进行评估。图4所示实施例可包括如下步骤:
步骤401,确定待处理对象的睡眠数据。
步骤402,根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征。
步骤403,获取睡眠质量评估模型。
需要理解的是,在输入输出关系复杂,标注训练样本足够多时,神经网络模型的表现更为突出,但是,循环神经网络是黑盒模型,对于预训练的神经网络为何做出特定的决策无法解释,相反,树模型具有很好的可解释性,因此,本公开采用预训练的神经网络作为教师模型训练树模型。
可选地,获取训练数据,其中,训练数据包括:预设数量的样本睡眠特征;根据经过预训练的神经网络模型以及预设数量的样本睡眠特征,确定样本睡眠特征对应的样本睡眠不适症状;采用样本睡眠特征以及对应的样本睡眠不适症状对初始的树模型进行训练,得到训练好的树模型;将训练好的树模型,作为睡眠质量评估模型。
也就是说,如图5所示,可将样本睡眠特征和属性信息作为神经网络模型的输入,对神经网络模型进行训练,以获取预训练的神经网络模型,使神经网络模型输出与问卷调查获取的睡眠不适症状匹配,将预训练的神经网络的输出的睡眠不适症状作为样本睡眠特征对应的样本睡眠不适症状,其中,问卷调查获取的睡眠不适症状可包括以下症状中的至少一个:易醒、醒的早、失眠、多梦、起床后疲倦、夜起上厕所、晚上入睡困难、起床后头疼头晕、白天容易疲倦困乏嗜睡、醒后再次入睡困难、早醒难以睡着、早上醒来时容易头疼头晕和经常打呼噜睡眠中被呼噜憋醒。属性信息可包括:性别、年龄、身高和体重中的至少一种。在本公开实施例中,可将神经网络(循环神经网络)模型输出的睡眠不适症状与问卷调查获取的睡眠不适症状构造损失函数,具体可表现为如下公式:
Loss = L2(Y, Z);
L2(Y, Z) = Σ_i(y_i − z_i)²/N;
其中,Y={y_i},Z={z_i},y_i和z_i分别表示问卷调查获取的睡眠不适症状与神经网络输出的睡眠不适症状。
进一步地,将样本睡眠特征和属性信息作为树模型的输入,对树模型进行训练,使树模型的输出与预训练的神经网络模型的输出的样本睡眠特征对应的样本睡眠不适症状匹配,具体可表现为如下公式:
Loss = L2(Y, Z) + (1 − Corr(Y, Z))²;
Corr(Y, Z) = Σ_i(y_i − μ_y)(z_i − μ_z)/(σ_y·σ_z);
其中,Y={y_i},Z={z_i},y_i和z_i分别表示预训练的神经网络输出的样本睡眠特征对应的样本睡眠不适症状与树模型输出的睡眠不适症状,μ_y和μ_z分别是Y和Z的均值,σ_y和σ_z分别是Y和Z的标准差。
进而,将训练好的树模型作为睡眠质量评估模型。
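上述树模型蒸馏训练所用的损失函数可以草拟如下(示意性实现,假设教师网络输出Y与树模型输出Z以一维数组表示;相关项按皮尔逊相关系数计算):

```python
import numpy as np

def distill_loss(y, z):
    # L2 项:逐元素均方误差;相关项:(1 - 皮尔逊相关系数)^2
    y = np.asarray(y, dtype=float)
    z = np.asarray(z, dtype=float)
    l2 = np.mean((y - z) ** 2)
    corr = np.corrcoef(y, z)[0, 1]
    return l2 + (1.0 - corr) ** 2
```

当树模型输出与教师输出完全一致时,该损失为0;输出负相关时相关项趋向于4,从而惩罚趋势不一致的拟合。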
步骤404,将睡眠特征以及属性信息,输入睡眠质量评估模型,获取睡眠不适症状。
在本公开实施例中,可将睡眠特征以及属性信息,输入睡眠质量评估模型,睡眠质量评估模型可输出睡眠不适症状。
步骤405,将睡眠不适症状确定为待处理对象的睡眠质量评估结果。
进一步地,将睡眠不适症状作为待处理对象的睡眠质量评估结果。
其中,步骤401-402可以分别采用本公开的各实施例中的任一种方式实现,本公开实施例并不对此作出限定,也不再赘述。
综上,通过获取睡眠质量评估模型;将睡眠特征以及属性信息,输入睡眠质量评估模型,获取睡眠不适症状;将睡眠不适症状确定为待处理对象的睡眠质量评估结果。由此,可根据待处理对象的睡眠特征和属性信息,采用质量评估模型对待处理对象的睡眠质量进行评估,可更加准确地评估待处理对象的睡眠质量。
本公开实施例的睡眠质量评估方法,通过确定待处理对象的睡眠数据;根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征;根据睡眠特征,评估所述待处理对象的睡眠质量,该方法根据待处理对象的参考核心睡眠时段以及待处理对象的睡眠数据提取的睡眠特征对待处理对象进行睡眠质量评估,在提取睡眠特征时考虑了待处理对象的个体因素,可以更准确地评估待处理对象的睡眠质量。
为了实现上述实施例,本公开还提出一种基于可穿戴设备的在床状态监测方法。
图6是根据本公开第五实施例的示意图,需要说明的是,本公开实施例的基于可穿戴设备的在床状态监测方法可应用于本公开实施例的基于可穿戴设备的在床状态监测装置,该装置可被配置于电子设备中。其中,该电子设备可以是移动终端,例如,手机、平板电脑、个人数字助理等具有各种操作系统的硬件设备。
如图6所示,该基于可穿戴设备的在床状态监测方法可以包括以下步骤:
步骤601,获取可穿戴设备在预设时间段内输出的加速度信号。
可以理解的是,为了对用户的在床状态进行准确的分析,本公开利用便携性好,便于日常各种场景使用的可穿戴设备对用户进行在床状态监测。当用户在配戴可穿戴设备时,可穿戴设备中的传感器可以直观的反映出用户的行为。比如,传感器输出的加速度信号,可以反应出该可穿戴设备所在的用户肢体的运动情况。
需要说明的是,本公开中可以通过可穿戴设备中的传感器输出加速度信号,其可以为单轴加速度传感器、双轴加速度传感器或三轴加速度传感器,本公开对此不做限定。
可以理解的是,为了更准确推算用户的活动状态,本公开可以选择三轴加速度传感器,即通过在空间坐标系的三个轴向上分别进行加速度信号的采集,从而基于该加速度信号确定的结果更可靠。本公开以下各实施例,以该加速度信号为三轴加速度信号为例进行展开说明。
其中,预设时间段可以为预先设定的一个任意长度的时间段。
通常,由于用户的在床状态,通常为一种连续事件,因此,本公开中,可以获取预设时间段内的加速度信号。即该加速度信号,可以为包括一系列加速度值的序列。
步骤602,根据加速度信号,确定待处理对象在预设时间段内的运动特征。
本公开中,可以首先对加速度信号进行分析,以确定待处理对象在该加速度信号对应的预设时间段内的运动特征。比如,若该加速度信号中的各个加速度值均小于阈值,则可以确定该待处理对象在预设时间段内的运动特征为:无运动;或者,若该加速度信号中的各个加速度值均大于阈值,则可以确定该待处理对象在预设时间段内的运动特征为:运动,等等,本公开对此不做限定。
可选的,若预设时间段对应的时间长度较长,本公开中,还可以基于指定的时间长度,将预设时间段划分为多个时间窗口,进而根据每个时间窗口内每个时刻的加速度,确定每个时间窗口对应的平均绝对离差,然后再基于每个时间窗口的平均绝对离差,确定预设时间段内的运动特征。即上述步骤602,可以包括:
基于指定的时间长度,将预设时间段划分为多个时间窗口;
根据每个时间窗口内每个时刻的加速度,确定每个时间窗口对应的平均绝对离差;
根据每个时间窗口对应的平均绝对离差与活动阈值间的大小关系,确定每个时间窗口对应的类型标签,其中,类型标签,用于表征在对应的时间窗口内所述待处理对象的活动状态;
根据预设时间段内的各个时间窗口的类型标签,确定待处理对象在预设时间段内的运动特征。
其中,指定的时间长度,可以为预先设置的,或者还可以为可穿戴设备根据当前的时间确定的,比如,用户在刚开始睡眠时,或者清晨时,可能会有频繁的翻身等动作,而在深度睡眠时,动作的概率较低,从而可以设置在深夜时对应的指定的时间长度较长,而清晨或者刚开始睡眠时对应的指定的时间长度较短。
需要说明的是,在确定每个时间窗口对应的平均绝对离差时,本设备需要先结合传感器所获取的三轴加速度测量值,计算出合加速度。为方便说明,本公开将每个时间窗口各时刻对应的三轴加速度测量值分别记为acc_x、acc_y和acc_z。
进而,本设备可以根据三轴加速度测量值acc_x、acc_y和acc_z,利用以下公式对时间窗口各时刻的合加速度gacc进行计算:
gacc = √(acc_x² + acc_y² + acc_z²)
需要说明的是,本设备也可以通过计算当前时间窗口的三轴加速度测量值,计算当前窗口的合加速度,其中,该合加速度可以作为当前时间窗口的窗口加速度。
在获取时间窗口各时刻的合加速度之后,本设备可以根据每个时间窗口中包含的时刻的数量n,以及每个时间窗口中各个时刻的合加速度,计算出该每个时间窗口的合加速度的均值mean_gacc,公式如下:
mean_gacc = (1/n)·Σ_{i=1}^{n} gacc_i
其中,gacc_i为第i个时刻对应的合加速度,i为正整数。
之后,本设备可以根据每个时间窗口的合加速度的均值mean_gacc以及每个时间窗口中的每个时刻的合加速度gacc_i,计算每个时间窗口对应的平均绝对离差MAD,公式如下:
MAD = (1/n)·Σ_{i=1}^{n} |gacc_i − mean_gacc|
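上述合加速度、均值与平均绝对离差(MAD)的计算步骤可以草拟为如下示意代码(纯标准库实现,函数与变量命名均为本文假设):

```python
import math

def window_mad(samples):
    # samples: 一个时间窗口内各时刻的三轴加速度测量值 (acc_x, acc_y, acc_z)
    gacc = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    n = len(gacc)
    mean_gacc = sum(gacc) / n          # 合加速度均值
    return sum(abs(g - mean_gacc) for g in gacc) / n   # 平均绝对离差 MAD
```

静止窗口内各时刻合加速度近似恒定,MAD接近0;动作越剧烈,MAD越大。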
进一步地,本设备可以根据每个时间窗口对应的平均绝对离差与活动阈值间的大小关系,确定每个时间窗口对应的类型标签。其中,活动阈值,可以为该可穿戴设备中预置的,或者,还可以为该可穿戴设备,根据该待处理对象的历史运动信息,自动生成的,本公开对此不做限定。
本公开中的类型标签可以为“低活动量”、“中活动量”、“高活动量”等,本公开对此不进行限定。能够理解的是,活动阈值的数量可以根据类型标签的数量确定,比如,若时间窗口对应的类型标签的数量有两个,分别为“低活动量”和“高活动量”,那么本设备可以设置一个活动阈值。根据每个时间窗口对应的平均绝对离差与活动阈值间的大小关系,本设备可以将平均绝对离差低于或等于活动阈值的时间窗口标记为“低活动量”,将平均绝对离差高于活动阈值的时间窗口标记为“高活动量”,本公开对此不进行限定。
或者,若时间窗口对应的类型标签的数量有三个,分别为“低活动量”、“中活动量”、“高活动量”,那么本设备可以设置两个活动阈值,此处本公开将小的活动阈值记为A,将大的活动阈值记为B。根据每个时间窗口对应的平均绝对离差与活动阈值间的大小关系,可以将平均绝对离差低于或等于活动阈值A的时间窗口标记为“低活动量”,将平均绝对离差高于活动阈值B的时间窗口标记为“高活动量”,将平均绝对离差高于活动阈值A且小于或等于活动阈值B的时间窗口标记为“中活动量”,本公开对此不进行限定。
本公开中,为了便于可穿戴设备对连续时段内各个窗口的类型标签进行统计分析,可以利用不同的特征值表征不同的类型标签。比如,“低活动量”对应的特征值为0,“中活动量”对应的特征值为1,“高活动量”对应的特征值为2。
相应的,在确定了每个时间窗口的类型标签即特征值后,即可将各个时间窗口对应的特征值进行融合,比如排列,或者加权求和等,以确定该预设时间段的运动特征。举例来说,若该预设时间段内共分为5个时间窗口,且该5个时间窗口分别对应的特征值依次为:0、0、1、0、0,那么该预设时间段对应的运动特征可以为将该多个特征值依次排列,组成一个特征向量,即[0、0、1、0、0]。或者,还可以将该多个特征值进行加权求和等以确定该连续时段对应的运动特征。本公开对此不做限定。
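将各时间窗口的MAD与活动阈值比较得到类型标签,再依次排列组成运动特征向量的过程,可以草拟如下(示意性实现,阈值数值仅为演示用的假设):

```python
def window_label(mad, thr_a, thr_b):
    # 0:低活动量  1:中活动量  2:高活动量(thr_a < thr_b 为两个活动阈值)
    if mad <= thr_a:
        return 0
    if mad <= thr_b:
        return 1
    return 2

def motion_feature(mads, thr_a, thr_b):
    # 将各时间窗口的类型标签对应的特征值依次排列,组成运动特征向量
    return [window_label(m, thr_a, thr_b) for m in mads]
```

例如5个时间窗口的MAD序列经阈值比较后即可得到上文所述形如[0、0、1、0、0]的特征向量。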
步骤603,根据加速度信号,确定待处理对象在预设时间段内的姿态特征。
本公开中,可以首先对加速度信号进行分析,以确定待处理对象在该加速度信号对应的预设时间段内的姿态特征。比如,若该加速度信号中的各个加速度值均小于阈值,则可以确定该待处理对象在预设时间段内的姿态特征为:静卧姿态;或者,若该加速度信号中的各个加速度值均大于阈值,则可以确定该待处理对象在预设时间段内的姿态特征为:坐姿或站姿,等等,本公开对此不做限定。
可选的,若预设时间段对应的时间长度较长,本公开中,还可以基于指定的时间长度,将预设时间段划分为多个时间窗口,然后确定每个时间窗口对应的窗口加速度,进而再根据每个时间窗口对应的窗口加速度,确定预设时间段内的姿态特征。即上述步骤603,可以包括:基于指定的时间长度,将预设时间段划分为多个时间窗口;根据每个时间窗口内每个时刻的加速度,确定每个时间窗口对应的窗口加速度;在任一时间窗口对应的窗口加速度在指定范围内的情况下,确定任一时间窗口对应的加速度向量;确定每个加速度向量与指定的球面区域的距离值;根据预设时间段内的多个距离值,确定待处理对象在预设时间段内的姿态特征。
其中,对于确定窗口加速度的具体实施过程,可以参考上述步骤602处的详细描述,此处不再赘述。
在任一时间窗口对应的窗口加速度在指定范围的情况下,本设备可以确定任一时间窗口对应的加速度向量。
可选的,本公开中指定范围可以为1g±0.1g、1g±0.2g、1g±0.5g等等,对此不进行限定。
其中,g为重力加速度。
具体的,若任一时间窗口对应的窗口加速度在指定范围内,则表明加速度向量是稳定的。那么本设备可以首先获取该时间窗口的窗口加速度,然后基于窗口加速度,对三轴加速度进行归一化,从而可以得到当前窗口在三个轴向上的单位向量。
举例来说,若当前窗口加速度为gacc,三轴加速度测量值分别为acc_x、acc_y和acc_z,基于该窗口加速度分别对三个轴向的加速度测量值进行归一化,也即计算三个轴向的加速度测量值在三维直角坐标系上的三个分量u_x、u_y以及u_z,公式如下:
u_x = acc_x / gacc
u_y = acc_y / gacc
u_z = acc_z / gacc
进一步地,基于u_x、u_y以及u_z获取该加速度向量与x轴夹角u_long以及与z轴夹角u_la,其中,本公开以u_long作为经度,以u_la作为纬度,具体计算过程如下:
若u_x<0,u_y>0,则u_long = atan(u_y/u_x) + π;
若u_x<0,u_y≤0,则u_long = atan(u_y/u_x) − π;
若u_x=0,则u_long = atan(u_y/eps);
若u_x>0,则u_long = atan(u_y/u_x);
另外,u_la = asin(u_z)。
进而,本设备可以根据球面距离公式确定当前时间窗口的向量与标记的区域点的球面距离Dis,公式如下:
Dis = acos(cos(u_la)·cos(i_la)·cos(u_long − i_long) + sin(u_la)·sin(i_la));
其中,经度i_long为标记的区域点与球坐标系原点连接构成的向量与x轴的夹角,纬度i_la为其与z轴的夹角,经度u_long为加速度向量与x轴的夹角,纬度u_la为加速度向量与z轴的夹角。
需要说明的是,区域点为指定的球面区域中的任一点,或者指定的点,本公开对此不进行限定。
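上述经纬度换算与球面距离的计算可以草拟如下(示意性实现;分段的atan规则用标准库的atan2近似统一表达,并对acos的输入做了数值裁剪以避免浮点越界,函数命名为本文假设):

```python
import math

def lat_long(u_x, u_y, u_z):
    # 归一化加速度向量 -> (纬度 u_la, 经度 u_long)
    u_long = math.atan2(u_y, u_x)
    u_la = math.asin(max(-1.0, min(1.0, u_z)))
    return u_la, u_long

def sphere_dist(u_la, u_long, i_la, i_long):
    # 球面距离公式 Dis
    c = (math.cos(u_la) * math.cos(i_la) * math.cos(u_long - i_long)
         + math.sin(u_la) * math.sin(i_la))
    return math.acos(max(-1.0, min(1.0, c)))
```

当加速度向量与标记的区域点重合时距离为0;方向相反时距离为π,距离越小说明佩戴姿态越接近标记的静卧姿态区域。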
举例来说,若将预设时间段分为5个时间窗口,每个时间窗口均对应有一个距离值,那么本设备可以将5个距离值进行直接求和或者根据权重进行加权求和,进而本设备可以将最后求和的数值作为姿态特征,本公开对此不进行限定。
或者,本公开可以将5个时间窗口对应的窗口加速度进行直接矢量相加,以获得和矢量。进而本公开可以将该和矢量与指定的球面区域中的任一点的球面距离值作为姿态特征,本公开对此不进行限定。
步骤604,根据姿态特征及所述运动特征,确定待处理对象在预设时间段内的在床状态。
具体的,本设备可以根据用户的姿态特征及运动特征,分别确定待处理对象在预设时间段内的在床状态,其可以为“离床”、“疑似离床”和“非离床”等状态,本公开对此不进行限定。由于本设备结合的特征有姿态特征及运动特征,因而,本设备可以据此从两个角度确定待处理对象在预设时间段内的在床状态,由此,可以对睡眠时间进行更准确的计算,不仅拓展了可穿戴设备的用途,而且使得确定的在床状态的结果更加准确和可靠。
本公开实施例中,可穿戴设备首先获取传感器在预设时间段内输出的加速度信号,然后根据加速度信号,确定待处理对象在预设时间段内的运动特征,以及根据加速度信号,确定待处理对象在预设时间段内的姿态特征,最后根据姿态特征及所述运动特征,确定待处理对象在所述预设时间段内的在床状态。由此,通过融合运动特征以及姿态特征来确定待处理对象的在床状态,提高了监测结果的准确性和可靠性。
为了实现上述实施例,本公开提出另一种基于可穿戴设备的在床状态监测方法。图7是根据本公开第六实施例的示意图。
如图7所示,该基于可穿戴设备的在床状态监测方法可以包括以下步骤:
步骤701,获取可穿戴设备在预设时间段内输出的加速度信号。
步骤702,根据加速度信号,确定待处理对象在预设时间段内的运动特征。
需要说明的是,在根据加速度信号确定待处理对象在预设时间段内的运动特征时,本设备需要基于指定的时间长度,将预设时间段划分为多个时间窗口,并确定各个时间窗口的类型标签,具体实现过程可参照上述步骤602。
可选的,本设备可以根据预设时间段内的各个时间窗口的类型标签,确定在预设时间段内各类型窗口的数量,从而本设备之后可以基于该预设时间段内各类型窗口的数量,确定该预设时间段的运动特征。
举例来说,若在预设时间段中包含的时间窗口为N个,时间窗口对应的类型标签有三个,分别为“低活动量”、“中活动量”、“高活动量”,那么本设备可以分别将这三个类型标签对应的窗口数量记为count_low、count_mid、count_high,
其中count_low + count_mid + count_high = N。
之后,即可用该连续时段内包含的多个时间窗口对应的各类型标签的数量表征该预设时间段的运动特征,比如[count_low、count_mid、count_high],本公开对此不进行限定。
另外,本设备还可以根据预设时间段内的各个时间窗口的类型标签,确定在预设时间段内的时间窗口类型序列。
举例来说,若在预设时间段中包含了5个时间窗口,本设备可以通过一一确定该5个时间窗口的类型标签,进而获取该预设时间段内的时间窗口类型序列。比如,若该5个时间窗口的类型标签按照时间顺序分别为“低活动量”、“低活动量”、“低活动量”、“高活动量”、“高活动量”,将“低活动量”类型标签记为M,“高活动量”记为N,那么该时间窗口类型序列可以为“M、M、M、N、N”。
进而,在确定时间窗口类型序列之后,本设备可以确定在预设时间段内各类型窗口的数量,比如本设备可以从“M、M、M、N、N”中获知“低活动量”类型的时间窗口有3个,“高活动量”类型的时间窗口有2个,本公开对此不进行限定。另外,若将M对应的时间窗口的特征值记为1,N对应的时间窗口的特征值记为2,那么本设备可以根据时间窗口类型序列“M、M、M、N、N”确定一个特征向量[1、1、1、2、2],从而本设备可以将该特征向量作为运动特征,本公开对此不进行限定。
需要说明的是,本设备在根据加速度信号,确定待处理对象在各个时间窗口对应的类型标签之后,可以通过各个时间窗口对应的类型标签,确定预设时间段内的活动变点。该活动变点可以用于确定时间间隔这一运动特征,相应的,本公开还可以据此对运动特征进行更新。
即上述步骤702,还可以包括:
根据各个时间窗口对应的类型标签,确定预设时间段内的活动变点对应的时刻;
确定每个时间窗口与相邻的前一次活动变点间的时间间隔;
根据每个时间窗口对应的时间间隔,对待处理对象在预设时间段内的运动特征进行更新。
其中,活动变点可以为任一时间窗口由一种类型标签转变为另一种类型标签时的中间时刻点,比如,若将一预设时间段分为了5个时间窗口,其中,前三个时间窗口对应的类型标签均为“低活动量”,后两个时间窗口对应的类型标签均为“中活动量”,那么本设备可以将“低活动量”时间窗口与“中活动量”时间窗口之间的时间点作为活动变点。
本公开中,可以将时间窗口由“低活动量”转变为“中活动量”时的时间点作为“中活动变点”,将“中活动量”转变为“高活动量”时的时间点作为“高活动变点”,本公开对此不进行限定。
可以理解的是,若在连续的时间段中有多个活动变点,那么相应的,会出现时间间隔。本设备可以确定每个时间窗口与之前的一次活动变点间的时间间隔,比如当前窗口距离上次“高活动变点”的时间间隔CP_high、距离上次“中活动变点”的时间间隔CP_mid,或者距离上次相邻任一类型的活动变点的时间间隔,本公开对此不进行限定。可以理解的是,本设备可以将该时间间隔作为运动特征,用于确定待处理对象的在床状态。
进一步地,本设备可以根据每个时间窗口对应的时间间隔,对待处理对象在预设时间段内的运动特征进行更新。可以理解的是,本公开在获取当前预设时间段的各个时间窗口与相邻的前一次活动变点间的时间间隔之后,也即确定了“时间间隔”这一运动特征,进而,本设备可以对待处理对象在预设时间段中的运动特征进行重新确定或补充,本公开对此不进行限定。
步骤703,根据加速度信号,确定待处理对象在预设时间段内的姿态特征。
具体的,若任一时间窗口对应的窗口加速度在指定范围内,本公开可以依据上述步骤603的具体实施方式确定姿态特征,若任一时间窗口对应的窗口加速度未在指定范围内,本设备可以根据预设时间段内的其余时间窗口对应的窗口加速度,确定待处理对象在预设时间段内的姿态特征。
可以理解的是,若任一时间窗口对应的窗口加速度未在指定范围内,那么本设备可以通过确定其余时间窗口对应的窗口加速度。举例来说,若当前共有5个时间窗口,且可以确定的是,第2个时间窗口的窗口加速度超出了指定范围,那么本设备可以通过计算其余4个时间窗口的窗口加速度,比如,将该4个时间窗口的窗口加速度进行矢量求和,以获取和矢量。进一步的,可以参照上述步骤603,计算该和矢量与指定的球面区域中的任一点的球面距离值作为姿态特征,本公开对此不进行限定。或者,本公开可以将其余4个时间窗口的窗口加速度与指定的球面区域中的任一点的球面距离值进行直接求和或者根据权重进行加权求和,进而本设备可以将最后求和的数值作为姿态特征,本公开对此不进行限定。
步骤704,根据姿态特征,确定待处理对象在预设时间段内的第一在床状态。
可选的,在确定第一在床状态时,本设备可以通过阈值比较的方法,将预设时间段的姿态特征对应的特征值跟预设的阈值进行比较。其中,预设阈值可以为一个或多个,对此不进行限定。若当前只有一个预设阈值,那么若当前的特征值高于阈值,可以将待处理对象在预设时间段内的第一在床状态确定为“离床状态”,若当前的特征值小于阈值,可以将待处理对象在预设时间段内的第一在床状态确定为“在床状态”,本公开对此不进行限定。
步骤705,根据运动特征,确定待处理对象在预设时间段内的第二在床状态。
可选的,在确定第二在床状态时,本设备可以将各运动特征输入到预先训练好的决策树模型中,以获取第二在床状态的输出结果。其中,运动特征可以为各个类型标签在预设时间段中的各时间窗口的数量以及预设时间段中各时间窗口距离相邻的前一次活动变点之间的时间间隔等等,本公开对此不进行限定。
或者,还可以通过模板匹配的方法,比如,预先建立在床状态的模板库,其中,模板库中包含多种样本的特征向量,本设备通过输入待处理对象运动特征的向量,获取待处理对象与模板库中各种样本的特征向量的匹配度,从而进一步确定第二在床状态是“离床”、“疑似离床”或“非离床”,本公开对此不进行限定。
步骤706,根据预设时间段内的第一在床状态及第二在床状态,确定待处理对象在预设时间段内的在床状态。
可选的,在第一在床状态与第二在床状态相同的情况下,本设备可以确定待处理对象在预设时间段内的在床状态为第一在床状态。可以理解的是,若第一在床状态与第二在床状态相同,比如,均为“离床”,那么本设备则可以将第一在床状态“离床”作为待处理对象在预设时间段的在床状态,由于第一在床状态与第二在床状态相同,因而本设备也可以将第二在床状态作为待处理对象在预设时间段的在床状态,本公开对此不进行限定。
另外,若第一在床状态与第二在床状态不相同、且第一在床状态与第二在床状态中的任意一个为非在床状态,则本设备可以确定待处理对象在预设时间段内的在床状态为非在床状态。其中,非在床状态可以为“离床”或“疑似离床”,对此不进行限定。
举例来说,若第一在床状态为“离床”,第二在床状态为“非离床”,由于第一在床状态为非在床状态,那么本设备可以确定待处理对象在预设时间段内的在床状态为非在床状态。又或者,若第二在床状态为“疑似离床”,本设备同样可以确定待处理对象在预设时间段内的在床状态为非在床状态,本公开对此不进行限定。
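上述两路结果的融合规则可以草拟为如下示意函数(状态名称用字符串表示,均为本文假设的写法):

```python
OFF, SUSPECTED, ON = "离床", "疑似离床", "非离床"

def fuse_bed_state(first, second):
    # 两路结果相同则直接采用;不同时,只要任一路为非在床状态(离床或疑似离床),
    # 则判定该预设时间段为非在床状态
    if first == second:
        return first
    for s in (first, second):
        if s in (OFF, SUSPECTED):
            return s
    return first
```

该规则偏向于把不一致的结果判为非在床状态,与上文“任意一个为非在床状态则确定为非在床状态”的描述一致。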
本公开实施例中,可穿戴设备首先获取传感器在预设时间段内输出的加速度信号,然后根据加速度信号,确定待处理对象在预设时间段内的运动特征,以及根据加速度信号,确定待处理对象在预设时间段内的姿态特征,然后根据姿态特征,确定待处理对象在预设时间段内的第一在床状态,根据运动特征,确定待处理对象在预设时间段内的第二在床状态,最后根据预设时间段内的第一在床状态及第二在床状态,确定待处理对象在预设时间段内的在床状态。从而通过运动特征以及姿态特征分别确定预设时间段内的在床状态,提高了监测结果的准确性和可靠性。
为了实现上述实施例,本公开提出另一种基于可穿戴设备的在床状态监测方法。图8是根据本公开 第七实施例的示意图。
如图8所示,该基于可穿戴设备的在床状态监测方法可以包括以下步骤:
步骤801,获取可穿戴设备在预设时间段内输出的加速度信号。
步骤802,在预设时间段的时长大于时间阈值的情况下,基于时间阈值,将预设时间段分为多个时间片段。
具体的,若预设时间段的时长大于时间阈值,则说明预设时间段过长。为了更准确的确定待处理对象的在床状态,本公开将基于时间阈值将预设时间段分为多个时间片段,其中,该多个时间片段可以是等分的,也可以是不等分的,本公开对此不进行限定。
步骤803,根据加速度信号,确定待处理对象在每个时间片段的运动特征。
本公开中,可以首先对加速度信号进行分析,以确定待处理对象在该加速度信号对应的每个时间片段内的运动特征。比如,若在任一时间片段的加速度信号中的加速度值小于阈值,则可以确定该待处理对象在该时间片段内的运动特征为:无运动;或者,若在任一时间片段的加速度信号中的加速度值大于阈值,则可以确定该待处理对象在该时间片段内的运动特征为:运动,等等,本公开对此不做限定。
步骤804,根据加速度信号,确定待处理对象在每个时间片段的姿态特征。
本公开中,还可以根据每个时间片段对应的窗口加速度,确定每个窗口加速度与指定的球面区域的距离值,然后再基于每个时间片段对应的窗口加速度与指定的球面区域的距离值,确定预设时间段内的姿态特征。比如,若在任一时间片段的窗口加速度与指定的球面区域的距离值小于阈值,则可以确定该待处理对象在该时间片段内的姿态特征为:静卧姿态;或者,若在任一时间片段的窗口加速度与指定的球面区域的距离值大于阈值,则可以确定该待处理对象在预设时间段内的姿态特征为:坐姿或站姿,等等,本公开对此不做限定。
步骤805,根据每个时间片段对应的姿态特征,确定待处理对象在每个时间片段内的第三在床状态。
本公开中,本设备可以通过确定每个时间片段对应的各个时刻的加速度,计算各个时刻的加速度向量。然后根据各个时刻的加速度向量计算每个加速度向量与指定的球面区域的距离值,将各个时刻距离值进行求和或者加权求和,最后将所得值作为姿态特征。本设备可以通过阈值比较的方法,将预设时间段的姿态特征对应的特征值跟预设的阈值进行比较,以确定第三在床状态,具体可参照步骤704,本公开在此不进行赘述。
步骤806,根据每个时间片段对应的运动特征,确定待处理对象在每个时间片段内的第四在床状态。
本公开中,本设备根据每个时间片段对应的运动特征确定待处理对象的在床状态的具体过程,可以参照上述针对预设时间段的步骤705,本公开在此不进行赘述。
步骤807,根据预设时间段内每个时间片段对应的第三在床状态及第四在床状态,确定待处理对象在预设时间段内各个时间片段的在床状态。
可选的,由于待处理对象,从一种在床状态转换至另一在床状态时,需要时间,因此本公开中可以将两个相同的在床状态之间包含其它类型的在床状态、且该两个相同的在床状态间的时间间隔时间较短时,将位于该两个相同的在床状态间的其他在床状态,也转化为与该两个相同的在床状态相同的在床状态。
即在第i个时间片段对应的至少一个在床状态为离床状态、第i+m个时间片段对应的至少一个在床状态为离床状态、且m小于指定值的情况下,本设备可以确定第i个时间片段与第i+m个时间片段间的各时间片段对应的在床状态均为离床状态,其中,i和m均为正整数。
可以理解的是,若两个时间片段的间隔小于指定值,且该两个时间片段均对应的至少一个在床状态为离床状态时,即可以将该两个时间片段以及该两个时间片段之间的各时间片段认为是离床状态。
举例来说,本设备若将连续的时间段分为了6个时间片段,对应的序号分别为1、2、3、4、5、6。根据各个时间片段对应的加速度信息确定时间片段4与时间片段6均为离床状态,且该两个时间片段的差值“1”小于指定值“2”,从而本设备即可以将时间片段4与时间片段6以及之间的时间片段5均认为是离床状态,本公开对此不进行限定。
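上述“相邻离床片段之间的短间隔合并为离床”的规则可以草拟如下(示意性实现,指定值与状态表示均为假设):

```python
OFF, ON = "离床", "在床"

def fill_short_gaps(states, max_gap=2):
    # 若两个离床片段之间的其他片段数量小于 max_gap,则将其间片段均改判为离床
    states = list(states)
    off_idx = [i for i, s in enumerate(states) if s == OFF]
    for a, b in zip(off_idx, off_idx[1:]):
        if b - a - 1 < max_gap:
            for k in range(a + 1, b):
                states[k] = OFF
    return states
```

对应上文示例:片段4与片段6均为离床且其间仅隔1个片段,小于指定值2,则片段5也被判为离床。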
可选的,考虑到待处理对象有离床状态转换为在床状态或者,由在床状态转换为离床状态的过程,可能被识别为“疑似离床状态”,而睡眠质量是基于在床时长与睡眠时长确定的,因此为了进一步提高可穿戴设备确定的睡眠质量的准确性,本公开中,可以将与“离床状态”相邻的“疑似离床状态”统一转化为“离床状态”。
即在第j个时间片段对应的至少一个在床状态为离床状态、且与第j个时间片段相邻的其他时间片段对应的至少一个在床状态为“疑似离床状态”的情况下,本设备可以确定与第j个时间片段相邻的其他时间片段对应的在床状态为“离床状态”,其中,j均为正整数。
可以理解的是,若第j个时间片段对应的第三在床状态与第四在床状态中至少一个为离床状态,且与第j个时间片段相邻的第j-1个时间片段对应的至少一个在床状态为疑似“离床状态”时,本设备可以将与第j个时间片段相邻的其他时间片段均认为是“离床状态”。
可以理解的是,通过以上结果融合的方式确定当前检测对象的在床状态,可以提高在床状态监测的稳定性和可靠性,结果更加稳定。
本公开实施例中,通过首先基于时间阈值,将预设时间段分为多个时间片段,然后根据加速度信号,确定待处理对象在每个时间片段的运动特征以及姿态特征,之后根据每个时间片段对应的姿态特征,确定待处理对象在每个时间片段内的第三在床状态,根据每个时间片段对应的运动特征,确定待处理对象在每个所述时间片段内的第四在床状态,最后根据预设时间段内每个时间片段对应的第三在床状态及第四在床状态,确定待处理对象在预设时间段内各个时间片段的在床状态。由此,通过根据每个时间片段的运动特征以及姿态特征,确定待处理对象在预设时间段的在床状态,提高了对在床状态监测的精确度和可靠度。
需要说明的是,上述基于可穿戴设备的在床状态监测方法的实施例可单独被执行,也可结合睡眠质量评估方法的实施例一起被执行。
为了实现上述睡眠质量评估方法的实施例,本公开还提出一种睡眠质量评估装置。图9是根据本公开第八实施例的示意图。
如图9所示,该睡眠质量评估装置900包括:确定模块910、提取模块920和评估模块930。
其中,确定模块910,用于确定待处理对象的睡眠数据;提取模块920,用于根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征;评估模块930,用于根据睡眠特征,评估待处理对象的睡眠质量。
作为本公开实施例的一种可能实现方式,确定模块910具体用于:在所述待处理对象在预设时间段内的在床状态为非离床状态时,获取待处理对象的体征数据;对体征数据进行睡眠识别,获取待处理对象的睡眠数据。
作为本公开实施例的一种可能实现方式,参考核心睡眠时段包括:预设时间段内所述待处理对象的个体核心睡眠时段,和/或,所述预设时间段内所述待处理对象所属区域内的群体核心睡眠时段。
作为本公开实施例的一种可能实现方式,参考核心睡眠时段包括:所述个体核心睡眠时段以及所述群体核心睡眠时段;提取模块920,具体用于:根据所述睡眠数据与个体核心睡眠时段,确定待处理对象在个体核心睡眠时段的第一睡眠特征;根据睡眠数据与群体核心睡眠时段,确定待处理对象在群体核心睡眠时段内的第二睡眠特征;根据第一睡眠特征以及第二睡眠特征,确定睡眠特征。
作为本公开实施例的一种可能实现方式,睡眠质量评估装置900还包括:第一获取模块。
其中,第一获取模块,用于获取所述待处理对象的属性信息;评估模块930,具体用于:根据睡眠特征以及所述属性信息,对待处理对象进行睡眠质量评估处理。
作为本公开实施例的一种可能实现方式,评估模块930,还用于:获取睡眠质量评估模型;将睡眠特征以及所述属性信息,输入睡眠质量评估模型,获取睡眠不适症状;将睡眠不适症状确定为待处理对象的睡眠质量评估结果。
作为本公开实施例的一种可能实现方式,睡眠质量评估模型为树模型;评估模块930,还用于:获取训练数据,其中,训练数据包括:预设数量的样本睡眠特征;根据经过预训练的神经网络模型以及预设数量的样本睡眠特征,确定所述样本睡眠特征对应的样本睡眠不适症状;采用样本睡眠特征以及对应的样本睡眠不适症状对初始的树模型进行训练,得到训练好的树模型;将训练好的树模型,作为所述睡眠质量评估模型。
作为本公开实施例的一种可能实现方式,睡眠质量评估装置900还包括:第二获取模块、第一确定模块、第二确定模块和第三确定模块。
其中,第二获取模块,用于获取所述可穿戴设备在预设时间段内输出的加速度信号;第一确定模块,用于根据加速度信号,确定待处理对象在预设时间段内的运动特征;第二确定模块,用于根据加速度信号,确定待处理对象在预设时间段内的姿态特征;第三确定模块,用于根据姿态特征及所述运动特征,确定待处理对象在预设时间段内的在床状态。
作为本公开实施例的一种可能实现方式,第一确定模块,包括:划分单元、第一确定单元、第二确定单元和第三确定单元。
其中,划分单元,用于基于指定的时间长度,将所述预设时间段划分为多个时间窗口;第一确定单元,用于根据每个时间窗口内每个时刻的加速度,确定每个时间窗口对应的平均绝对离差;第二确定单元,用于根据每个时间窗口对应的平均绝对离差与活动阈值间的大小关系,确定每个时间窗口对应的类 型标签,其中,类型标签,用于表征在对应的时间窗口内待处理对象的活动状态;第三确定单元,用于根据预设时间段内的各个时间窗口的类型标签,确定待处理对象在预设时间段内的运动特征。
作为本公开实施例的一种可能实现方式,第一确定模块,还用于:根据各个时间窗口对应的类型标签,确定预设时间段内的活动变点对应的时刻;确定每个时间窗口与相邻的前一次活动变点间的时间间隔;根据每个时间窗口对应的时间间隔,对待处理对象在预设时间段内的运动特征进行更新。
作为本公开实施例的一种可能实现方式,第三确定单元,具体用于:根据预设时间段内的各个时间窗口的类型标签,确定在预设时间段内的时间窗口类型序列;和/或,根据预设时间段内的各个时间窗口的类型标签,确定在预设时间段内各类型窗口的数量。
作为本公开实施例的一种可能实现方式,加速度信号为各个时刻的加速度,所述第二确定模块,具体用于:基于指定的时间长度,将所述预设时间段划分为多个时间窗口;根据每个所述时间窗口内每个时刻的加速度,确定每个时间窗口对应的窗口加速度;在任一时间窗口对应的窗口加速度在指定范围内的情况下,确定任一时间窗口对应的加速度向量;确定每个加速度向量与指定的球面区域的距离值;根据预设时间段内的多个距离值,确定待处理对象在预设时间段内的姿态特征。
作为本公开实施例的一种可能实现方式,第二确定模块,还用于:在任一时间窗口对应的窗口加速度未在指定范围内的情况下,根据预设时间段内的其余时间窗口对应的窗口加速度,确定待处理对象在预设时间段内的姿态特征。
作为本公开实施例的一种可能实现方式,第三确定模块,包括:第四确定单元,用于根据姿态特征,确定待处理对象在预设时间段内的第一在床状态;第五确定单元,用于根据运动特征,确定待处理对象在预设时间段内的第二在床状态;第六确定单元,用于在第一在床状态与第二在床状态相同的情况下,确定待处理对象在预设时间段内的在床状态为第一在床状态。
作为本公开实施例的一种可能实现方式,第六确定单元,还用于:在第一在床状态与第二在床状态不相同、且第一在床状态与第二在床状态中的任意一个为非在床状态,则确定待处理对象在预设时间段内的在床状态为非在床状态。
作为本公开实施例的一种可能实现方式,第三确定模块,包括:第二划分单元,用于基于时间阈值,将预设时间段分为多个时间片段;第七确定单元,用于根据每个时间片段对应的姿态特征,确定待处理对象在每个时间片段内的第三在床状态;第八确定单元,用于根据每个时间片段对应的运动特征,确定待处理对象在每个所述时间片段内的第四在床状态;第九确定单元,用于根据预设时间段内每个时间片段对应的第三在床状态及第四在床状态,确定待处理对象在预设时间段内各个时间片段的在床状态。
作为本公开实施例的一种可能实现方式,第九确定单元,具体用于:在第i个时间片段对应的至少一个在床状态为离床状态、第i+m个时间片段对应的至少一个在床状态为离床状态、且m小于指定值的情况下,确定第i个时间片段与第i+m个时间片段间的各时间片段对应的在床状态均为离床状态,其中,i和m均为正整数。
作为本公开实施例的一种可能实现方式,第九确定单元,还用于:在第j个时间片段对应的至少一个在床状态为离床状态、且与所述第j个时间片段相邻的其他时间片段对应的至少一个在床状态为疑似离床状态的情况下,确定与所述第j个时间片段相邻的其他时间片段对应的在床状态为离床状态,其中,j均为正整数。
本公开实施例的睡眠质量评估装置,通过确定待处理对象的睡眠数据;根据待处理对象的预设的参考核心睡眠时段以及睡眠数据,提取睡眠特征;根据睡眠特征,评估待处理对象的睡眠质量,该装置可实现根据待处理对象的预设的参考核心睡眠时段以及待处理对象的睡眠数据提取的睡眠特征对待处理对象进行睡眠质量评估,在提取睡眠特征时考虑了待处理对象的个体因素,可以更准确地评估待处理对象的睡眠质量。
为了实现上述基于可穿戴设备的在床状态监测的实施例,本公开实施例还提出一种基于可穿戴设备的在床状态监测装置。图10是根据本公开第九实施例的示意图。
如图10所示,该基于可穿戴设备的在床状态监测装置1000包括:获取模块1010、第一确定模块1020、第二确定模块1030和第三确定模块1040。
其中,获取模块1010,用于获取所述可穿戴设备在预设时间段内输出的加速度信号;第一确定模块1020,用于根据加速度信号,确定待处理对象在预设时间段内的运动特征;第二确定模块1030,用于根据加速度信号,确定待处理对象在预设时间段内的姿态特征;第三确定模块1040,用于根据姿态特征及运动特征,确定待处理对象在预设时间段内的在床状态。
作为本公开实施例的一种可能的实现方式,加速度信号为各个时刻的加速度,第一确定模块1020,包括:划分单元、第一确定单元、第二确定单元和第三确定单元。
其中,划分单元,用于基于指定的时间长度,将所述预设时间段划分为多个时间窗口;第一确定单元,用于根据每个所述时间窗口内每个时刻的加速度,确定每个所述时间窗口对应的平均绝对离差;第二确定单元,用于根据每个所述时间窗口对应的平均绝对离差与活动阈值间的大小关系,确定每个所述时间窗口对应的类型标签,其中,所述类型标签,用于表征在对应的时间窗口内所述待处理对象的活动状态;第三确定单元,用于根据预设时间段内的各个时间窗口的类型标签,确定所述待处理对象在所述预设时间段内的运动特征。
作为本公开实施例的一种可能的实现方式,第一确定模块,还用于:根据各个时间窗口对应的类型标签,确定预设时间段内的活动变点对应的时刻;确定每个时间窗口与相邻的前一次活动变点间的时间间隔;根据每个时间窗口对应的时间间隔,对待处理对象在预设时间段内的运动特征进行更新。
作为本公开实施例的一种可能的实现方式,第三确定单元,具体用于:根据预设时间段内的各个时间窗口的类型标签,确定在预设时间段内的时间窗口类型序列;和/或,根据预设时间段内的各个时间窗口的类型标签,确定在预设时间段内各类型窗口的数量。
作为本公开实施例的一种可能的实现方式,加速度信号为各个时刻的加速度,第二确定模块1030,具体用于:基于指定的时间长度,将预设时间段划分为多个时间窗口;根据每个时间窗口内每个时刻的加速度,确定每个时间窗口对应的窗口加速度;在任一时间窗口对应的窗口加速度在指定范围内的情况下,确定任一时间窗口对应的加速度向量;确定每个加速度向量与指定的球面区域的距离值;根据预设时间段内的多个距离值,确定待处理对象在预设时间段内的姿态特征。
作为本公开实施例的一种可能的实现方式,第二确定模块1030,还用于:
在任一时间窗口对应的窗口加速度未在所述指定范围内的情况下,根据所述预设时间段内的其余时间窗口对应的窗口加速度,确定所述待处理对象在所述预设时间段内的姿态特征。
作为本公开实施例的一种可能的实现方式,第三确定模块,包括:第四确定单元、第五确定单元和第六确定单元。
其中,第四确定单元,用于根据姿态特征,确定待处理对象在预设时间段内的第一在床状态;第五确定单元,用于根据运动特征,确定待处理对象在预设时间段内的第二在床状态;第六确定单元,用于在第一在床状态与第二在床状态相同的情况下,确定待处理对象在预设时间段内的在床状态为第一在床状态。
作为本公开实施例的一种可能的实现方式,第六确定单元,还用于:在第一在床状态与所述第二在床状态不相同、且所述第一在床状态与第二在床状态中的任意一个为非在床状态,则确定待处理对象在预设时间段内的在床状态为非在床状态。
作为本公开实施例的一种可能的实现方式,第三确定模块1040,包括:第二划分单元、第七确定单元、第八确定单元和第九确定单元。
其中,第二划分单元,用于基于所述时间阈值,将所述预设时间段分为多个时间片段;第七确定单元,用于根据每个时间片段对应的姿态特征,确定待处理对象在每个时间片段内的第三在床状态;第八确定单元,用于根据每个时间片段对应的运动特征,确定待处理对象在每个时间片段内的第四在床状态;第九确定单元,用于根据预设时间段内每个时间片段对应的第三在床状态及第四在床状态,确定待处理对象在预设时间段内各个时间片段的在床状态。
作为本公开实施例的一种可能的实现方式,第九确定单元具体用于:在第i个时间片段对应的至少一个在床状态为离床状态、第i+m个时间片段对应的至少一个在床状态为离床状态、且m小于指定值的情况下,确定第i个时间片段与第i+m个时间片段间的各时间片段对应的在床状态均为离床状态,其中,i和m均为正整数。
作为本公开实施例的一种可能的实现方式,第九确定单元,还用于:在第j个时间片段对应的至少一个在床状态为离床状态、且与所述第j个时间片段相邻的其他时间片段对应的至少一个在床状态为疑似离床状态的情况下,确定与所述第j个时间片段相邻的其他时间片段对应的在床状态为离床状态,其中,j均为正整数。
本公开实施例中,可穿戴设备首先获取预设时间段内输出的加速度信号,然后根据加速度信号,确定待处理对象在预设时间段内的运动特征,以及根据加速度信号,确定待处理对象在预设时间段内的姿态特征,最后根据姿态特征及所述运动特征,确定待处理对象在所述预设时间段内的在床状态。由此,通过融合运动特征以及姿态特征来确定待处理对象的在床状态,提高了监测结果的准确性和可靠性。
为了实现上述图1至图5实施例,本公开还提出一种电子设备,包括:至少一个处理器;以及与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行上述图1至图5所述的睡眠质 量评估方法。
为了实现上述图6至图8实施例,本公开提出一种可穿戴设备,包括:加速度传感器;可穿戴配件;至少一个处理器;以及与至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行上述图6至图8实施例中所述的基于可穿戴设备的在床状态监测方法。
为了实现上述实施例,本公开提出一种存储有计算机指令的非瞬时计算机可读存储介质,其中,所述计算机指令用于使所述计算机执行图1至图5实施例所述的睡眠质量评估方法,或者,执行图6至图8实施例所述的基于可穿戴设备的在床状态监测方法。
为了实现上述实施例,本公开提出一种计算机程序产品,包括计算机程序,所述计算机程序在被处理器执行时实现图1至图5实施例所述的睡眠质量评估方法,或者,执行图6至图8实施例所述的基于可穿戴设备的在床状态监测方法。
图11示出了可以用来实施本公开的实施例的示例电子设备1100的示意性框图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本公开的实现。
如图11所示,该电子设备包括:一个或多个处理器1101、存储器1102,以及用于连接各部件的接口,包括高速接口和低速接口。各个部件利用不同的总线互相连接,并且可以被安装在公共主板上或者根据需要以其它方式安装。处理器可以对在电子设备内执行的指令进行处理,包括存储在存储器中或者存储器上以在外部输入/输出装置(诸如,耦合至接口的显示设备)上显示GUI的图形信息的指令。在其它实施方式中,若需要,可以将多个处理器和/或多条总线与多个存储器和多个存储器一起使用。同样,可以连接多个电子设备,各个设备提供部分必要的操作(例如,作为服务器阵列、一组刀片式服务器、或者多处理器系统)。图11中以一个处理器1101为例。
存储器1102即为本申请所提供的非瞬时计算机可读存储介质。其中,所述存储器存储有可由至少一个处理器执行的指令,以使所述至少一个处理器执行本申请所提供的健康管理方法。本申请的非瞬时计算机可读存储介质存储计算机指令,该计算机指令用于使计算机执行本申请所提供的睡眠质量评估方法。
存储器1102作为一种非瞬时计算机可读存储介质,可用于存储非瞬时软件程序、非瞬时计算机可执行程序以及模块,如本申请实施例中的健康管理方法对应的程序指令/模块(例如,附图9所示的确定模块910、提取模块920和评估模块930)。处理器1101通过运行存储在存储器1102中的非瞬时软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现上述方法实施例中的睡眠质量评估方法。
存储器1102可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据模型的训练的电子设备的使用所创建的数据等。此外,存储器1102可以包括高速随机存取存储器,还可以包括非瞬时存储器,例如至少一个磁盘存储器件、闪存器件、或其他非瞬时固态存储器件。在一些实施例中,存储器1102可选包括相对于处理器1101远程设置的存储器,这些远程存储器可以通过网络连接至模型的训练的电子设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
睡眠质量评估方法的电子设备还可以包括:输入装置1103和输出装置1104。处理器1101、存储器1102、输入装置1103和输出装置1104可以通过总线或者其他方式连接,图11中以通过总线连接为例。
输入装置1103可接收输入的数字或字符信息,以及产生与睡眠质量评估的电子设备的用户设置以及功能控制有关的键信号输入,例如触摸屏、小键盘、鼠标、轨迹板、触摸板、指示杆、一个或者多个鼠标按钮、轨迹球、操纵杆等输入装置。输出装置1104可以包括显示设备、辅助照明装置(例如,LED)和触觉反馈装置(例如,振动电机)等。该显示设备可以包括但不限于,液晶显示器(LCD)、发光二极管(LED)显示器和等离子体显示器。在一些实施方式中,显示设备可以是触摸屏。
Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (for example, a magnetic disk, an optical disc, a memory, or a programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display device (for example, a CRT (cathode-ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or haptic feedback), and input from the user may be received in any form (including acoustic, speech, or tactile input).
The systems and techniques described herein may be implemented in a computing system including a back-end component (for example, as a data server), or a computing system including a middleware component (for example, an application server), or a computing system including a front-end component (for example, a user computer having a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system including any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, and blockchain networks.
A computer system may include a client and a server. The client and the server are generally remote from each other and typically interact through a communication network. The client-server relationship arises from computer programs running on the respective computers and having a client-server relationship with each other. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is the discipline of making computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it involves both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
It should be understood that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (62)

  1. A sleep quality assessment method, comprising:
    determining sleep data of an object to be processed;
    extracting a sleep feature according to a preset reference core sleep period of the object to be processed and the sleep data; and
    assessing sleep quality of the object to be processed according to the sleep feature.
  2. The method according to claim 1, wherein determining the sleep data of the object to be processed comprises:
    acquiring physical sign data of the object to be processed when an in-bed state of the object to be processed within a preset time period is a non-off-bed state; and
    performing sleep recognition on the physical sign data to acquire the sleep data of the object to be processed.
  3. The method according to claim 1, wherein the reference core sleep period comprises: an individual core sleep period of the object to be processed within a preset time period, and/or a group core sleep period of a region to which the object to be processed belongs within the preset time period.
  4. The method according to claim 3, wherein the reference core sleep period comprises the individual core sleep period and the group core sleep period; and
    extracting the sleep feature according to the reference core sleep period of the object to be processed and the sleep data comprises:
    determining, according to the sleep data and the individual core sleep period, a first sleep feature of the object to be processed within the individual core sleep period;
    determining, according to the sleep data and the group core sleep period, a second sleep feature of the object to be processed within the group core sleep period; and
    determining the sleep feature according to the first sleep feature and the second sleep feature.
  5. The method according to claim 1, further comprising: acquiring attribute information of the object to be processed;
    wherein assessing the sleep quality of the object to be processed according to the sleep feature comprises:
    assessing the sleep quality of the object to be processed according to the sleep feature and the attribute information.
  6. The method according to claim 5, wherein assessing the sleep quality of the object to be processed according to the sleep feature and the attribute information comprises:
    acquiring a sleep quality assessment model;
    inputting the sleep feature and the attribute information into the sleep quality assessment model to acquire a sleep discomfort symptom; and
    determining the sleep discomfort symptom as a sleep quality assessment result of the object to be processed.
  7. The method according to claim 6, wherein the sleep quality assessment model is a tree model; and
    acquiring the sleep quality assessment model comprises:
    acquiring training data, wherein the training data comprises a preset number of sample sleep features;
    determining, according to a pretrained neural network model and the preset number of sample sleep features, sample sleep discomfort symptoms corresponding to the sample sleep features;
    training an initial tree model with the sample sleep features and the corresponding sample sleep discomfort symptoms to obtain a trained tree model; and
    using the trained tree model as the sleep quality assessment model.
  
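The training scheme of claim 7 (a pretrained model supplies pseudo-labels, and a tree model is then fitted on them) can be sketched with a dependency-free one-level decision stump, the simplest tree model. Everything here is an illustrative assumption: the stand-in `pseudo_label` function replaces the claim's pretrained neural network, and the features and thresholds are fabricated for demonstration.

```python
# Sketch of claim 7: pseudo-label sample sleep features with a stand-in
# "pretrained model", then fit a one-level decision stump on those labels.

def pseudo_label(feature):
    # Stand-in for the pretrained neural network of claim 7 (assumption).
    return "restless" if feature[0] < 6.0 else "normal"

def fit_stump(samples, labels):
    """Pick the feature/threshold pair with the fewest training errors."""
    best = None
    for f in range(len(samples[0])):
        for threshold in sorted({s[f] for s in samples}):
            predict = lambda s, f=f, t=threshold: (
                "restless" if s[f] < t else "normal")
            errors = sum(predict(s) != y for s, y in zip(samples, labels))
            if best is None or errors < best[0]:
                best = (errors, f, threshold)
    _, f, t = best
    return lambda s: "restless" if s[f] < t else "normal"

samples = [[7.5, 0.9], [4.0, 0.4], [8.0, 0.95], [5.0, 0.5]]  # [hours, quality]
labels = [pseudo_label(s) for s in samples]
stump = fit_stump(samples, labels)
print(stump([4.5, 0.45]))   # restless
```

A production system would use a richer tree model and real neural-network outputs; the point of the sketch is only the two-stage pipeline, pseudo-labeling followed by tree fitting.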
  8. The method according to claim 1, wherein before determining the sleep data of the object to be processed, the method further comprises:
    acquiring an acceleration signal output by a wearable device within a preset time period;
    determining, according to the acceleration signal, a motion feature of the object to be processed within the preset time period;
    determining, according to the acceleration signal, a posture feature of the object to be processed within the preset time period; and
    determining, according to the posture feature and the motion feature, an in-bed state of the object to be processed within the preset time period.
  9. The method according to claim 8, wherein the acceleration signal comprises an acceleration at each time instant, and determining, according to the acceleration signal, the motion feature of the object to be processed within the preset time period comprises:
    dividing the preset time period into a plurality of time windows based on a specified time length;
    determining, according to the acceleration at each time instant within each time window, a mean absolute deviation corresponding to each time window;
    determining a type label corresponding to each time window according to a magnitude relationship between the mean absolute deviation corresponding to each time window and an activity threshold, wherein the type label is used to characterize an activity state of the object to be processed within the corresponding time window; and
    determining, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period.
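The windowing and labeling steps of claim 9 can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the window length, activity threshold, and label names are assumptions.

```python
import statistics

# Sketch of claim 9: split the signal into fixed-length windows, compute
# each window's mean absolute deviation (MAD), and assign a type label by
# comparing the MAD against an activity threshold.

def window_labels(accel, window_len, threshold):
    labels = []
    for start in range(0, len(accel) - window_len + 1, window_len):
        window = accel[start:start + window_len]
        mean = statistics.fmean(window)
        mad = statistics.fmean(abs(a - mean) for a in window)
        labels.append("active" if mad > threshold else "quiet")
    return labels

signal = [1.0, 1.0, 1.0, 1.0,   # quiet window: no deviation from the mean
          0.2, 1.9, 0.1, 1.8]   # active window: large deviations
print(window_labels(signal, window_len=4, threshold=0.1))
# ['quiet', 'active']
```

MAD is a cheap dispersion measure, which matters on a wearable: one pass per window, no squaring, and robust enough to separate wrist stillness from movement.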
  10. The method according to claim 9, wherein after determining the type label corresponding to each time window, the method further comprises:
    determining, according to the type labels corresponding to the time windows, time instants corresponding to activity change points within the preset time period;
    determining a time interval between each time window and the adjacent preceding activity change point; and
    updating, according to the time interval corresponding to each time window, the motion feature of the object to be processed within the preset time period.
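The change-point bookkeeping of claim 10 can be sketched as below; an assumed convention here is that a "change point" is any window whose label differs from its predecessor, and intervals are counted in windows rather than seconds.

```python
# Sketch of claim 10: locate activity change points from per-window type
# labels, then record, for every window, the elapsed count of windows
# since the previous change point.

def time_since_change(labels):
    """Return, per window, the number of windows since the last label change."""
    intervals, last_change = [], 0
    for i, label in enumerate(labels):
        if i > 0 and label != labels[i - 1]:
            last_change = i          # this window starts a new activity state
        intervals.append(i - last_change)
    return intervals

print(time_since_change(["quiet", "quiet", "active", "quiet", "quiet"]))
# [0, 1, 0, 0, 1]
```

A long interval since the last change point indicates a sustained state (e.g. prolonged stillness), which is exactly the kind of evidence the updated motion feature can carry.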
  11. The method according to claim 9, wherein determining, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period comprises:
    determining, according to the type labels of the time windows within the preset time period, a time window type sequence within the preset time period;
    and/or,
    determining, according to the type labels of the time windows within the preset time period, the number of windows of each type within the preset time period.
  12. The method according to claim 8, wherein the acceleration signal comprises an acceleration at each time instant, and determining, according to the acceleration signal, the posture feature of the object to be processed within the preset time period comprises:
    dividing the preset time period into a plurality of time windows based on a specified time length;
    determining, according to the acceleration at each time instant within each time window, a window acceleration corresponding to each time window;
    determining, in a case where the window acceleration corresponding to any time window is within a specified range, an acceleration vector corresponding to the time window;
    determining a distance value between each acceleration vector and a specified spherical region; and
    determining, according to a plurality of distance values within the preset time period, the posture feature of the object to be processed within the preset time period.
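The geometric step of claim 12 can be sketched as below. The sketch assumes the "specified spherical region" is a sphere of radius 1 g centered at the origin, and the "specified range" is a plausible-magnitude band around gravity; both values, and the choice of the mean vector per window, are illustrative assumptions rather than values from the disclosure.

```python
import math

# Sketch of claim 12: for windows whose mean acceleration magnitude is
# plausible (near gravity), form the mean acceleration vector and measure
# its distance to a reference sphere of radius 1 g.

def posture_distances(windows, radius=1.0, lo=0.5, hi=1.5):
    distances = []
    for window in windows:                       # window: list of (x, y, z)
        n = len(window)
        mean_vec = tuple(sum(axis) / n for axis in zip(*window))
        magnitude = math.dist((0.0, 0.0, 0.0), mean_vec)
        if lo <= magnitude <= hi:                # magnitude check of claim 12
            # distance from the vector's tip to the sphere's surface
            distances.append(abs(magnitude - radius))
    return distances

still = [[(0.0, 0.0, 1.0)] * 4]                  # flat wrist, 1 g on z-axis
print(posture_distances(still))                  # [0.0]
```

A small distance means the window's mean acceleration is essentially pure gravity (a held posture); large or excluded windows indicate movement or sensor disturbance, consistent with claim 13's fallback to the remaining windows.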
  13. The method according to claim 12, wherein after determining the window acceleration corresponding to each time window, the method further comprises:
    determining, in a case where the window acceleration corresponding to any time window is not within the specified range, the posture feature of the object to be processed within the preset time period according to the window accelerations corresponding to the remaining time windows within the preset time period.
  14. The method according to any one of claims 8-13, wherein determining, according to the posture feature and the motion feature, the in-bed state of the object to be processed within the preset time period comprises:
    determining, according to the posture feature, a first in-bed state of the object to be processed within the preset time period;
    determining, according to the motion feature, a second in-bed state of the object to be processed within the preset time period; and
    determining, in a case where the first in-bed state is the same as the second in-bed state, that the in-bed state of the object to be processed within the preset time period is the first in-bed state.
  15. The method according to claim 14, wherein after determining the second in-bed state of the object to be processed within the preset time period, the method further comprises:
    determining, in a case where the first in-bed state is different from the second in-bed state and either of the first in-bed state and the second in-bed state is a non-in-bed state, that the in-bed state of the object to be processed within the preset time period is the non-in-bed state.
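The decision rule of claims 14 and 15 can be sketched compactly: keep the shared verdict when the two estimates agree, and fall back to "off bed" whenever the estimates disagree and either one votes off-bed. The state labels and the "uncertain" fallback are illustrative assumptions.

```python
# Sketch of the fusion rule in claims 14-15: agreement wins; on
# disagreement, any off-bed vote makes the fused verdict off-bed.

def fuse(first_state: str, second_state: str) -> str:
    if first_state == second_state:
        return first_state                  # claim 14: identical estimates
    if "off_bed" in (first_state, second_state):
        return "off_bed"                    # claim 15: conservative fallback
    return "uncertain"                      # disagreement with no off-bed vote

print(fuse("in_bed", "in_bed"))    # in_bed
print(fuse("in_bed", "off_bed"))   # off_bed
```

The asymmetry is deliberate: falsely reporting "in bed" while the wearer is up would corrupt downstream sleep statistics, so the off-bed vote dominates.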
  16. The method according to any one of claims 8-13, wherein a duration of the preset time period is greater than a time threshold, and determining, according to the posture feature and the motion feature, the in-bed state of the object to be processed within the preset time period comprises:
    dividing the preset time period into a plurality of time segments based on the time threshold;
    determining, according to the posture feature corresponding to each time segment, a third in-bed state of the object to be processed within each time segment;
    determining, according to the motion feature corresponding to each time segment, a fourth in-bed state of the object to be processed within each time segment; and
    determining, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period.
  17. The method according to claim 16, wherein determining, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period comprises:
    determining, in a case where at least one in-bed state corresponding to an i-th time segment is an off-bed state, at least one in-bed state corresponding to an (i+m)-th time segment is an off-bed state, and m is less than a specified value, that the in-bed states corresponding to the time segments between the i-th time segment and the (i+m)-th time segment are all the off-bed state, wherein i and m are both positive integers.
  18. The method according to claim 16, wherein determining, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period comprises:
    determining, in a case where at least one in-bed state corresponding to a j-th time segment is an off-bed state and at least one in-bed state corresponding to another time segment adjacent to the j-th time segment is a suspected off-bed state, that the in-bed state corresponding to the other time segment adjacent to the j-th time segment is the off-bed state, wherein j is a positive integer.
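The gap-bridging pass of claim 17 can be sketched as a simple smoothing over per-segment states: when two off-bed segments lie fewer than `gap` segments apart, the segments between them are also marked off-bed. The state labels, the `gap` value, and the function name are illustrative assumptions.

```python
# Sketch of claim 17: bridge short in-bed gaps between nearby off-bed
# segments, on the assumption the wearer did not return to bed in between.

def bridge_off_bed(states, gap=3):
    out = list(states)
    off = [i for i, s in enumerate(states) if s == "off_bed"]
    for a, b in zip(off, off[1:]):
        if b - a < gap:                  # m < specified value, per claim 17
            for k in range(a + 1, b):
                out[k] = "off_bed"
    return out

states = ["in_bed", "off_bed", "in_bed", "off_bed", "in_bed"]
print(bridge_off_bed(states))
# ['in_bed', 'off_bed', 'off_bed', 'off_bed', 'in_bed']
```

Without this pass, a single misclassified segment in the middle of a bathroom break would split one off-bed episode into two, inflating the apparent number of bed exits.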
  19. A wearable-device-based in-bed state monitoring method, comprising:
    acquiring an acceleration signal output by the wearable device within a preset time period;
    determining, according to the acceleration signal, a motion feature of an object to be processed within the preset time period;
    determining, according to the acceleration signal, a posture feature of the object to be processed within the preset time period; and
    determining, according to the posture feature and the motion feature, an in-bed state of the object to be processed within the preset time period.
  20. The method according to claim 19, wherein the acceleration signal comprises an acceleration at each time instant, and determining, according to the acceleration signal, the motion feature of the object to be processed within the preset time period comprises:
    dividing the preset time period into a plurality of time windows based on a specified time length;
    determining, according to the acceleration at each time instant within each time window, a mean absolute deviation corresponding to each time window;
    determining a type label corresponding to each time window according to a magnitude relationship between the mean absolute deviation corresponding to each time window and an activity threshold, wherein the type label is used to characterize an activity state of the object to be processed within the corresponding time window; and
    determining, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period.
  21. The method according to claim 20, wherein after determining the type label corresponding to each time window, the method further comprises:
    determining, according to the type labels corresponding to the time windows, time instants corresponding to activity change points within the preset time period;
    determining a time interval between each time window and the adjacent preceding activity change point; and
    updating, according to the time interval corresponding to each time window, the motion feature of the object to be processed within the preset time period.
  22. The method according to claim 20, wherein determining, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period comprises:
    determining, according to the type labels of the time windows within the preset time period, a time window type sequence within the preset time period;
    and/or,
    determining, according to the type labels of the time windows within the preset time period, the number of windows of each type within the preset time period.
  23. The method according to claim 19, wherein the acceleration signal comprises an acceleration at each time instant, and determining, according to the acceleration signal, the posture feature of the object to be processed within the preset time period comprises:
    dividing the preset time period into a plurality of time windows based on a specified time length;
    determining, according to the acceleration at each time instant within each time window, a window acceleration corresponding to each time window;
    determining, in a case where the window acceleration corresponding to any time window is within a specified range, an acceleration vector corresponding to the time window;
    determining a distance value between each acceleration vector and a specified spherical region; and
    determining, according to a plurality of distance values within the preset time period, the posture feature of the object to be processed within the preset time period.
  24. The method according to claim 23, wherein after determining the window acceleration corresponding to each time window, the method further comprises:
    determining, in a case where the window acceleration corresponding to any time window is not within the specified range, the posture feature of the object to be processed within the preset time period according to the window accelerations corresponding to the remaining time windows within the preset time period.
  25. The method according to any one of claims 19-24, wherein determining, according to the posture feature and the motion feature, the in-bed state of the object to be processed within the preset time period comprises:
    determining, according to the posture feature, a first in-bed state of the object to be processed within the preset time period;
    determining, according to the motion feature, a second in-bed state of the object to be processed within the preset time period; and
    determining, in a case where the first in-bed state is the same as the second in-bed state, that the in-bed state of the object to be processed within the preset time period is the first in-bed state.
  26. The method according to claim 25, wherein after determining the second in-bed state of the object to be processed within the preset time period, the method further comprises:
    determining, in a case where the first in-bed state is different from the second in-bed state and either of the first in-bed state and the second in-bed state is a non-in-bed state, that the in-bed state of the object to be processed within the preset time period is the non-in-bed state.
  27. The method according to any one of claims 19-24, wherein a duration of the preset time period is greater than a time threshold, and determining, according to the posture feature and the motion feature, the in-bed state of the object to be processed within the preset time period comprises:
    dividing the preset time period into a plurality of time segments based on the time threshold;
    determining, according to the posture feature corresponding to each time segment, a third in-bed state of the object to be processed within each time segment;
    determining, according to the motion feature corresponding to each time segment, a fourth in-bed state of the object to be processed within each time segment; and
    determining, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period.
  28. The method according to claim 27, wherein determining, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period comprises:
    determining, in a case where at least one in-bed state corresponding to an i-th time segment is an off-bed state, at least one in-bed state corresponding to an (i+m)-th time segment is an off-bed state, and m is less than a specified value, that the in-bed states corresponding to the time segments between the i-th time segment and the (i+m)-th time segment are all the off-bed state, wherein i and m are both positive integers.
  29. The method according to claim 27, wherein determining, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period comprises:
    determining, in a case where at least one in-bed state corresponding to a j-th time segment is an off-bed state and at least one in-bed state corresponding to another time segment adjacent to the j-th time segment is a suspected off-bed state, that the in-bed state corresponding to the other time segment adjacent to the j-th time segment is the off-bed state, wherein j is a positive integer.
  30. A sleep quality assessment apparatus, comprising:
    a determining module, configured to determine sleep data of an object to be processed;
    an extracting module, configured to extract a sleep feature according to a preset reference core sleep period of the object to be processed and the sleep data; and
    an assessing module, configured to assess sleep quality of the object to be processed according to the sleep feature.
  31. The apparatus according to claim 30, wherein the determining module is specifically configured to:
    acquire physical sign data of the object to be processed when an in-bed state of the object to be processed within a preset time period is a non-off-bed state; and
    perform sleep recognition on the physical sign data to acquire the sleep data of the object to be processed.
  32. The apparatus according to claim 30, wherein the reference core sleep period comprises: an individual core sleep period of the object to be processed within a preset time period, and/or a group core sleep period of a region to which the object to be processed belongs within the preset time period.
  33. The apparatus according to claim 32, wherein the reference core sleep period comprises the individual core sleep period and the group core sleep period; and
    the extracting module is specifically configured to:
    determine, according to the sleep data and the individual core sleep period, a first sleep feature of the object to be processed within the individual core sleep period;
    determine, according to the sleep data and the group core sleep period, a second sleep feature of the object to be processed within the group core sleep period; and
    determine the sleep feature according to the first sleep feature and the second sleep feature.
  34. The apparatus according to claim 30, further comprising:
    a first acquiring module, configured to acquire attribute information of the object to be processed;
    wherein the assessing module is specifically configured to perform sleep quality assessment on the object to be processed according to the sleep feature and the attribute information.
  35. The apparatus according to claim 34, wherein the assessing module is further configured to:
    acquire a sleep quality assessment model;
    input the sleep feature and the attribute information into the sleep quality assessment model to acquire a sleep discomfort symptom; and
    determine the sleep discomfort symptom as a sleep quality assessment result of the object to be processed.
  36. The apparatus according to claim 35, wherein the sleep quality assessment model is a tree model; and
    the assessing module is further configured to:
    acquire training data, wherein the training data comprises a preset number of sample sleep features;
    determine, according to a pretrained neural network model and the preset number of sample sleep features, sample sleep discomfort symptoms corresponding to the sample sleep features;
    train an initial tree model with the sample sleep features and the corresponding sample sleep discomfort symptoms to obtain a trained tree model; and
    use the trained tree model as the sleep quality assessment model.
  37. The apparatus according to claim 30, further comprising:
    a second acquiring module, configured to acquire an acceleration signal output by a wearable device within a preset time period;
    a first determining module, configured to determine, according to the acceleration signal, a motion feature of the object to be processed within the preset time period;
    a second determining module, configured to determine, according to the acceleration signal, a posture feature of the object to be processed within the preset time period; and
    a third determining module, configured to determine, according to the posture feature and the motion feature, an in-bed state of the object to be processed within the preset time period.
  38. The apparatus according to claim 37, wherein the first determining module comprises:
    a dividing unit, configured to divide the preset time period into a plurality of time windows based on a specified time length;
    a first determining unit, configured to determine, according to the acceleration at each time instant within each time window, a mean absolute deviation corresponding to each time window;
    a second determining unit, configured to determine a type label corresponding to each time window according to a magnitude relationship between the mean absolute deviation corresponding to each time window and an activity threshold, wherein the type label is used to characterize an activity state of the object to be processed within the corresponding time window; and
    a third determining unit, configured to determine, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period.
  39. The apparatus according to claim 38, wherein the first determining module is further configured to:
    determine, according to the type labels corresponding to the time windows, time instants corresponding to activity change points within the preset time period;
    determine a time interval between each time window and the adjacent preceding activity change point; and
    update, according to the time interval corresponding to each time window, the motion feature of the object to be processed within the preset time period.
  40. The apparatus according to claim 38, wherein the third determining unit is specifically configured to:
    determine, according to the type labels of the time windows within the preset time period, a time window type sequence within the preset time period;
    and/or,
    determine, according to the type labels of the time windows within the preset time period, the number of windows of each type within the preset time period.
  41. The apparatus according to claim 37, wherein the acceleration signal comprises an acceleration at each time instant, and the second determining module is specifically configured to:
    divide the preset time period into a plurality of time windows based on a specified time length;
    determine, according to the acceleration at each time instant within each time window, a window acceleration corresponding to each time window;
    determine, in a case where the window acceleration corresponding to any time window is within a specified range, an acceleration vector corresponding to the time window;
    determine a distance value between each acceleration vector and a specified spherical region; and
    determine, according to a plurality of distance values within the preset time period, the posture feature of the object to be processed within the preset time period.
  42. The apparatus according to claim 41, wherein the second determining module is further configured to:
    determine, in a case where the window acceleration corresponding to any time window is not within the specified range, the posture feature of the object to be processed within the preset time period according to the window accelerations corresponding to the remaining time windows within the preset time period.
  43. The apparatus according to any one of claims 37-42, wherein the third determining module comprises:
    a fourth determining unit, configured to determine, according to the posture feature, a first in-bed state of the object to be processed within the preset time period;
    a fifth determining unit, configured to determine, according to the motion feature, a second in-bed state of the object to be processed within the preset time period; and
    a sixth determining unit, configured to determine, in a case where the first in-bed state is the same as the second in-bed state, that the in-bed state of the object to be processed within the preset time period is the first in-bed state.
  44. The apparatus according to claim 43, wherein the sixth determining unit is further configured to:
    determine, in a case where the first in-bed state is different from the second in-bed state and either of the first in-bed state and the second in-bed state is a non-in-bed state, that the in-bed state of the object to be processed within the preset time period is the non-in-bed state.
  45. The apparatus according to any one of claims 37-42, wherein the third determining module comprises:
    a second dividing unit, configured to divide the preset time period into a plurality of time segments based on a time threshold;
    a seventh determining unit, configured to determine, according to the posture feature corresponding to each time segment, a third in-bed state of the object to be processed within each time segment;
    an eighth determining unit, configured to determine, according to the motion feature corresponding to each time segment, a fourth in-bed state of the object to be processed within each time segment; and
    a ninth determining unit, configured to determine, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period.
  46. The apparatus according to claim 45, wherein the ninth determining unit is specifically configured to:
    determine, in a case where at least one in-bed state corresponding to an i-th time segment is an off-bed state, at least one in-bed state corresponding to an (i+m)-th time segment is an off-bed state, and m is less than a specified value, that the in-bed states corresponding to the time segments between the i-th time segment and the (i+m)-th time segment are all the off-bed state, wherein i and m are both positive integers.
  47. The apparatus according to claim 45, wherein the ninth determining unit is further configured to:
    determine, in a case where at least one in-bed state corresponding to a j-th time segment is an off-bed state and at least one in-bed state corresponding to another time segment adjacent to the j-th time segment is a suspected off-bed state, that the in-bed state corresponding to the other time segment adjacent to the j-th time segment is the off-bed state, wherein j is a positive integer.
  48. A wearable-device-based in-bed state monitoring apparatus, comprising:
    an acquiring module, configured to acquire an acceleration signal output by the wearable device within a preset time period;
    a first determining module, configured to determine, according to the acceleration signal, a motion feature of an object to be processed within the preset time period;
    a second determining module, configured to determine, according to the acceleration signal, a posture feature of the object to be processed within the preset time period; and
    a third determining module, configured to determine, according to the posture feature and the motion feature, an in-bed state of the object to be processed within the preset time period.
  49. The apparatus according to claim 48, wherein the acceleration signal comprises an acceleration at each time instant, and the first determining module comprises:
    a dividing unit, configured to divide the preset time period into a plurality of time windows based on a specified time length;
    a first determining unit, configured to determine, according to the acceleration at each time instant within each time window, a mean absolute deviation corresponding to each time window;
    a second determining unit, configured to determine a type label corresponding to each time window according to a magnitude relationship between the mean absolute deviation corresponding to each time window and an activity threshold, wherein the type label is used to characterize an activity state of the object to be processed within the corresponding time window; and
    a third determining unit, configured to determine, according to the type labels of the time windows within the preset time period, the motion feature of the object to be processed within the preset time period.
  50. The apparatus according to claim 49, wherein the first determining module is further configured to:
    determine, according to the type labels corresponding to the time windows, time instants corresponding to activity change points within the preset time period;
    determine a time interval between each time window and the adjacent preceding activity change point; and
    update, according to the time interval corresponding to each time window, the motion feature of the object to be processed within the preset time period.
  51. The apparatus according to claim 49, wherein the third determining unit is specifically configured to:
    determine, according to the type labels of the time windows within the preset time period, a time window type sequence within the preset time period;
    and/or,
    determine, according to the type labels of the time windows within the preset time period, the number of windows of each type within the preset time period.
  52. The apparatus according to claim 48, wherein the acceleration signal comprises an acceleration at each time instant, and the second determining module is specifically configured to:
    divide the preset time period into a plurality of time windows based on a specified time length;
    determine, according to the acceleration at each time instant within each time window, a window acceleration corresponding to each time window;
    determine, in a case where the window acceleration corresponding to any time window is within a specified range, an acceleration vector corresponding to the time window;
    determine a distance value between each acceleration vector and a specified spherical region; and
    determine, according to a plurality of distance values within the preset time period, the posture feature of the object to be processed within the preset time period.
  53. The apparatus according to claim 52, wherein the second determining module is further configured to:
    determine, in a case where the window acceleration corresponding to any time window is not within the specified range, the posture feature of the object to be processed within the preset time period according to the window accelerations corresponding to the remaining time windows within the preset time period.
  54. The apparatus according to any one of claims 48-53, wherein the third determining module comprises:
    a fourth determining unit, configured to determine, according to the posture feature, a first in-bed state of the object to be processed within the preset time period;
    a fifth determining unit, configured to determine, according to the motion feature, a second in-bed state of the object to be processed within the preset time period; and
    a sixth determining unit, configured to determine, in a case where the first in-bed state is the same as the second in-bed state, that the in-bed state of the object to be processed within the preset time period is the first in-bed state.
  55. The apparatus according to claim 54, wherein the sixth determining unit is further configured to:
    determine, in a case where the first in-bed state is different from the second in-bed state and either of the first in-bed state and the second in-bed state is a non-in-bed state, that the in-bed state of the object to be processed within the preset time period is the non-in-bed state.
  56. The apparatus according to any one of claims 48-53, wherein the third determining module comprises:
    a second dividing unit, configured to divide the preset time period into a plurality of time segments based on a time threshold;
    a seventh determining unit, configured to determine, according to the posture feature corresponding to each time segment, a third in-bed state of the object to be processed within each time segment;
    an eighth determining unit, configured to determine, according to the motion feature corresponding to each time segment, a fourth in-bed state of the object to be processed within each time segment; and
    a ninth determining unit, configured to determine, according to the third in-bed state and the fourth in-bed state corresponding to each time segment within the preset time period, the in-bed state of the object to be processed in each time segment of the preset time period.
  57. The apparatus according to claim 56, wherein the ninth determining unit is specifically configured to:
    determine, in a case where at least one in-bed state corresponding to an i-th time segment is an off-bed state, at least one in-bed state corresponding to an (i+m)-th time segment is an off-bed state, and m is less than a specified value, that the in-bed states corresponding to the time segments between the i-th time segment and the (i+m)-th time segment are all the off-bed state, wherein i and m are both positive integers.
  58. The apparatus according to claim 56, wherein the ninth determining unit is further configured to:
    determine, in a case where at least one in-bed state corresponding to a j-th time segment is an off-bed state and at least one in-bed state corresponding to another time segment adjacent to the j-th time segment is a suspected off-bed state, that the in-bed state corresponding to the other time segment adjacent to the j-th time segment is the off-bed state, wherein j is a positive integer.
  59. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-18.
  60. A wearable device, comprising:
    an acceleration sensor;
    a wearable accessory;
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 19-29.
  61. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method according to any one of claims 1-18, or to perform the method according to any one of claims 19-29.
  62. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-18, or the method according to any one of claims 19-29.
PCT/CN2022/105838 2021-08-26 2022-07-14 Sleep quality assessment method, in-bed state monitoring method, and apparatuses therefor WO2023024748A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/419,199 US20240156397A1 (en) 2021-08-26 2024-01-22 Sleep Quality Assessment And In-Bed State Monitoring

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110987315.X 2021-08-26
CN202110987315.XA CN115732087A (zh) Wearable-device-based in-bed state monitoring method and apparatus, and computer device
CN202111082791.3A CN115804566A (zh) Sleep quality assessment method and apparatus, electronic device, and storage medium
CN202111082791.3 2021-09-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/419,199 Continuation US20240156397A1 (en) 2021-08-26 2024-01-22 Sleep Quality Assessment And In-Bed State Monitoring

Publications (1)

Publication Number Publication Date
WO2023024748A1 true WO2023024748A1 (zh) 2023-03-02

Family

ID=85322430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105838 WO2023024748A1 (zh) Sleep quality assessment method, in-bed state monitoring method, and apparatuses therefor

Country Status (2)

Country Link
US (1) US20240156397A1 (zh)
WO (1) WO2023024748A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160015315A1 (en) * 2014-07-21 2016-01-21 Withings System and method to monitor and assist individual's sleep
CN109222950A (zh) * 2018-10-19 2019-01-18 深圳和而泰数据资源与云技术有限公司 数据处理方法及装置
CN112754443A (zh) * 2021-01-20 2021-05-07 浙江想能睡眠科技股份有限公司 睡眠质量检测方法、系统、可读存储介质及床垫
CN112914506A (zh) * 2021-01-19 2021-06-08 青岛歌尔智能传感器有限公司 睡眠质量检测方法、装置和计算机可读存储介质

Also Published As

Publication number Publication date
US20240156397A1 (en) 2024-05-16

Similar Documents

Publication Publication Date Title
Qi et al. An overview of data fusion techniques for Internet of Things enabled physical activity recognition and measure
Serpush et al. Wearable sensor‐based human activity recognition in the smart healthcare system
Shahmohammadi et al. Smartwatch based activity recognition using active learning
Awais et al. Physical activity classification for elderly people in free-living conditions
Liu et al. Impact of sampling rate on wearable-based fall detection systems based on machine learning models
Cornacchia et al. A survey on activity detection and classification using wearable sensors
He et al. A low power fall sensing technology based on FD-CNN
Chen et al. Intelligent fall detection method based on accelerometer data from a wrist-worn smart watch
Guo et al. Smartphone-based patients’ activity recognition by using a self-learning scheme for medical monitoring
Zhang et al. Fall detection by embedding an accelerometer in cellphone and using KFD algorithm
Wu et al. Applying deep learning technology for automatic fall detection using mobile sensors
Zhao et al. Recognition of human fall events based on single tri-axial gyroscope
Mimouna et al. Human action recognition using triaxial accelerometer data: selective approach
Sathyanarayana et al. Robust automated human activity recognition and its application to sleep research
Sideridis et al. Gesturekeeper: Gesture recognition for controlling devices in iot environments
US20230004795A1 (en) Systems and methods for constructing motion models based on sensor data
Ramachandran et al. Evaluation of feature engineering on wearable sensor-based fall detection
Chen et al. Atomic head movement analysis for wearable four-dimensional task load recognition
Postawka et al. Lifelogging system based on Averaged Hidden Markov Models: dangerous activities recognition for caregivers support
Kabir et al. Secure Your Steps: A Class-Based Ensemble Framework for Real-Time Fall Detection Using Deep Neural Networks
WO2023024748A1 (zh) 睡眠质量评估方法、在床状态监测方法及其装置
Jiang et al. Fall detection systems for internet of medical things based on wearable sensors: A review
Wu et al. Nonparametric activity recognition system in smart homes based on heterogeneous sensor data
Sharma et al. On the use of multi-modal sensing in sign language classification
Fan et al. Eating gestures detection by tracking finger motion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22860094

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22860094

Country of ref document: EP

Kind code of ref document: A1