CN115120837A - Sleep environment adjusting method, system, device and medium based on deep learning - Google Patents


Info

Publication number
CN115120837A
CN115120837A
Authority
CN
China
Prior art keywords
information
sleep
human body
body posture
breathing
Prior art date
Legal status
Pending
Application number
CN202210736281.1A
Other languages
Chinese (zh)
Inventor
王炳坤
Current Assignee
De Rucci Healthy Sleep Co Ltd
Original Assignee
De Rucci Healthy Sleep Co Ltd
Priority date
Filing date
Publication date
Application filed by De Rucci Healthy Sleep Co Ltd filed Critical De Rucci Healthy Sleep Co Ltd
Priority to CN202210736281.1A
Publication of CN115120837A
Legal status: Pending


Classifications

    • A61M21/02: Devices for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis or analgesia
    • A61B5/08: Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/1116: Determining posture transitions
    • A61B5/4809: Sleep detection, i.e. determining whether a subject is asleep or not
    • A61B5/4812: Detecting sleep stages or cycles
    • A61B5/4815: Sleep quality
    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G06V10/75: Organisation of the matching processes, e.g. coarse-fine or multi-scale approaches
    • G06V10/774: Generating sets of training patterns, e.g. bagging or boosting
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G10L25/03: Speech or voice analysis characterised by the type of extracted parameters
    • G10L25/27: Speech or voice analysis characterised by the analysis technique
    • A61M2021/0027: Stimulus by the hearing sense
    • A61M2021/0044: Stimulus by the sight sense
    • A61M2021/0066: Stimulus with heating or cooling
    • A61M2230/40: Respiratory characteristics
    • A61M2230/62: Posture
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection


Abstract

The invention discloses a sleep environment adjusting method, system, device and medium based on deep learning. The method comprises the following steps: acquiring first thermal infrared image information and first breathing sound information of a user during sleep; inputting the first thermal infrared image information into a pre-trained human body posture recognition network to obtain first human body posture information, and inputting the first human body posture information and the first breathing sound information into a pre-trained breathing state recognition network to obtain first breathing state information; inputting the first human body posture information and the first breathing state information into a pre-trained sleep state recognition network to obtain first sleep state information; and adjusting indoor environmental parameters according to the first sleep state information. Because the invention does not need to monitor the user's vital sign parameters in real time through a wearable device, the accuracy of sleep state recognition and of sleep environment adjustment is improved, the user's sleep experience is improved, and the method can be widely applied in the technical field of smart homes.

Description

Sleep environment adjusting method, system, device and medium based on deep learning
Technical Field
The invention relates to the technical field of smart home, in particular to a sleep environment adjusting method, system, device and medium based on deep learning.
Background
At present, with the continuous improvement of living standards, people's requirements for quality of life are ever higher, and requirements for sleep in particular receive more and more attention. Whether sleep is sufficient and comfortable affects people's daily work and physical health, and for good sleep quality, the sleep environment is very important.
In order to improve sleep quality, various products are available on the market, such as mattresses with better comfort, mattresses with micro heating elements, and mattresses with a vibration massage function. There are also devices for improving the sleeping environment of a room, such as a light controller installed in the room, or a player installed in the room for playing hypnotic music. These products help improve people's sleep quality to a certain extent, but have the following defects:
(1) most existing sleep-assisting products require manual adjustment and control by the user, and once the user falls asleep, no intelligent adjustment can be carried out;
(2) some sleep-assisting products can perform intelligent adjustment by acquiring the user's vital sign parameters, but they usually need a wearable device to monitor those parameters in real time, and wearing such a device during sleep can cause bodily discomfort and degrade the user's sleep experience.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems existing in the prior art.
Therefore, an object of the embodiments of the present invention is to provide a sleep environment adjusting method based on deep learning, which improves accuracy of sleep state identification of a user, thereby improving accuracy of sleep environment adjustment and sleep experience of the user.
Another object of an embodiment of the present invention is to provide a sleep environment adjusting system based on deep learning.
In order to achieve the above technical purpose, the technical solution adopted by the embodiments of the present invention is as follows:
in a first aspect, an embodiment of the present invention provides a sleep environment adjusting method based on deep learning, including the following steps:
acquiring first thermal infrared image information and first breathing sound information of a user during sleeping;
inputting the first thermal infrared image information into a pre-trained human body posture recognition network to obtain first human body posture information, and inputting the first human body posture information and the first breathing sound information into a pre-trained breathing state recognition network to obtain first breathing state information;
inputting the first human body posture information and the first respiratory state information into a pre-trained sleep state recognition network to obtain first sleep state information;
and adjusting indoor environment parameters according to the first sleep state information, wherein the environment parameters comprise at least one of temperature, light and music.
Further, in an embodiment of the present invention, the step of acquiring the first thermal infrared image information and the first breathing sound information of the user during sleep specifically includes:
acquiring first thermal infrared image information of a user during sleeping through a thermal infrared camera;
acquiring first environment sound information of the user during sleep through sound collection equipment, and performing noise filtering on the first environment sound information to obtain first breathing sound information;
wherein, the thermal infrared camera and the sound collection equipment are both arranged on the bedstead.
Further, in an embodiment of the present invention, the step of performing noise filtering on the first environment sound information to obtain first breath sound information specifically includes:
performing audio frame division on the first environmental sound information to obtain a first audio frame sequence;
dividing the first audio frame sequence into a plurality of first audio frame subsequences corresponding to single breath according to a preset first spectrum distribution of the single breath;
and extracting the audio features of the first audio frame subsequence, matching the audio features with preset noise features, and filtering out the corresponding first audio frame subsequence when the matching degree is greater than or equal to a preset first threshold value to obtain first breathing sound information.
Further, in an embodiment of the present invention, the sleep environment adjusting method based on deep learning further includes a step of training a human posture recognition network in advance, which specifically includes:
acquiring a preset human body posture image data set, wherein the human body posture image data set comprises a plurality of human body posture images and corresponding posture labels;
carrying out heat map processing on the human body posture image to obtain a first training sample, and inputting the first training sample into a human body posture recognition network constructed in advance to obtain a posture recognition result;
determining a first loss value of the human body posture recognition network according to the posture recognition result and the posture label;
and updating the parameters of the human body posture recognition network according to the first loss value.
Further, in an embodiment of the present invention, the sleep environment adjusting method based on deep learning further includes a step of training a respiratory state recognition network in advance, which specifically includes:
acquiring second human body posture information and second breathing sound information of a tester during sleeping, and obtaining a breathing state label through manual marking;
inputting the second human body posture information and the second breathing sound information into a pre-constructed breathing state identification network to obtain a breathing state identification result;
determining a second loss value of the respiratory state identification network according to the respiratory state identification result and the respiratory state label;
and updating the parameters of the respiratory state identification network according to the second loss value.
Further, in an embodiment of the present invention, the method for adjusting sleep environment based on deep learning further includes a step of training a respiratory state recognition network and a sleep state recognition network in combination, which specifically includes:
acquiring vital sign data of the tester during sleep through wearable equipment, and determining a sleep state label according to the vital sign data;
inputting the second human body posture information and the breathing state identification result into a pre-constructed sleep state identification network to obtain a sleep state identification result;
determining a third loss value of the sleep state identification network according to the sleep state identification result and the sleep state label;
and updating the parameters of the sleep state identification network and the parameters of the respiratory state identification network through a back propagation algorithm according to the third loss value.
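The joint training step above, in which the third loss value is propagated from the sleep state recognition network back into the breathing state recognition network, can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual networks: the feature dimensions, state counts, random data, and learning rate are all assumptions made for demonstration, and each "network" is reduced to a single softmax layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    # mean negative log-probability of the true classes
    return float(-np.log(p[np.arange(len(y)), y] + 1e-12).mean())

# Toy setup (assumed): 6-dim posture features, 4-dim breathing-sound features,
# 3 breathing states, 4 sleep states, 32 labelled samples.
n, d_pose, d_breath, k_breath, k_sleep = 32, 6, 4, 3, 4
X_pose = rng.normal(size=(n, d_pose))
X_breath = rng.normal(size=(n, d_breath))
y_sleep = rng.integers(0, k_sleep, size=n)   # sleep state labels (from wearable sign data)

W_r = rng.normal(scale=0.1, size=(d_pose + d_breath, k_breath))  # breathing-state net
W_s = rng.normal(scale=0.1, size=(d_pose + k_breath, k_sleep))   # sleep-state net

losses = []
for _ in range(100):
    # forward: breathing net first, then sleep net on (posture, breathing result)
    X_r = np.hstack([X_pose, X_breath])
    p_r = softmax(X_r @ W_r)
    Z = np.hstack([X_pose, p_r])
    p_s = softmax(Z @ W_s)
    losses.append(cross_entropy(p_s, y_sleep))

    # gradient of the third loss at the sleep net's output
    d_logits_s = (p_s - np.eye(k_sleep)[y_sleep]) / n
    dW_s = Z.T @ d_logits_s

    # back-propagate through the concatenation into the breathing net
    dp_r = (d_logits_s @ W_s.T)[:, d_pose:]
    d_logits_r = p_r * (dp_r - (dp_r * p_r).sum(axis=1, keepdims=True))
    dW_r = X_r.T @ d_logits_r

    # update BOTH networks from the sleep-state loss
    W_s -= 0.3 * dW_s
    W_r -= 0.3 * dW_r

print(f"sleep-state loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point that matches the patent step is the last two updates: the gradient of the sleep-state loss flows through the sleep state network's input back into the breathing state network, so both parameter sets are updated from the single third loss value.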
Further, in an embodiment of the present invention, the step of adjusting the indoor environment parameter according to the first sleep state information includes at least one of the following steps:
acquiring a preset temperature control curve, and adjusting the indoor temperature according to the first sleep state information and the temperature control curve;
acquiring a preset light control curve, and adjusting indoor light according to the first sleep state information and the light control curve;
and acquiring a preset music control curve, and adjusting indoor music according to the first sleep state information and the music control curve.
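As an illustration of the control-curve steps above, the sketch below reads an indoor temperature setpoint off a preset curve keyed by the recognized sleep state. The state names, time points, and temperature values are invented for the example; the patent only specifies that a preset control curve is combined with the first sleep state information.

```python
import numpy as np

# Hypothetical preset temperature control curves: for each recognized sleep
# state, pairs of (minutes into that state, target temperature in deg C).
TEMP_CURVES = {
    "awake":       ([0, 30, 60], [24.0, 23.5, 23.0]),
    "light_sleep": ([0, 30, 60], [23.0, 22.5, 22.0]),
    "deep_sleep":  ([0, 30, 60], [22.0, 21.5, 21.5]),
}

def target_temperature(sleep_state: str, minutes_in_state: float) -> float:
    """Read the setpoint off the preset control curve by linear interpolation."""
    times, temps = TEMP_CURVES[sleep_state]
    return float(np.interp(minutes_in_state, times, temps))

print(target_temperature("deep_sleep", 15.0))  # halfway between 22.0 and 21.5
```

The light and music control curves would follow the same pattern, with brightness levels or playback volume in place of temperature values; `np.interp` also clamps queries past the last point to the final curve value, which suits a setpoint that should hold steady once reached.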
In a second aspect, an embodiment of the present invention provides a sleep environment adjusting system based on deep learning, including:
the data acquisition module is used for acquiring first thermal infrared image information and first breathing sound information when a user sleeps;
the first recognition module is used for inputting the first thermal infrared image information into a pre-trained human body posture recognition network to obtain first human body posture information, and inputting the first human body posture information and the first breathing sound information into a pre-trained breathing state recognition network to obtain first breathing state information;
the second identification module is used for inputting the first human body posture information and the first respiratory state information into a pre-trained sleep state identification network to obtain first sleep state information;
and the environment adjusting module is used for adjusting indoor environment parameters according to the first sleep state information, wherein the environment parameters comprise at least one of temperature, light and music.
In a third aspect, an embodiment of the present invention provides a sleep environment adjusting apparatus based on deep learning, including:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, the at least one program causes the at least one processor to implement a deep learning-based sleep environment adjustment method as described above.
In a fourth aspect, the present invention also provides a computer-readable storage medium, in which a processor-executable program is stored, and the processor-executable program is used for executing the above-mentioned deep learning-based sleep environment adjusting method when executed by a processor.
Advantages and benefits of the present invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention:
according to the embodiment of the invention, the thermal infrared image information and the breathing sound information of the user during sleeping are acquired, the human body posture information and the breathing state information of the user are acquired through the pre-trained human body posture identification network and the breathing state identification network, and then the sleeping state information of the user is acquired through the pre-trained sleeping state identification network, so that the indoor temperature, light and music can be adjusted according to the sleeping state information. According to the embodiment of the invention, the sleep state of the user is identified through the thermal infrared image information and the breathing sound information, the physical sign parameters of the user do not need to be monitored in real time through wearable equipment, and the sleep experience of the user is improved; the human body posture, the breathing state and the sleeping state of the user are identified step by step through the human body posture identification network, the breathing state identification network and the sleeping state identification network, so that the accuracy of the sleeping state identification of the user is improved, the accuracy of the sleep environment adjustment is further improved, and the sleeping experience of the user is further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required in the embodiments are described below. It should be understood that the following drawings illustrate only some embodiments of the technical solutions of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart illustrating steps of a sleep environment adjusting method based on deep learning according to an embodiment of the present invention;
fig. 2 is a block diagram illustrating a sleep environment adjusting system based on deep learning according to an embodiment of the present invention;
fig. 3 is a block diagram of a sleep environment adjusting apparatus based on deep learning according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, "a plurality of" means two or more. Where "first" and "second" appear, they are used only to distinguish technical features and should not be understood as indicating or implying relative importance, the number of indicated technical features, or their precedence. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art.
Referring to fig. 1, an embodiment of the present invention provides a sleep environment adjusting method based on deep learning, which specifically includes the following steps:
s101, first thermal infrared image information and first breathing sound information of a user during sleep are obtained.
Specifically, users typically use only dim lighting when sleeping, under which a clear visible-light image cannot be acquired; the embodiment of the invention therefore acquires thermal infrared images for the subsequent human body posture recognition, breathing state recognition and sleep state recognition. Step S101 specifically includes:
s1011, acquiring first thermal infrared image information of a user during sleeping through a thermal infrared camera;
s1012, acquiring first environment sound information of a user during sleep through sound acquisition equipment, and performing noise filtration on the first environment sound information to obtain first breathing sound information;
wherein the thermal infrared camera and the sound collection equipment are both arranged on the bedstead.
Specifically, the embodiment of the invention uses a thermal infrared camera to collect thermal infrared image information of the user during sleep, uses sound collection equipment to collect environmental sound information of the user during sleep, and obtains breathing sound information through noise filtering. The thermal infrared camera and the sound collection equipment can both be arranged on the bedstead, for example on the headboard; after the user's sleep action is detected, the thermal infrared camera and the sound collection equipment are switched on to collect data.
As a further optional implementation manner, the step of performing noise filtering on the first environment sound information to obtain the first breath sound information specifically includes:
a1, performing audio frame division on the first environment sound information to obtain a first audio frame sequence;
a2, dividing the first audio frame sequence into a plurality of first audio frame subsequences corresponding to single breath according to the preset first frequency spectrum distribution of the single breath;
a3, extracting the audio features of the first audio frame subsequence, matching the audio features with preset noise features, and filtering out the corresponding first audio frame subsequence when the matching degree is greater than or equal to a preset first threshold value to obtain first breathing sound information.
S102, inputting the first thermal infrared image information into a pre-trained human body posture recognition network to obtain first human body posture information, and inputting the first human body posture information and first breathing sound information into a pre-trained breathing state recognition network to obtain first breathing state information.
As a further optional implementation, the sleep environment adjusting method based on deep learning further includes a step of training a human posture recognition network in advance, which specifically includes:
b1, acquiring a preset human body posture image data set, wherein the human body posture image data set comprises a plurality of human body posture images and corresponding posture labels;
b2, carrying out heat map processing on the human body posture image to obtain a first training sample, and inputting the first training sample into a human body posture recognition network constructed in advance to obtain a posture recognition result;
b3, determining a first loss value of the human body posture recognition network according to the posture recognition result and the posture label;
and B4, updating the parameters of the human body posture recognition network according to the first loss value.
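Steps B1 to B4 form a standard supervised training loop. As a minimal sketch, a single softmax layer stands in for the human body posture recognition network below (heat-map preprocessing is omitted, and the cross-entropy loss is one of the admissible choices discussed later); the architecture and all names are illustrative assumptions:

```python
import numpy as np

def train_posture_classifier(samples, labels, n_classes, lr=0.5, epochs=200):
    """Toy sketch of steps B1-B4 with a single softmax layer standing in
    for the posture recognition network."""
    n, d = samples.shape
    W = np.zeros((d, n_classes))        # network parameters
    onehot = np.eye(n_classes)[labels]  # posture labels from the data set (B1)
    for _ in range(epochs):
        logits = samples @ W                              # forward pass (B2)
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        loss = -np.log(p[np.arange(n), labels]).mean()    # first loss value (B3)
        grad = samples.T @ (p - onehot) / n               # gradient for update (B4)
        W -= lr * grad
    return W, loss
```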
Specifically, after the first training sample is input into the initialized human body posture recognition network, the posture recognition result output by the model is obtained, and the accuracy of the network can be evaluated against the posture label so that the model parameters can be updated.

For the human body posture recognition network, the accuracy of the posture recognition result can be measured by a loss function (Loss Function). The loss function is defined on a single training sample and measures the prediction error on that sample; specifically, the loss value is determined from the sample's label and the model's prediction for it. In actual training, a training data set contains many samples, so a cost function (Cost Function) is generally adopted to measure the overall error: the cost function is defined on the whole training data set and computes the average prediction error over all samples, which better measures the prediction performance of the model. For a general machine learning model, the cost function plus a regularization term measuring model complexity can serve as the training objective function, from which the loss value over the whole training data set is obtained. Many loss functions are in common use, such as the 0-1 loss, squared loss, absolute loss, logarithmic loss, and cross-entropy loss, any of which can serve as the loss function of a machine learning model; they are not described one by one here. In the embodiment of the invention, one of these loss functions can be selected to determine the training loss value.
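The relationship between the per-sample loss function, the data-set-wide cost function, and the regularized objective described above can be written out directly. The choice of cross-entropy for the loss and an L2 term for the regularizer below is an illustrative assumption, since the embodiment leaves both open:

```python
import numpy as np

def objective(preds, labels, params, lam=1e-3):
    """Sketch of the training objective: the cost function (mean loss over
    the whole training set) plus an L2 regularization term measuring
    model complexity."""
    eps = 1e-12
    # loss function: cross-entropy defined on each single training sample
    losses = -np.log(preds[np.arange(len(labels)), labels] + eps)
    cost = losses.mean()                              # cost function: average loss
    reg = lam * sum((p ** 2).sum() for p in params)   # regularization term
    return cost + reg
```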
Based on the training loss value, the model parameters are updated with a back-propagation algorithm, and after several rounds of iteration the trained human body posture recognition network is obtained. Specifically, the number of iteration rounds may be preset, or training may be considered complete when accuracy on a test set meets the requirement.
As a further optional implementation, the sleep environment adjusting method based on deep learning further includes a step of training a respiratory state recognition network in advance, which specifically includes:
c1, acquiring second human body posture information and second breathing sound information of the tester during sleeping, and obtaining a breathing state label through manual labeling;
c2, inputting the second human body posture information and the second breathing sound information into a pre-constructed breathing state identification network to obtain a breathing state identification result;
c3, determining a second loss value of the respiratory state identification network according to the respiratory state identification result and the respiratory state label;
and C4, updating the parameters of the respiratory state identification network according to the second loss value.
Specifically, training the respiratory state recognition network requires respiratory state labels obtained through manual annotation: image data of a tester (such as a professional test sleeper) can be collected during a sleep test, the second human body posture information and second breathing sound information obtained from the image data, and the data of each stage manually annotated by professionals to obtain the respiratory state labels.
The training of the respiratory state recognition network is divided into two stages, the first stage is similar to the training process of the human body posture recognition network, and details are not repeated here. In the second stage, the respiratory state recognition network and the sleep state recognition network need to be trained jointly, that is, parameters of the respiratory state recognition network are updated reversely through loss values of the sleep state recognition network, and the training process in this stage will be described in the following contents.
S103, inputting the first human body posture information and the first respiratory state information into a pre-trained sleep state recognition network to obtain first sleep state information.
As a further optional implementation, the sleep environment adjusting method based on deep learning further includes a step of training a respiratory state recognition network and a sleep state recognition network in a combined manner, which specifically includes:
D1, acquiring vital sign data of the tester during sleep through a wearable device, and determining a sleep state label according to the vital sign data;
d2, inputting the second human body posture information and the breathing state recognition result into a pre-constructed sleep state recognition network to obtain a sleep state recognition result;
d3, determining a third loss value of the sleep state identification network according to the sleep state identification result and the sleep state label;
d4, updating the parameters of the sleep state identification network and the parameters of the respiratory state identification network through a back propagation algorithm according to the third loss value.
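The distinctive part of steps D1 to D4 is that the third loss value, computed at the output of the sleep state recognition network, is back-propagated through both parameter sets. A minimal sketch of one joint update, in which both networks are reduced to single linear layers (the real architectures are not given in the text, and all names are illustrative):

```python
import numpy as np

def joint_training_step(x, y_onehot, W_resp, W_sleep, lr=0.1):
    """One joint update for steps D2-D4: the respiratory network's output
    feeds the sleep network, and the third loss value is back-propagated
    through both sets of parameters."""
    h = x @ W_resp                        # respiratory state recognition result (D2)
    logits = h @ W_sleep                  # sleep state recognition result
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    loss = -(y_onehot * np.log(p + 1e-12)).sum(axis=1).mean()   # third loss (D3)
    g = (p - y_onehot) / len(x)
    grad_sleep = h.T @ g                  # gradient w.r.t. sleep-net parameters
    grad_resp = x.T @ (g @ W_sleep.T)     # chain rule back into the respiratory net
    W_sleep -= lr * grad_sleep            # D4: both networks updated from one loss
    W_resp -= lr * grad_resp
    return loss
```

Repeating this step on labeled data drives the third loss down while shaping the respiratory network's features to serve the sleep-state task, which is the "potential association" effect described below.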
Specifically, while the image data of the tester (such as a professional test sleeper) is collected during the sleep test, vital sign data of the tester during sleep can also be acquired through a wearable device, and professionals perform manual annotation according to the vital sign data to obtain the sleep state labels.
When the sleep state recognition network is trained, the second human posture information obtained in the previous steps and the respiratory state recognition result output by the respiratory state recognition network can be integrated, a third loss value is calculated according to the sleep state recognition result output by the sleep state recognition network, and the third loss value is reversely transmitted to the sleep state recognition network and the respiratory state recognition network for network parameter updating.
In the embodiment of the invention, the network parameters are updated through the joint training of the sleep state identification network and the respiratory state identification network, so that the potential association between the characteristics of the sleep state identification network and the respiratory state identification network can be learned, the correlation between the posture characteristic and the respiratory characteristic is kept, and the accuracy of the respiratory state and the sleep state obtained through identification is improved.
And S104, adjusting indoor environment parameters according to the first sleep state information, wherein the environment parameters comprise at least one of temperature, light and music.
Specifically, the sleep state information may include the current sleep stage, such as a sleep-onset stage, a light sleep stage, a deep sleep stage, and a rapid eye movement (REM) stage; the sleep state information may also include the sleep quality of the current sleep stage, the user's mood, and the like. Step S104 includes at least one of the following steps:
s1041, acquiring a preset temperature control curve, and adjusting the indoor temperature according to the first sleep state information and the temperature control curve;
s1042, acquiring a preset light control curve, and adjusting indoor light according to the first sleep state information and the light control curve;
and S1043, acquiring a preset music control curve, and adjusting indoor music according to the first sleep state information and the music control curve.
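As a concrete illustration of step S1041, a control curve can be stored as a few (time, value) breakpoints per sleep stage and interpolated at adjustment time; steps S1042 and S1043 would follow the same pattern. The curve points and stage names below are invented for illustration, not taken from the embodiment:

```python
import numpy as np

# Preset temperature control curve: minutes spent in the current
# sleep stage -> target temperature in degrees Celsius (assumed values).
TEMP_CURVE = {
    "falling_asleep": [(0, 24.0), (30, 23.0)],
    "deep_sleep":     [(0, 22.0), (120, 21.0)],
}

def target_temperature(stage, minutes_in_stage):
    """Linearly interpolate the preset temperature control curve for the
    current sleep stage and the time already spent in that stage."""
    points = TEMP_CURVE[stage]
    times = [t for t, _ in points]
    temps = [c for _, c in points]
    # np.interp clamps to the endpoint values outside the breakpoint range
    return float(np.interp(minutes_in_stage, times, temps))
```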
Specifically, the temperature control curve, the light control curve and the music control curve may be pre-drawn according to the influence of each factor on sleep, and many related research results exist in the prior art, which is not the focus of the embodiment of the present invention and is not described herein again.
In the embodiment of the invention, the user's sleep state information is recognized in real time and the time spent in each state is tracked, and real-time adjustment control is performed according to the user's current sleep state, the duration of the current sleep state, the temperature control curve, the light control curve and the music control curve.
The method steps of the embodiments of the present invention are described above. It can be understood that the embodiment of the invention recognizes the user's sleep state from thermal infrared image information and breathing sound information, so the user's vital signs do not need to be monitored in real time through a wearable device, which improves the user's sleep experience. The user's human body posture, breathing state and sleep state are recognized step by step through the human body posture recognition network, the breathing state recognition network and the sleep state recognition network, which improves the accuracy of sleep state recognition and therefore of sleep environment adjustment, further improving the sleep experience. By jointly training the sleep state recognition network and the breathing state recognition network to update the network parameters, the potential association between the features of the two networks can be learned and the correlation between posture features and breathing features preserved, improving the accuracy of the recognized breathing state and sleep state.
Referring to fig. 2, an embodiment of the present invention provides a sleep environment adjusting system based on deep learning, including:
the data acquisition module is used for acquiring first thermal infrared image information and first breathing sound information when a user sleeps;
the first recognition module is used for inputting the first thermal infrared image information into a pre-trained human body posture recognition network to obtain first human body posture information, and inputting the first human body posture information and the first breathing sound information into a pre-trained breathing state recognition network to obtain first breathing state information;
the second recognition module is used for inputting the first human body posture information and the first respiratory state information into a pre-trained sleep state recognition network to obtain first sleep state information;
and the environment adjusting module is used for adjusting indoor environment parameters according to the first sleep state information, wherein the environment parameters comprise at least one of temperature, light and music.
The contents in the above method embodiments are all applicable to the present system embodiment, the functions specifically implemented by the present system embodiment are the same as those in the above method embodiment, and the beneficial effects achieved by the present system embodiment are also the same as those achieved by the above method embodiment.
Referring to fig. 3, an embodiment of the present invention provides a sleep environment adjusting device based on deep learning, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one program causes the at least one processor to implement the method for adjusting sleep environment based on deep learning.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
An embodiment of the present invention also provides a computer-readable storage medium, in which a program executable by a processor is stored, and the program executable by the processor is used for executing the above-mentioned sleep environment adjusting method based on deep learning.
The computer-readable storage medium of the embodiment of the invention can execute the sleep environment adjusting method based on deep learning provided by the embodiment of the method of the invention, can execute any combination of the implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.
The embodiment of the invention also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the above-described functions and/or features may be integrated in a single physical device and/or software module, or one or more of the functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is to be determined from the appended claims along with their full scope of equivalents.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer readable medium could even be paper or another suitable medium upon which the above described program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A sleep environment adjusting method based on deep learning is characterized by comprising the following steps:
acquiring first thermal infrared image information and first breathing sound information of a user during sleeping;
inputting the first thermal infrared image information into a pre-trained human body posture recognition network to obtain first human body posture information, and inputting the first human body posture information and the first breathing sound information into a pre-trained breathing state recognition network to obtain first breathing state information;
inputting the first human body posture information and the first respiratory state information into a pre-trained sleep state recognition network to obtain first sleep state information;
and adjusting indoor environment parameters according to the first sleep state information, wherein the environment parameters comprise at least one of temperature, light and music.
2. The sleep environment adjusting method based on deep learning of claim 1, wherein the step of obtaining the first thermal infrared image information and the first breathing sound information of the user during sleep specifically comprises:
acquiring first thermal infrared image information of a user during sleeping through a thermal infrared camera;
acquiring first environment sound information of a user during sleeping through sound acquisition equipment, and performing noise filtering on the first environment sound information to obtain first breathing sound information;
wherein, the thermal infrared camera and the sound collection equipment are both arranged on the bed frame.
3. The deep learning-based sleep environment adjusting method according to claim 2, wherein the step of noise filtering the first environment sound information to obtain first breathing sound information specifically comprises:
performing audio frame division on the first environmental sound information to obtain a first audio frame sequence;
dividing the first audio frame sequence into a plurality of first audio frame subsequences corresponding to single breath according to a preset first spectrum distribution of the single breath;
and extracting the audio features of the first audio frame subsequence, matching the audio features with preset noise features, and filtering out the corresponding first audio frame subsequence when the matching degree is greater than or equal to a preset first threshold value to obtain first breathing sound information.
4. The deep learning-based sleep environment adjustment method according to claim 1, further comprising a step of pre-training a human posture recognition network, which specifically includes:
acquiring a preset human body posture image data set, wherein the human body posture image data set comprises a plurality of human body posture images and corresponding posture labels;
carrying out heat map processing on the human body posture image to obtain a first training sample, and inputting the first training sample into a human body posture recognition network constructed in advance to obtain a posture recognition result;
determining a first loss value of the human body posture recognition network according to the posture recognition result and the posture label;
and updating the parameters of the human body posture recognition network according to the first loss value.
5. The deep learning-based sleep environment adjusting method according to claim 1, further comprising a step of pre-training a respiratory state recognition network, which specifically includes:
acquiring second human body posture information and second breathing sound information of a tester during sleeping, and obtaining a breathing state label through manual labeling;
inputting the second human body posture information and the second breathing sound information into a pre-constructed breathing state identification network to obtain a breathing state identification result;
determining a second loss value of the respiratory state identification network according to the respiratory state identification result and the respiratory state label;
and updating the parameters of the respiratory state identification network according to the second loss value.
6. The deep learning-based sleep environment adjusting method according to claim 5, further comprising a step of jointly training a respiratory state recognition network and a sleep state recognition network, which specifically includes:
acquiring vital sign data of the tester during sleeping through wearable equipment, and determining a sleep state label according to the vital sign data;
inputting the second human body posture information and the breathing state identification result into a pre-constructed sleep state identification network to obtain a sleep state identification result;
determining a third loss value of the sleep state identification network according to the sleep state identification result and the sleep state label;
and updating the parameters of the sleep state identification network and the parameters of the respiratory state identification network through a back propagation algorithm according to the third loss value.
7. The deep learning-based sleep environment adjusting method according to any one of claims 1 to 6, wherein the step of adjusting the indoor environment parameter according to the first sleep state information comprises at least one of the following steps:
acquiring a preset temperature control curve, and adjusting the indoor temperature according to the first sleep state information and the temperature control curve;
acquiring a preset light control curve, and adjusting indoor light according to the first sleep state information and the light control curve;
and acquiring a preset music control curve, and adjusting indoor music according to the first sleep state information and the music control curve.
8. A sleep environment adjustment system based on deep learning, comprising:
the data acquisition module is used for acquiring first thermal infrared image information and first breathing sound information when a user sleeps;
the first recognition module is used for inputting the first thermal infrared image information into a pre-trained human body posture recognition network to obtain first human body posture information, and inputting the first human body posture information and the first breathing sound information into a pre-trained breathing state recognition network to obtain first breathing state information;
the second identification module is used for inputting the first human body posture information and the first respiratory state information into a pre-trained sleep state identification network to obtain first sleep state information;
and the environment adjusting module is used for adjusting indoor environment parameters according to the first sleep state information, wherein the environment parameters comprise at least one of temperature, light and music.
9. A sleep environment adjusting device based on deep learning is characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor may implement a deep learning based sleep environment adjustment method according to any one of claims 1 to 7.
10. A computer-readable storage medium in which a processor-executable program is stored, the processor-executable program being configured to perform a deep learning based sleep environment adjusting method according to any one of claims 1 to 7 when being executed by a processor.
CN202210736281.1A 2022-06-27 2022-06-27 Sleep environment adjusting method, system, device and medium based on deep learning Pending CN115120837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210736281.1A CN115120837A (en) 2022-06-27 2022-06-27 Sleep environment adjusting method, system, device and medium based on deep learning


Publications (1)

Publication Number Publication Date
CN115120837A true CN115120837A (en) 2022-09-30

Family

ID=83379041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210736281.1A Pending CN115120837A (en) 2022-06-27 2022-06-27 Sleep environment adjusting method, system, device and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN115120837A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117404783A (en) * 2023-10-18 2024-01-16 广州易而达科技股份有限公司 Air conditioner control method and device, air conditioner and storage medium
CN117404783B (en) * 2023-10-18 2024-05-31 广州易而达科技股份有限公司 Air conditioner control method and device, air conditioner and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130178756A1 (en) * 2010-09-29 2013-07-11 Fujitsu Limited Breath detection device and breath detection method
KR20180017392A (en) * 2016-08-09 2018-02-21 한국전자통신연구원 System and method for providing sleeping state monitoring service
CN107928673A (en) * 2017-11-06 2018-04-20 腾讯科技(深圳)有限公司 Acoustic signal processing method, device, storage medium and computer equipment
CN109568760A (en) * 2017-09-29 2019-04-05 中国移动通信有限公司研究院 Sleep environment adjusting method and system
CN110772700A (en) * 2019-09-18 2020-02-11 平安科技(深圳)有限公司 Automatic sleep-aiding music pushing method and device, computer equipment and storage medium
CN110974195A (en) * 2019-12-05 2020-04-10 珠海格力电器股份有限公司 Method, device and storage medium for adjusting sleep environment
CN111227791A (en) * 2020-01-09 2020-06-05 珠海格力电器股份有限公司 Sleep quality monitoring method and sleep monitoring device
CN111281347A (en) * 2020-03-06 2020-06-16 韩赛红 Passive lateral lying device, control method and sleep assisting system
CN111814830A (en) * 2020-06-08 2020-10-23 珠海格力电器股份有限公司 Sleep state detection model construction method, sleep state detection method and device
CN112307940A (en) * 2020-10-28 2021-02-02 有半岛(北京)信息科技有限公司 Model training method, human body posture detection method, device, equipment and medium
CN112826461A (en) * 2020-12-30 2021-05-25 深圳市携康网络科技有限公司 Sleep analysis method, system, computer device and storage medium
CN113448438A (en) * 2021-06-25 2021-09-28 内蒙古工业大学 Control system and method based on sleep perception
WO2022009008A1 (en) * 2020-07-10 2022-01-13 3M Innovative Properties Company Breathing apparatus and method of communicating using breathing apparatus
US20220047160A1 (en) * 2020-05-08 2022-02-17 Research & Business Foundation Sungkyunkwan University Ceiling ai health monitoring apparatus and remote medical-diagnosis method using the same
CN114376564A (en) * 2021-12-29 2022-04-22 华南理工大学 Sleep staging method, system, device and medium based on cardiac shock signal

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130178756A1 (en) * 2010-09-29 2013-07-11 Fujitsu Limited Breath detection device and breath detection method
KR20180017392A (en) * 2016-08-09 2018-02-21 Electronics and Telecommunications Research Institute System and method for providing sleeping state monitoring service
CN109568760A (en) * 2017-09-29 2019-04-05 China Mobile Communications Co., Ltd. Research Institute Sleep environment adjusting method and system
CN107928673A (en) * 2017-11-06 2018-04-20 Tencent Technology (Shenzhen) Co., Ltd. Acoustic signal processing method, device, storage medium and computer equipment
CN110772700A (en) * 2019-09-18 2020-02-11 Ping An Technology (Shenzhen) Co., Ltd. Automatic sleep-aiding music pushing method and device, computer equipment and storage medium
CN110974195A (en) * 2019-12-05 2020-04-10 Gree Electric Appliances, Inc. of Zhuhai Method, device and storage medium for adjusting sleep environment
CN111227791A (en) * 2020-01-09 2020-06-05 Gree Electric Appliances, Inc. of Zhuhai Sleep quality monitoring method and sleep monitoring device
CN111281347A (en) * 2020-03-06 2020-06-16 Han Saihong Passive lateral lying device, control method and sleep assisting system
US20220047160A1 (en) * 2020-05-08 2022-02-17 Research & Business Foundation Sungkyunkwan University Ceiling AI health monitoring apparatus and remote medical-diagnosis method using the same
CN111814830A (en) * 2020-06-08 2020-10-23 Gree Electric Appliances, Inc. of Zhuhai Sleep state detection model construction method, sleep state detection method and device
WO2022009008A1 (en) * 2020-07-10 2022-01-13 3M Innovative Properties Company Breathing apparatus and method of communicating using breathing apparatus
CN112307940A (en) * 2020-10-28 2021-02-02 Youbandao (Beijing) Information Technology Co., Ltd. Model training method, human body posture detection method, device, equipment and medium
CN112826461A (en) * 2020-12-30 2021-05-25 Shenzhen Xiekang Network Technology Co., Ltd. Sleep analysis method, system, computer device and storage medium
CN113448438A (en) * 2021-06-25 2021-09-28 Inner Mongolia University of Technology Control system and method based on sleep perception
CN114376564A (en) * 2021-12-29 2022-04-22 South China University of Technology Sleep staging method, system, device and medium based on cardiac shock signal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117404783A (en) * 2023-10-18 2024-01-16 Guangzhou Yierda Technology Co., Ltd. Air conditioner control method and device, air conditioner and storage medium
CN117404783B (en) * 2023-10-18 2024-05-31 Guangzhou Yierda Technology Co., Ltd. Air conditioner control method and device, air conditioner and storage medium

Similar Documents

Publication Publication Date Title
CN112166475A (en) Respiratory system based sound management of respiratory conditions
JP6114470B2 (en) HEALTHCARE DECISION SUPPORT SYSTEM, PATIENT CARE SYSTEM, AND HEALTHCARE DECISION METHOD
KR101535432B1 (en) Contents valuation system and contents valuating method using the system
CN109087706B (en) Human health assessment method and system based on sleep big data
US20220351859A1 (en) User interface for navigating through physiological data
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN107402635B (en) Mental health adjusting method and system combining brain waves and virtual reality
CN114376564A (en) Sleep staging method, system, device and medium based on cardiac shock signal
KR102600175B1 (en) Method, computing device and computer program for analyzing a user's sleep state through sound information
CN104699931A (en) Neural network blood pressure prediction method and mobile phone based on human face
CN110706816A (en) Method and equipment for regulating sleep environment based on artificial intelligence
JP3954295B2 (en) Identification/response measurement method, and computer-readable recording medium containing identification/response measurement program
CN115422973A (en) Electroencephalogram emotion recognition method of space-time network based on attention
CN117598700B (en) Intelligent blood oxygen saturation detection system and method
CN116509336B (en) Sleep periodicity detection and adjustment method, system and device based on waveform analysis
CN115120837A (en) Sleep environment adjusting method, system, device and medium based on deep learning
CN108771539A (en) A kind of detection method and its device of the contactless heart rate based on camera shooting
CN113566395B (en) Air conditioner, control method and device thereof and computer readable storage medium
CN114488841B (en) Data collection processing method of intelligent wearable device
Zhang et al. Quantification of advanced dementia patients’ engagement in therapeutic sessions: An automatic video based approach using computer vision and machine learning
CN115349821A (en) Sleep staging method and system based on multi-modal physiological signal fusion
CN106502409A (en) A kind of Product Emotion analysis system of utilization brain information and method
CN112086193A (en) Face recognition health prediction system and method based on Internet of things
CN117408564B (en) Online academic counseling system
van Gorp et al. Aleatoric Uncertainty Estimation of Overnight Sleep Statistics Through Posterior Sampling Using Conditional Normalizing Flows

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220930)