CN114821674A - Sleep state monitoring method, electronic device and storage medium - Google Patents

Sleep state monitoring method, electronic device and storage medium

Info

Publication number
CN114821674A
Authority
CN
China
Prior art keywords
depth
sleep state
human body
infant
body part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210737853.8A
Other languages
Chinese (zh)
Other versions
CN114821674B (en)
Inventor
寇鸿斌
吴坚
陈智超
何武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Lushenshi Technology Co ltd
Original Assignee
Hefei Dilusense Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Dilusense Technology Co Ltd
Priority to CN202210737853.8A
Publication of CN114821674A
Application granted
Publication of CN114821674B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0476 Cameras to detect unsafe condition, e.g. video cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4809 Sleep detection, i.e. determining whether a subject is asleep or not
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4812 Detecting sleep stages or cycles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4815 Sleep quality
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4818 Sleep apnoea
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/04 Babies, e.g. for SIDS detection

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • General Physics & Mathematics (AREA)
  • Anesthesiology (AREA)
  • Multimedia (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Embodiments of the present application relate to the technical field of video monitoring, and disclose a sleep state monitoring method, an electronic device and a storage medium. The sleep state monitoring method comprises: acquiring continuous frames of depth images containing a human body in a sleep area; identifying a specified body part of the human body in the depth images, and determining the depth information of the specified body part in the depth images; and analyzing the depth information of the specified body part in the depth images to determine the sleep state of the human body. By analyzing the continuous depth information, various sleep behaviors of the human body can be determined quickly, so that the sleep state of the human body can be judged.

Description

Sleep state monitoring method, electronic device and storage medium
Technical Field
The embodiment of the application relates to the technical field of video monitoring, in particular to a sleep state monitoring method, electronic equipment and a storage medium.
Background
Sleep is inseparably related to people's physical and mental health. Effectively monitoring the sleep state makes it possible to understand how a person sleeps, to carry out targeted treatment or intervention in time, and to reduce physical and psychological harm as well as dangerous events.
Currently, a common sleep monitoring approach is to wear an intelligent monitoring device and evaluate the wearer's sleep state by analyzing the sleep data it collects. However, this approach is mostly limited to monitoring simple sleep states, such as irregular heart rate, apnea and snoring, and cannot perform complex sleep behavior analysis. Moreover, the intelligent monitoring device itself affects the wearer's sleeping comfort, which in turn makes the sleep monitoring result inaccurate.
Disclosure of Invention
An object of embodiments of the present application is to provide a sleep state monitoring method, an electronic device and a storage medium, which can quickly determine various sleep behaviors of a human body by analyzing the depth information of continuous frames of depth images of the human body in a sleep area, so as to determine the sleep state of the human body.
In order to solve the above technical problem, an embodiment of the present application provides a sleep state monitoring method, including: acquiring continuous frames of depth images containing a human body in a sleep area; identifying a specified body part of the human body in the depth images, and determining depth information of the specified body part in the depth images; and analyzing the depth information of the specified body part in the depth images to determine the sleep state of the human body.
An embodiment of the present application also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the sleep state monitoring method as mentioned in the above embodiments.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the sleep state monitoring method mentioned in the above embodiments.
According to the sleep state monitoring method provided by the embodiments of the present application, continuous frames of depth images of the sleep area are acquired while the human body sleeps, the specified body part is identified in these depth images, its depth information is obtained, and the continuous depth information of the specified body part is analyzed; in this way, various sleep behaviors of the human body can be determined quickly, and the sleep state of the human body can be determined accurately. In addition, owing to the privacy and security of depth images, the user's personal privacy is well protected during sleep monitoring, which improves the user experience.
In addition, in the sleep state monitoring method according to an embodiment of the present application, the human body includes an infant and an adult, and analyzing the depth information of the specified body part in the depth images to determine the sleep state of the human body includes: judging, according to the three-dimensional information corresponding to the pixel points in the depth images where the infant and the adult are located, whether any body part of the adult overlaps the infant, and judging whether the body part of the adult covers the infant's body in the overlapping area; and issuing an alarm prompt if it is determined that the body part of the adult covers the infant's body. The bodies and limbs of an adult and an infant can move unconsciously during sleep, and when the adult's body or limbs unconsciously move to cover the infant's body, the situation is extremely dangerous for the infant.
In addition, in the sleep state monitoring method provided by an embodiment of the present application, the specified body part includes the limbs; analyzing the depth information of the specified body part in the depth image to determine the sleep state of the human body includes: extracting the depth values of a first key point at each joint of the limbs to form a depth value sequence corresponding to the first key point; collecting the three-dimensional information and the frame time corresponding to the pixel point in the depth image where each depth value in that sequence is located, and determining the motion acceleration of the first key point between two adjacent frame times according to that three-dimensional information and frame time; and determining that the human body is in a sleep state with limb twitching when the motion acceleration of the first key point between two adjacent frame times is greater than a preset acceleration threshold. The motion acceleration between two adjacent frame times is calculated from the three-dimensional information of the first key points at the limb joints and the corresponding frame times; since limb twitching is a short, rapid behavior, the human body is considered to exhibit limb twitching when this motion acceleration exceeds the preset acceleration threshold.
In addition, in the sleep state monitoring method provided by an embodiment of the present application, the specified body part includes the chest; analyzing the depth information of the specified body part in the depth image to determine the sleep state of the human body includes: extracting the depth values of a second key point of the chest in the continuous frames of depth images to form a depth value sequence corresponding to the second key point; counting the depth extreme values in the depth value sequence of the second key point, and determining the breathing time of the human body based on the frame times corresponding to the depth images where two adjacent depth extreme values are located; and determining that the human body is in a sleep state with apnea when the breathing time is greater than a preset standard breathing time. The present application analyzes the continuous depth value sequence of the chest: because the chest rises and falls during breathing, the depth values of the chest in the continuous frames of depth images change periodically, so the breathing time of the human body can be determined quickly from the frame times corresponding to two adjacent depth extreme values, and whether an apnea sleep state exists can then be determined by monitoring the breathing time.
In addition, in the sleep state monitoring method provided by this embodiment of the present application, determining the motion acceleration of the first key point between two adjacent frame times according to the three-dimensional information and the frame time corresponding to the pixel point in the depth image where each depth value is located includes: obtaining a first graph of the change of the first key point's three-dimensional information over time, based on the three-dimensional information and the frame time corresponding to the pixel point in the depth image where each depth value is located; calculating the instantaneous speed of the curve at each frame time on the first graph; and calculating the motion acceleration of the curve between every two adjacent frame times from the instantaneous speed of the curve at each frame time on the first graph, and taking that acceleration as the motion acceleration of the first key point between the two adjacent frame times.
In addition, in the sleep state monitoring method according to an embodiment of the present application, the specified body part includes the mouth, and analyzing the depth information of the specified body part in the depth image to determine the sleep state of the human body includes: extracting the depth values of a third key point of the mouth in the continuous frames of depth images, and determining the upper-lip center point and the lower-lip center point of the mouth according to the depth values of the third key point; analyzing the coordinates of the upper-lip center point and the lower-lip center point of the mouth in the continuous frames of depth images to determine whether the human body is in a sleep state of mouth breathing; and, when the human body is in a sleep state of mouth breathing, determining the frequency of mouth breathing according to the change rule of the coordinates of the upper-lip center point and the lower-lip center point.
In addition, in the sleep state monitoring method according to an embodiment of the present application, issuing an alarm prompt if it is determined that the body part of the adult covers the infant's body includes: issuing an alarm prompt if it is determined that the body part of the adult covers the infant's body and the covered area includes the infant's head.
In addition, the sleep state monitoring method provided by the embodiment of the present application further includes: judging the distance between the infant and the edge of the sleeping area according to the three-dimensional information corresponding to the pixel points in the depth image of the infant and the three-dimensional information of the sleeping area; and if the distance between the infant and the edge of the sleeping area is judged to be smaller than the preset distance, sending an alarm prompt.
Drawings
One or more embodiments are illustrated by way of example in the corresponding figures of the accompanying drawings, and these exemplary illustrations do not constitute a limitation of the embodiments. Elements with the same reference numerals in the drawings represent similar elements, and unless otherwise stated, the figures are not drawn to scale.
Fig. 1 is a flowchart of a sleep state monitoring method provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, each embodiment of the present application is described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate, however, that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; the technical solutions claimed in the present application can nevertheless be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The details of the sleep state monitoring method according to the present embodiment are described below. The following implementation details are provided to facilitate understanding and are not necessary for practicing the present solution.
An embodiment of the present application relates to a sleep state monitoring method, as shown in fig. 1, including:
step 101, acquiring continuous frame depth images containing a human body in a sleep area.
In the sleep monitoring method of this embodiment, depth video during sleep is acquired through depth cameras arranged around the sleep area, and sleep monitoring is completed by analyzing the depth video. One or more depth cameras may be provided around the sleep area. For example, when there is a single depth camera, it can be arranged directly above the sleep area, so that the depth values between the camera and all parts of the human body can be acquired accurately, which facilitates judging the sleep state of the human body. When there are multiple depth cameras, one or more of them can be arranged above, to the left of, to the right of, and so on around the sleep area; when acquiring the depth video during sleep, the multiple depth cameras can shoot simultaneously, or they can coordinate with one another according to the detected orientation of the human body so as to achieve continuous shooting. In addition, the sleep area in this embodiment is the area of the entire bed where the human body lies during sleep.
Specifically, when multiple depth cameras are arranged around the sleep area, for example above it, to its left and to its right, the camera above the sleep area is initially put into the working state and starts shooting the depth video, while the cameras at the other positions remain in standby and do not shoot. A depth image is obtained from the depth video and the average depth of each body part in the depth image is calculated. When the average depths of at least two body parts fall within a first numerical range, the human body is determined to be in a lying state; when the average depths of at least two body parts fall within a second numerical range, the human body is determined to be in a side-lying state. In the latter case, the camera above sends a working instruction to the camera on the corresponding side, so that the side camera starts shooting the depth video, while the camera above continues shooting for a short time and then enters standby. When none of the depth images acquired by the side cameras (on the left or the right) can detect the complete area of one eye, the side camera sends a working instruction to the camera above, so that the camera above starts shooting the depth video, while the side camera continues shooting for a short time and then enters standby.
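For illustration only, the following Python sketch shows one way the posture decision behind this camera hand-off could be made from the per-part average depths; the function name, the part-depth dictionary and the two numerical ranges are assumptions made for the example and are not taken from the original disclosure.

    # Illustrative sketch of the posture decision that drives the camera hand-off.
    # The two numerical ranges are placeholders; in practice they would be
    # calibrated so that a lying posture and a side-lying posture are separated
    # by the average depth of at least two body parts.
    LYING_RANGE = (900.0, 1100.0)   # hypothetical "first numerical range" (mm)
    SIDE_RANGE = (1100.0, 1300.0)   # hypothetical "second numerical range" (mm)

    def classify_posture(part_mean_depths: dict) -> str:
        """Return 'lying', 'side' or 'unknown' from {body part: mean depth}."""
        lying_hits = sum(LYING_RANGE[0] <= d <= LYING_RANGE[1]
                         for d in part_mean_depths.values())
        side_hits = sum(SIDE_RANGE[0] <= d <= SIDE_RANGE[1]
                        for d in part_mean_depths.values())
        if lying_hits >= 2:
            return "lying"    # overhead camera keeps shooting
        if side_hits >= 2:
            return "side"     # wake the side camera, overhead camera goes to standby
        return "unknown"

    # e.g. classify_posture({"head": 950.0, "chest": 980.0, "legs": 1010.0}) -> "lying"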
And 102, identifying the specified body part of the human body in the depth image, and determining the depth information of the specified body part in the depth image.
In this embodiment, each frame of depth image contains the complete human body. The specified body part is identified in the depth image by an image recognition technique, the depth information of that body part in the depth image is acquired, and the sleep behavior of the body part can be determined by analyzing this depth information. For example: for the head, detecting the state of the mouth can determine the breathing pattern of the human body; detecting the height of the whole head above the sleep area can determine the pillow height during sleep; and detecting the limbs can determine the sleeping posture of the human body.
And 103, analyzing the depth information of the specified body part in the depth image, and determining the sleep state of the human body.
In one embodiment, the human body includes an infant and an adult, and the analyzing the depth information of the specified body part in the depth image to determine the sleep state of the human body includes: judging whether any body part of the adult is overlapped with the infant or not according to the three-dimensional information corresponding to the pixel points in the depth images of the infant and the adult, and judging whether the body part of the adult covers the body of the infant or not in the overlapped area; and if the body part of the adult is judged to cover the body of the infant, an alarm prompt is sent out.
Specifically, the method for judging whether any body part of the adult overlaps the infant, and whether the body part of the adult covers the infant's body in the overlapping area, includes: (1) determining the human body areas in the depth image, and distinguishing the infant's body area from the adult's body area according to the sizes of the areas; (2) acquiring the three-dimensional information (x, y, z) of the infant's body area and the three-dimensional information (x1, y1, z1) of the adult's body area; (3) for each body part of the infant's body area, acquiring the rate of change of the depth values of the pixel points in that body part along the direction corresponding to the body part; (4) judging whether the depth change rate of each pixel point is within a preset numerical range; if so, no processing is performed; if not, the pixel point is determined to be a boundary pixel point of an overlapping area, and a body part of the adult covers the infant's body near that boundary pixel point.
It should be noted that a human body area in this embodiment refers to the minimum region containing a human body. In a scene where an adult presses on an infant, the depth values in the overlapping area of the adult and the infant fluctuate noticeably compared with the non-overlapping area, so whether the adult covers the infant's body can be determined quickly from the rate of change of the depth values. In the non-overlapping area, the depth value of each body part stays essentially around a stable value, i.e. the depth change rate fluctuates only slightly around zero (the preset numerical range is set accordingly). At the junction of the overlapping and non-overlapping areas, the depth values change sharply, i.e. the depth change rate deviates clearly from zero.
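The following Python sketch illustrates this boundary detection for one body part of the infant; the function name, the array layout and the preset rate range are assumptions made for the example, not part of the original disclosure, and in practice the part direction and the range would come from the preceding steps.

    import numpy as np

    def find_overlap_boundary(depth, part_pixels, direction, rate_range=(-2.0, 2.0)):
        """Scan one body part of the infant along its preset direction and return
        the pixels whose depth change rate leaves the preset range, i.e. likely
        boundary points of an area covered by the adult (values are placeholders)."""
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        # order the part's pixels along the part direction (e.g. shoulder -> hand)
        ordered = sorted(part_pixels, key=lambda p: p[0] * d[0] + p[1] * d[1])
        boundary = []
        for (r0, c0), (r1, c1) in zip(ordered, ordered[1:]):
            step = np.hypot(r1 - r0, c1 - c0)
            if step == 0:
                continue
            rate = (float(depth[r1, c1]) - float(depth[r0, c0])) / step
            if not (rate_range[0] <= rate <= rate_range[1]):
                boundary.append((r1, c1))   # depth jumps here: likely overlap boundary
        return boundary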
During detection, the calculation can be performed along a preset direction for each body part. For example: for an arm in the depth image, the shoulder can be taken as the starting point, the hand as the end point, and the direction from the starting point to the end point as the direction of the arm along which the depth change rate is calculated; for the torso in the depth image, the neck can be taken as the starting point, a point at a preset body proportion below the neck as the end point, and the direction from the starting point to the end point as the direction of the torso along which the depth change rate is calculated. Further, in order to improve detection accuracy, the arm in the depth image can be divided into two parts, the upper arm and the lower arm, such that the direction from the shoulder to the elbow joint is the direction along which the depth change rate is calculated for the upper arm, and the direction from the elbow joint to the hand is the direction for the lower arm. Similarly, the leg direction can be determined according to the above method. The above calculation directions of the depth change rate can be computed from the two-dimensional coordinates of the starting point and the end point.
Generally, in a scene where an adult presses on an infant, the outline of the adult's body area is complete and unobstructed, so the coordinates of any body part of the adult can easily be obtained, while the outline of the infant's body area is incomplete. Therefore, whether the adult covers the infant can be determined from the completeness of the infant's outline and the completeness of the adult's outline: when the infant's outline is detected to be incomplete and the adult's outline is complete, it is determined that a body part of the adult covers the infant's body.
In addition, if it is determined that the body part of the adult covers the infant's body and the covered area includes the infant's head, an alarm prompt is issued. Specifically, after the boundary pixel points of the overlapping area are determined, an alarm prompt of the corresponding level is generated according to the positions of the boundary pixel points. If a boundary pixel point is at the head, the situation is dangerous for the infant, and a high-level alarm prompt is generated.
In addition, for a scene in which an infant exists in the sleep area, the distance between the infant and the edge of the sleep area can be judged according to the three-dimensional information corresponding to the pixel points in the depth image in which the infant is located and the three-dimensional information of the sleep area; and if the distance between the infant and the edge of the sleeping area is judged to be smaller than the preset distance, an alarm prompt is sent.
Specifically, the infant's body area can be determined in the depth image, and it can be judged whether the coordinates of any pixel point in the infant's body area fall within the boundary area of the sleep area (the boundary area is set according to the preset distance); if so, the distance between the infant and the edge of the sleep area is determined to be smaller than the preset distance. Alternatively, the depth values of the pixel points in the boundary area of the sleep area can be examined, and when the depth value of a pixel point in the boundary area is smaller than a depth threshold, the distance between the infant and the edge of the sleep area is determined to be smaller than the preset distance.
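As a hedged illustration of the coordinate-based check, the sketch below flags an infant that has drifted into the boundary area of the bed; the argument names, the (top, left, bottom, right) bed box and the pixel margin are assumptions made for the example.

    import numpy as np

    def infant_near_edge(infant_pixels, bed_box, margin_px=30):
        """Return True if any (row, col) pixel of the infant's body area falls
        inside the boundary area of the sleep area. `bed_box` is
        (top, left, bottom, right) in image coordinates and `margin_px` is the
        preset distance expressed in pixels; both are illustrative."""
        top, left, bottom, right = bed_box
        pts = np.asarray(infant_pixels)
        rows, cols = pts[:, 0], pts[:, 1]
        near_rows = (rows < top + margin_px) | (rows > bottom - margin_px)
        near_cols = (cols < left + margin_px) | (cols > right - margin_px)
        return bool(np.any(near_rows | near_cols))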
In one embodiment, the designated body part comprises a limb part; analyzing the depth information of the designated body part in the depth image, and determining the sleep state of the human body, wherein the method comprises the following steps: extracting the depth value of a first key point at each joint of the four limbs to form a depth value sequence corresponding to the first key point; counting three-dimensional information and frame time corresponding to pixel points in a depth image where each depth value in a depth value sequence corresponding to a first key point is located, and determining the motion acceleration of the first key point between two adjacent frame times according to the three-dimensional information and the frame time corresponding to the pixel points in the depth image where each depth value is located; and determining the sleep state that the human body has limb twitching when the motion acceleration of the first key point between two adjacent frame moments is greater than a preset acceleration threshold.
In this embodiment, when the specified body part is a limb part, a depth value sequence corresponding to a first key point of the limb part is obtained. The first key point may include key points at the elbow joints, hand joints, knee joints, ankle joints, toe joints and other positions, and each joint position may be provided with several key points. Taking a knee joint as an example, the three-dimensional information of a first key point at the knee joint is acquired at different frame times, and the motion acceleration of that same first key point between two adjacent frame times is calculated, the motion acceleration comprising the accelerations in three directions (x, y and z); when the motion acceleration of the first key point at the knee joint is greater than a preset acceleration threshold, it is determined that the human body is in a sleep state with limb twitching.
In addition, after it is determined that the human body is in a sleep state with limb twitching, the facial color image corresponding to the moment of twitching can be acquired, expression recognition can be performed on the facial color image to determine the facial expression type at that moment, and a possible cause of the limb twitching can be suggested according to the facial expression type. Causes of limb twitching include calcium deficiency, epilepsy, high fever and the like.
Since limb twitching is a short, rapid behavior, whether a limb moves rapidly within a short time is judged from the magnitude of the change in velocity. Specifically, determining the motion acceleration of the first key point between two adjacent frame times according to the three-dimensional information and the frame time corresponding to the pixel point in the depth image where each depth value is located includes: obtaining a first graph of the change of the first key point's three-dimensional information over time, based on the three-dimensional information and the frame time corresponding to the pixel point in the depth image where each depth value is located; calculating the instantaneous speed of the curve at each frame time on the first graph; and calculating the motion acceleration of the curve between every two adjacent frame times from the instantaneous speed of the curve at each frame time on the first graph, and taking that acceleration as the motion acceleration of the first key point between the two adjacent frame times.
The first graph is a four-dimensional plot whose four dimensions are the depth value, the x coordinate, the y coordinate and the frame time. When calculating the instantaneous speed of the curve at each frame time, each dimension can be computed separately to obtain vx, vy and vz, and the instantaneous speed at that frame time is then obtained from the per-dimension speeds: the instantaneous speed at each frame time equals the square root of the sum of the squares of the per-dimension instantaneous speeds, i.e. v = sqrt(vx^2 + vy^2 + vz^2).
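A minimal Python sketch of this computation is shown below for a single joint keypoint; the array shapes and the acceleration threshold are assumptions made for the example, and the threshold in particular is a placeholder rather than a value taken from the disclosure.

    import numpy as np

    def detect_limb_twitch(points, times, accel_threshold=5000.0):
        """Flag limb twitching for one joint keypoint.
        `points` is an (N, 3) array of (x, y, depth) values of the same keypoint
        over N consecutive frames and `times` the matching frame times in seconds;
        the acceleration threshold is an illustrative placeholder."""
        points = np.asarray(points, dtype=float)
        times = np.asarray(times, dtype=float)
        dt = np.diff(times)                                   # time between adjacent frames
        velocities = np.diff(points, axis=0) / dt[:, None]    # per-dimension v_x, v_y, v_z
        speeds = np.linalg.norm(velocities, axis=1)           # v = sqrt(v_x^2 + v_y^2 + v_z^2)
        accelerations = np.abs(np.diff(speeds)) / dt[1:]      # speed change between adjacent frames
        return bool(np.any(accelerations > accel_threshold))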
In one embodiment, the designated body part comprises a chest region; analyzing the depth information of the designated body part in the depth image, and determining the sleep state of the human body, wherein the method comprises the following steps: extracting the depth value of a second key point of the chest part in the continuous frame depth images to form a depth value sequence corresponding to the second key point; counting depth extreme values in the depth value sequence of the second key point, and determining the breathing time of the human body based on the frame time corresponding to the depth image where the two adjacent depth extreme values are located; and when the breath time is greater than the preset standard breath time, determining that the human body has the sleep state of apnea.
In this embodiment, the second key point of the chest may be a single pixel point or several pixel points. For continuous monitoring, the point of the chest whose depth value changes the most and the point whose depth value changes the least can each be taken as a second key point. The depth values of the second key point at consecutive frame times are obtained, and the breathing time of the human body is determined from the frame times corresponding to two adjacent depth extreme values. The breathing time is the time required to complete one exhalation and one inhalation, i.e. one breathing cycle corresponds to one rise and fall of the chest. Therefore, for the depth value sequence of a second key point, in the change of the depth values over time shown by the continuous frames, the time between two adjacent depth minima, or between two adjacent depth maxima, is the breathing time. In the process of determining the breathing time, monitoring several second key points can improve accuracy. The standard breathing time is generally 3-5 seconds, and when the measured breathing time exceeds the standard breathing time during monitoring, it can be determined that the human body is in a sleep state with apnea.
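The sketch below illustrates the extrema-based check for one chest keypoint; the use of scipy's find_peaks and the 5-second threshold are choices made for the example (the description only states a standard breathing time of roughly 3-5 seconds).

    import numpy as np
    from scipy.signal import find_peaks

    def detect_apnea(chest_depth, frame_times, standard_breath_time=5.0):
        """Flag possible apnea from the depth-value sequence of a chest keypoint.
        Two adjacent depth maxima bound one breathing cycle; if the time between
        them exceeds the standard breathing time, apnea is assumed."""
        chest_depth = np.asarray(chest_depth, dtype=float)
        frame_times = np.asarray(frame_times, dtype=float)
        peak_idx, _ = find_peaks(chest_depth)      # frame indices of depth maxima
        if len(peak_idx) < 2:
            return True                            # no complete breathing cycle observed
        breath_times = np.diff(frame_times[peak_idx])
        return bool(np.any(breath_times > standard_breath_time))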
In one embodiment, the designated body part comprises a mouth part, and the analyzing the depth information of the designated body part in the depth image to determine the sleep state of the human body comprises: extracting the depth value of a third key point of the mouth part in the continuous frame depth images, and determining the center point of the upper lip and the center point of the lower lip of the mouth part according to the depth value of the third key point; analyzing the coordinates of the center point of the upper lip and the center point of the lower lip of the mouth part in the continuous frame depth images to determine whether the human body has a sleep state of mouth breathing; when the human body has a sleep state of mouth breathing, the frequency of mouth breathing is determined according to the change rule of the coordinates of the central point of the upper lip and the central point of the lower lip.
In this embodiment, the third key points include the key points on the lip contour, and the upper-lip center point and the lower-lip center point are the points with the smallest depth value among the key points on the upper and lower lip contours, respectively. The distance between the upper lip and the lower lip can be determined from the coordinates of the upper-lip center point and the lower-lip center point, and if this distance is greater than a preset distance threshold, it can be determined that the human body is in a sleep state of mouth breathing. Furthermore, the frequency of mouth breathing can be determined from the change rule of the distance between the upper and lower lips at consecutive frame times: when the human body breathes through the mouth, the distance between the upper and lower lips periodically increases and decreases, so the mouth breathing frequency can be determined from this change rule. Mouth breathing can lead to adenoid facies in infants and snoring in adults, both of which affect sleep quality. If only the chest is monitored, it can only be judged whether the human body has apnea; the mouth-breathing state cannot be inferred indirectly from chest monitoring and requires close monitoring of the mouth itself.
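The following sketch shows one way the lip-distance rule could be expressed; the lip-center arrays, the opening threshold in millimetres and the breaths-per-minute conversion are assumptions made for the example.

    import numpy as np

    def mouth_breathing(upper_lip, lower_lip, frame_times, open_threshold=4.0):
        """Decide whether the mouth is open during sleep and, if so, estimate the
        mouth-breathing frequency. `upper_lip` and `lower_lip` are (N, 3) arrays of
        the lip center points over N frames; threshold and units are illustrative."""
        upper_lip = np.asarray(upper_lip, dtype=float)
        lower_lip = np.asarray(lower_lip, dtype=float)
        frame_times = np.asarray(frame_times, dtype=float)
        gap = np.linalg.norm(upper_lip - lower_lip, axis=1)   # lip distance per frame
        if not np.any(gap > open_threshold):
            return False, 0.0                                 # mouth stays closed: nose breathing
        # one breath per local maximum of the periodically widening/narrowing gap
        peaks = np.flatnonzero((gap[1:-1] > gap[:-2]) & (gap[1:-1] > gap[2:])) + 1
        duration = frame_times[-1] - frame_times[0]
        freq_per_min = 60.0 * len(peaks) / duration if duration > 0 else 0.0
        return True, freq_per_min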
According to the sleep state monitoring method provided by the embodiments of the present application, continuous frames of depth images of the sleep area are acquired while the human body sleeps, the specified body part is identified in these depth images, its depth information is obtained, and the continuous depth information of the specified body part is analyzed; in this way, various sleep behaviors of the human body can be determined quickly, and the sleep state of the human body can be judged accurately. In addition, owing to the privacy and security of depth images, the user's personal privacy is well protected during sleep monitoring, which improves the user experience.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into a single step, or a step may be split into several steps, as long as the same logical relationship is preserved, and all such variants fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant designs into it, without changing the core design of the algorithm and process, also falls within the protection scope of this patent.
Embodiments of the present application relate to an electronic device, as shown in fig. 2, including:
at least one processor 201; and a memory 202 communicatively coupled to the at least one processor 201; the memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201 to enable the at least one processor 201 to execute the sleep state monitoring method according to the above embodiments.
The electronic device includes: one or more processors 201 and a memory 202, with one processor 201 being illustrated in fig. 2. The processor 201 and the memory 202 may be connected by a bus or other means, and fig. 2 illustrates the connection by the bus as an example. Memory 202, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 201 executes various functional applications and data processing of the device by running non-volatile software programs, instructions and modules stored in the memory 202, i.e., implements the sleep state monitoring method described above.
The memory 202 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory 202 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 202 may optionally include memory located remotely from the processor 201, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 202, and when executed by the one or more processors 201, perform the sleep state monitoring method of any of the embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects of executing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
Embodiments of the present application relate to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, those skilled in the art can understand that all or some of the steps of the methods in the above embodiments can be implemented by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.

Claims (10)

1. A sleep state monitoring method, comprising:
acquiring continuous frame depth images containing a human body in a sleep area;
identifying a specified body part of a human body in the depth image, and determining depth information of the specified body part in the depth image;
and analyzing the depth information of the specified body part in the depth image to determine the sleep state of the human body.
2. The sleep state monitoring method according to claim 1, wherein the human body comprises an infant and an adult, and the analyzing the depth information of the designated body part in the depth image to determine the sleep state of the human body comprises:
judging whether any body part of the adult is overlapped with the infant or not according to the three-dimensional information corresponding to the pixel points in the depth images of the infant and the adult, and judging whether the body part of the adult covers the body of the infant or not in the overlapped area;
and if the body part of the adult is judged to cover the body of the infant, an alarm prompt is sent out.
3. The sleep state monitoring method according to claim 1, wherein the designated body part includes an extremity part; the analyzing the depth information of the designated body part in the depth image and determining the sleep state of the human body comprises:
extracting the depth value of a first key point at each joint of the four limbs to form a depth value sequence corresponding to the first key point;
counting three-dimensional information and frame time corresponding to pixel points in a depth image of each depth value in a depth value sequence corresponding to the first key point, and determining the motion acceleration of the first key point between two adjacent frame time according to the three-dimensional information and the frame time corresponding to the pixel points in the depth image of each depth value;
and determining that the human body has a sleep state of limb twitching when the motion acceleration of the first key point between two adjacent frame moments is greater than a preset acceleration threshold.
4. The sleep state monitoring method according to claim 3, wherein the determining the motion acceleration of the first key point between two adjacent frame times according to the three-dimensional information corresponding to the pixel point in the depth image where each depth value is located and the frame time comprises:
acquiring a first curve graph of the change of the three-dimensional information of the first key point along with time based on the three-dimensional information and the frame time corresponding to the pixel point in the depth image of each depth value;
respectively calculating the instantaneous speed of the curve at each frame time on the first curve graph;
and calculating the motion acceleration of the curve between every two adjacent frame moments according to the instantaneous speed of the curve at each frame moment on the first graph, and determining the acceleration as the motion acceleration of the first key point between the two adjacent frame moments.
5. The sleep state monitoring method according to claim 1, wherein the designated body part includes a chest part; the analyzing the depth information of the designated body part in the depth image to determine the sleep state of the human body comprises:
extracting the depth value of a second key point of the chest part in the continuous frame depth images to form a depth value sequence corresponding to the second key point;
counting depth extreme values in the depth value sequence of the second key point, and determining the breathing time of the human body based on the frame time corresponding to the depth image where two adjacent depth extreme values are located;
and when the breath time is greater than the preset standard breath time, determining that the human body has the sleep state of apnea.
6. The sleep state monitoring method according to any one of claims 1 to 5, wherein the specified body part includes a mouth part, and the analyzing the depth information of the specified body part in the depth image to determine the sleep state of the human body includes:
extracting a depth value of a third key point of the mouth part in the continuous frame depth images, and determining an upper lip central point and a lower lip central point of the mouth part according to the depth value of the third key point;
analyzing the coordinates of the center point of the upper lip and the coordinates of the center point of the lower lip of the mouth part in the continuous frame depth images to determine whether the human body has a sleep state of mouth breathing;
and when the human body has a sleep state of mouth breathing, determining the frequency of mouth breathing according to the change rule of the coordinates of the central point of the upper lip and the central point of the lower lip.
7. The sleep state monitoring method according to claim 2, wherein the issuing of an alarm prompt if it is determined that the body part of the adult covers the body of the infant comprises:
and if the body part of the adult is judged to cover the body of the infant, and the covering area comprises the head of the infant, an alarm prompt is given.
8. The sleep state monitoring method of claim 2, further comprising:
judging the distance between the infant and the edge of the sleeping area according to the three-dimensional information corresponding to the pixel points in the depth image of the infant and the three-dimensional information of the sleeping area;
and if the distance between the infant and the edge of the sleeping area is judged to be smaller than the preset distance, sending an alarm prompt.
9. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the sleep state monitoring method of any one of claims 1 to 8.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the sleep state monitoring method of any one of claims 1 to 8.
CN202210737853.8A 2022-06-28 2022-06-28 Sleep state monitoring method, electronic device and storage medium Active CN114821674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210737853.8A CN114821674B (en) 2022-06-28 2022-06-28 Sleep state monitoring method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210737853.8A CN114821674B (en) 2022-06-28 2022-06-28 Sleep state monitoring method, electronic device and storage medium

Publications (2)

Publication Number | Publication Date
CN114821674A | 2022-07-29
CN114821674B | 2022-11-18

Family

ID=82522955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210737853.8A Active CN114821674B (en) 2022-06-28 2022-06-28 Sleep state monitoring method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114821674B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760861A (en) * 2016-03-29 2016-07-13 华东师范大学 Epileptic seizure monitoring method and system based on depth data
CN105869144A (en) * 2016-03-21 2016-08-17 常州大学 Depth image data-based non-contact respiration monitoring method
US20190276033A1 (en) * 2013-03-15 2019-09-12 Honda Motor Co., Ltd. System and method for responding to driver state
WO2019205015A1 (en) * 2018-04-25 2019-10-31 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for shaking action recognition based on facial feature points
CN111210913A (en) * 2020-02-17 2020-05-29 全爱科技(上海)有限公司 Intelligent sleep monitoring method based on human body key node identification
CN112806962A (en) * 2021-01-18 2021-05-18 珠海格力电器股份有限公司 Child sleep state monitoring method and device based on TOF and infrared module
CN113762085A (en) * 2021-08-11 2021-12-07 江苏省人民医院(南京医科大学第一附属医院) Artificial intelligence-based infant incubator system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190276033A1 (en) * 2013-03-15 2019-09-12 Honda Motor Co., Ltd. System and method for responding to driver state
CN105869144A (en) * 2016-03-21 2016-08-17 常州大学 Depth image data-based non-contact respiration monitoring method
CN105760861A (en) * 2016-03-29 2016-07-13 华东师范大学 Epileptic seizure monitoring method and system based on depth data
WO2019205015A1 (en) * 2018-04-25 2019-10-31 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for shaking action recognition based on facial feature points
CN111210913A (en) * 2020-02-17 2020-05-29 全爱科技(上海)有限公司 Intelligent sleep monitoring method based on human body key node identification
CN112806962A (en) * 2021-01-18 2021-05-18 珠海格力电器股份有限公司 Child sleep state monitoring method and device based on TOF and infrared module
CN113762085A (en) * 2021-08-11 2021-12-07 江苏省人民医院(南京医科大学第一附属医院) Artificial intelligence-based infant incubator system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SYED M.S. ISLAM et al.: "Deep Learning of Facial Depth Maps for Obstructive Sleep Apnea Prediction", 2018 International Conference on Machine Learning and Data Engineering (ICMLDE) *
CHEN Xinqiang et al.: "Design and Implementation of Non-contact Respiratory Rate Detection Technology", Journal of Xiamen University of Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116304596A (en) * 2023-05-26 2023-06-23 深圳市明源云科技有限公司 Indoor child safety monitoring method and device, electronic equipment and storage medium
CN118379320A (en) * 2024-06-24 2024-07-23 宁波星巡智能科技有限公司 Method, device, equipment and medium for identifying infant body from being turned over under shielding

Also Published As

Publication number Publication date
CN114821674B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
CN114821674B (en) Sleep state monitoring method, electronic device and storage medium
US10417775B2 (en) Method for implementing human skeleton tracking system based on depth data
US9489579B2 (en) Monitoring device and monitoring method
US10447972B2 (en) Infant monitoring system
US11315275B2 (en) Edge handling methods for associated depth sensing camera devices, systems, and methods
US9436277B2 (en) System and method for producing computer control signals from breath attributes
JP5273030B2 (en) Facial feature point detection device and drowsiness detection device
JP6025690B2 (en) Information processing apparatus and information processing method
CN111753747B (en) Violent motion detection method based on monocular camera and three-dimensional attitude estimation
JPWO2016151966A1 (en) Infant monitoring apparatus, infant monitoring method, and infant monitoring program
JP2017500111A (en) Sleep monitoring system and method
JP2015088096A (en) Information processor and information processing method
JP6822328B2 (en) Watching support system and its control method
JP2015088098A (en) Information processor and information processing method
CN116403241A (en) Back acupoint recognition method and device, electronic equipment and readable storage medium
Khan et al. Automatic recognition of movement patterns in the vojta-therapy using RGB-D data
JP2023549838A (en) Method and system for detecting child sitting posture based on child face recognition
CN110916614A (en) User sleep monitoring method and device and intelligent ceiling fan
CN117109567A (en) Riding gesture monitoring method and system for dynamic bicycle movement and wearable riding gesture monitoring equipment
KR102477479B1 (en) system and method for enabling content-based multi-party games by recognizing and classifying omnidirectional movements of multiple participants without errors using multiple sensors or devices
EP4128016A1 (en) Motion tracking of a toothcare appliance
KR20230152866A (en) Unmanned patrol robot and its object image analysis method
JP3440644B2 (en) Hand motion recognition device
JP2021005333A (en) Self-removal monitoring system for medical indwelling device and self-removal monitoring method for medical indwelling device
JP7275390B2 (en) Respiratory information estimation device and respiratory information estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230823

Address after: Room 799-4, 7th Floor, Building A3A4, Zhong'an Chuanggu Science and Technology Park, No. 900 Wangjiang West Road, Gaoxin District, Hefei Free Trade Experimental Zone, Anhui Province, 230031

Patentee after: Anhui Lushenshi Technology Co.,Ltd.

Address before: 230091 room 611-217, R & D center building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, high tech Zone, Hefei, Anhui Province

Patentee before: Hefei lushenshi Technology Co.,Ltd.