CN114937242A - Sleep detection early warning method and device - Google Patents

Sleep detection early warning method and device Download PDF

Info

Publication number
CN114937242A
Authority
CN
China
Prior art keywords
target
information
sleep
sleep mode
early warning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210617512.7A
Other languages
Chinese (zh)
Inventor
黄金龙
张琳
贺嘉
何美斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202210617512.7A priority Critical patent/CN114937242A/en
Publication of CN114937242A publication Critical patent/CN114937242A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
        • G06V 10/00 - Arrangements for image or video recognition or understanding
            • G06V 10/40 - Extraction of image or video features
                • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
                    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
            • G06V 10/70 - using pattern recognition or machine learning
                • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
                    • G06V 10/761 - Proximity, similarity or dissimilarity measures
                • G06V 10/764 - using classification, e.g. of video objects
                • G06V 10/766 - using regression, e.g. by projecting features on hyperplanes
                • G06V 10/82 - using neural networks
        • G06V 20/00 - Scenes; Scene-specific elements
            • G06V 20/40 - Scenes; Scene-specific elements in video content
                • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
            • G06V 20/50 - Context or environment of the image
                • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
        • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
                    • G06V 40/161 - Detection; Localisation; Normalisation
                    • G06V 40/172 - Classification, e.g. identification
        • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
            • G06V 2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computational Linguistics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application discloses a sleep detection early warning method and device. The method comprises: acquiring a target monitoring image for monitoring the sleep of a first object; detecting target area information and target key point information in the target monitoring image, wherein the target area information at least comprises face region information, limb region information and quilt region information, and the target key point information comprises human body key point information; determining a target sleep mode of the first object based on the target area information and the target key point information; and processing the target area information and the target key point information based on a target processing mode corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state. The application solves the technical problems in the related art that schemes for infant sleep detection perform poorly and have difficulty issuing effective early warnings in time.

Description

Sleep detection early warning method and device
Technical Field
The application relates to the technical field of safety monitoring, in particular to a sleep detection early warning method and device.
Background
During an infant's sleep, a guardian may not be able to monitor the infant at all times, so automatic safety monitoring is a technology that has attracted wide attention. In the related art, automatic infant safety monitoring mainly adopts the following three approaches: 1) manual video monitoring, whose drawback is that a guardian may not be able to watch the camera feed at all times and therefore cannot receive timely early warning information; 2) template matching against infant sleeping-posture samples, which requires manually simulated sample features and professional staff to build the sleeping-posture templates, entailing high technical difficulty and a large investment of manpower and financial resources; 3) audio analysis that raises an alarm when the infant cries, whose drawback is that problems cannot be found in advance, the alarm being issued only after an event has already occurred.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the application provide a sleep detection early warning method and device, so as to at least solve the technical problems in the related art that schemes for infant sleep detection perform poorly and have difficulty issuing effective early warnings in time.
According to an aspect of an embodiment of the present application, there is provided a sleep detection early warning method, including: acquiring a target monitoring image for monitoring the sleep of a first object; detecting target area information and target key point information in a target monitoring image, wherein the target area information at least comprises: face region information, limb region information and quilt region information, wherein the target key point information comprises: human body key point information; determining a target sleep mode of the first object based on the target area information and the target key point information; and processing the target area information and the target key point information based on a target processing mode corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state.
Optionally, the acquiring of the target monitoring image includes: acquiring a target monitoring video for monitoring the sleep of the first object; and deleting redundant frame images and blurred frame images from all frame images of the target monitoring video to obtain the target monitoring image.
Optionally, the face region information in the target monitoring image is detected based on a face detection algorithm in deep learning, where the face region information at least comprises: the number of faces, the face types and the face region positions; the limb region information and the quilt region information in the target monitoring image are detected based on a general object detection algorithm in deep learning, where the limb region information at least comprises the limb region position, and the quilt region information at least comprises the quilt coverage area position; and the human body key point information in the target monitoring image is detected based on a key point detection algorithm in deep learning, where the human body key point information at least comprises the number of human body key points and the positions of the human body key points.
Optionally, when the number of faces does not exceed a first preset threshold or faces of the second object are not included in the face types, determining that the target sleep mode is the first sleep mode; and when the number of the faces exceeds a first preset threshold and the face type comprises the face of the second object, determining that the target sleep mode is a second sleep mode.
Optionally, the first sleep mode comprises: a first sub-sleep mode and a second sub-sleep mode, the second sleep mode including: a third sub-sleep mode and a fourth sub-sleep mode, determining a target sleep mode of the first subject based on the target area information and the target key point information, including: in the first sleep mode, if the number of the human body key points does not exceed a second preset threshold, determining that the target sleep mode is a first sub-sleep mode, and if the number of the human body key points exceeds the second preset threshold, determining that the target sleep mode is a second sub-sleep mode; and under the second sleep mode, if the number of the human body key points does not exceed a third preset threshold value, determining that the target sleep mode is a third sub-sleep mode, and if the number of the human body key points exceeds the third preset threshold value, determining that the target sleep mode is a fourth sub-sleep mode.
Optionally, when the target sleep mode is the first sub-sleep mode, processing the target area information and the target key point information based on the target processing manner corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state, includes: determining a first distance between the centre position of the face region of the first object and the centre position of the quilt region, and the overlap ratio between the limb region position of the first object and the quilt coverage area position; when the first distance is smaller than a fourth preset threshold, determining that the first object is in a first sleep state and generating first early warning information, wherein the first sleep state indicates that the quilt covers the first object too high, and the first early warning information is used for prompting that the quilt needs to be pulled down; and when the overlap ratio is smaller than a fifth preset threshold, determining that the first object is in a second sleep state and generating second early warning information, wherein the second sleep state indicates that the quilt covers the first object too low, and the second early warning information is used for prompting that the quilt needs to be pulled up.
Optionally, when the target sleep mode is the second sub-sleep mode, processing the target area information and the target key point information based on the target processing manner corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state, includes: determining a first number of head key points and a second number of limb key points among the human body key points of the first object, determining a first angle between a first key point vector and a second key point vector, and determining a second distance between paired key points, wherein the first key point vector is the vector from the nose key point to the neck key point, the second key point vector is the normal vector of the vector connecting the two shoulder key points, and the paired key points comprise at least one of the following: the two shoulder key points, the two waist-side key points, the two arm key points and the two knee key points; when the first number exceeds a sixth preset threshold and the second number exceeds a seventh preset threshold, determining that the first object is in a third sleep state, wherein the third sleep state indicates that the first object is supine; when the first number is smaller than an eighth preset threshold and the second number exceeds the seventh preset threshold, determining that the first object is in a fourth sleep state and generating third early warning information, wherein the fourth sleep state indicates that the first object is prone, and the third early warning information is used for prompting that the sleeping posture of the first object needs to be adjusted; and when the first angle is smaller than a ninth preset threshold and the second distance is smaller than a tenth preset threshold, determining that the first object is in a fifth sleep state and generating third early warning information, wherein the fifth sleep state indicates that the first object lies on the side.
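The posture rules above (counts of head and limb key points, the angle between the nose-to-neck vector and the normal of the shoulder line, the distance between paired key points) can be sketched as follows. This is only an illustration: the thresholds and the convention that an undetected key point is reported as None are assumptions, not values given in the patent.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def classify_posture(pts, head_thresh=3, limb_thresh=4,
                     angle_thresh=30.0, pair_thresh=20.0):
    """pts: key point name -> (x, y), or absent/None when undetected."""
    head_names = ("nose", "left eye", "right eye", "left ear", "right ear")
    limb_names = ("left shoulder", "right shoulder", "left elbow", "right elbow",
                  "left wrist", "right wrist", "left knee", "right knee")
    head = sum(pts.get(n) is not None for n in head_names)
    limbs = sum(pts.get(n) is not None for n in limb_names)
    nose, neck = pts.get("nose"), pts.get("neck")
    ls, rs = pts.get("left shoulder"), pts.get("right shoulder")
    if None not in (nose, neck, ls, rs):
        nose_neck = (neck[0] - nose[0], neck[1] - nose[1])
        shoulder = (rs[0] - ls[0], rs[1] - ls[1])
        normal = (-shoulder[1], shoulder[0])  # normal of the shoulder line
        # Side lying: nose-neck vector aligned with the shoulder normal,
        # and the paired shoulder key points close together in the image.
        if (angle_between(nose_neck, normal) < angle_thresh
                and math.dist(ls, rs) < pair_thresh):
            return "side"    # fifth sleep state
    if head >= head_thresh and limbs >= limb_thresh:
        return "supine"      # third sleep state
    if head < head_thresh and limbs >= limb_thresh:
        return "prone"       # fourth sleep state: warn to adjust posture
    return "unknown"
```

A prone infant shows many limb key points but few head key points, which is why the head count alone separates supine from prone here.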
Optionally, when the target sleep mode is the third sub-sleep mode, processing the target area information and the target key point information based on the target processing manner corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state, includes: determining a third distance between the centre position of the limb region of the second object and the centre position of the limb region of the first object; and when the third distance is smaller than an eleventh preset threshold, determining that the first object is in a sixth sleep state and generating fourth early warning information, wherein the sixth sleep state indicates that the second object is crowding the first object, and the fourth early warning information is used for prompting that the second object needs to move away from the first object.
Optionally, when the target sleep mode is the fourth sub-sleep mode, processing the target area information and the target key point information based on the target processing manner corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state, includes: determining whether an overlapping area exists between the human body key point positions of the second object and the limb region position of the first object; and when an overlapping area exists, determining that the first object is in the sixth sleep state and generating fourth early warning information, wherein the sixth sleep state indicates that the second object is crowding the first object, and the fourth early warning information is used for prompting that the second object needs to move away from the first object.
Optionally, whether an occlusion region exists at the face region position of the first object is detected based on a classification detection algorithm in deep learning; and when an occlusion region exists, fifth early warning information is generated, which is used for prompting that the occlusion of the first object's face needs to be checked and handled.
According to another aspect of the embodiments of the present application, there is also provided a sleep detection early warning apparatus, including: the acquisition module is used for acquiring a target monitoring image for monitoring the sleep of the first object; the detection module is used for detecting target area information and target key point information in the target monitoring image, wherein the target area information at least comprises: face region information, limb region information and quilt region information, wherein the target key point information comprises: human body key point information; a determination module for determining a target sleep mode of the first object based on the target area information and the target key point information; and the processing module is used for processing the target area information and the target key point information based on a target processing mode corresponding to the target sleep mode to obtain the sleep state of the first object and generating early warning information corresponding to the sleep state.
According to another aspect of the embodiments of the present application, a non-volatile storage medium is further provided, where the non-volatile storage medium includes a stored program, and when the program runs, a device in which the non-volatile storage medium is located is controlled to execute the sleep detection early warning method.
According to another aspect of the embodiments of the present application, there is also provided a processor, configured to execute a program, where the program executes the sleep detection early warning method.
In the embodiments of the application, after the target monitoring image for monitoring the sleep of the first object is obtained, the target area information and the target key point information in the target monitoring image are detected by deep-learning target detection and human body key point detection, and the target sleep mode of the first object is determined based on that information, achieving the purpose of intelligently identifying infant sleep safety. The target area information and the target key point information are then processed in the processing mode corresponding to the target sleep mode, and corresponding early warning information is finally generated, achieving the technical effect of fully automatic and timely early warning of infant sleep, and thereby solving the technical problems in the related art that schemes for infant sleep detection perform poorly and have difficulty issuing effective early warnings in time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a sleep detection early warning method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a process of preprocessing a target surveillance video according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating a process of detecting a target monitoring image according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating a process for determining a sleep mode of a first object according to an embodiment of the present application;
fig. 5 is a schematic flowchart illustrating a process of generating corresponding warning information according to a first object sleep mode according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a sleep detection early warning apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present application, a sleep detection early warning method is provided. It should be noted that the steps shown in the flowchart of the figures may be executed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here.
Fig. 1 is a schematic flowchart of an optional sleep detection early warning method according to an embodiment of the present application, and as shown in fig. 1, the method at least includes steps S102-S108, where:
step S102, a target monitoring image for monitoring the sleep of the first object is obtained.
Generally, a target monitoring video for monitoring the sleep of the first object may be captured by a data acquisition device such as a network camera, and the obtained target monitoring video may be preprocessed, for example by deleting redundant frame images and blurred frame images from all frame images of the target monitoring video, to obtain the target monitoring image. The first object may be an infant or a patient lacking the ability to care for themselves; the following description takes an infant as an example.
Fig. 2 illustrates the preprocessing operation performed on the target surveillance video. Specifically, redundant frames are removed by random sampling at video-frame intervals, and then whether the current video frame is blurred is detected based on a Fourier transform (FFT) method: if the result is a blurred video frame, the frame is discarded; if the result is a clear video frame, the frame is passed on to the deep-learning stage. The purpose of the preprocessing is to extract key frame images from all frames of the target surveillance video so as to obtain higher-quality video frame data.
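The preprocessing step just described can be sketched as follows. This is a minimal illustration assuming grayscale frames as NumPy arrays; the blur threshold, the low-frequency window size and the sampling interval are illustrative values, not values specified in the patent.

```python
import numpy as np

def is_blurred(gray_frame: np.ndarray, size: int = 8, thresh: float = 10.0) -> bool:
    """Estimate blur by measuring high-frequency energy in the FFT spectrum.

    A blurred frame has little high-frequency content, so the mean log
    magnitude after zeroing the low-frequency centre falls below `thresh`.
    """
    f = np.fft.fftshift(np.fft.fft2(gray_frame))
    h, w = gray_frame.shape
    cy, cx = h // 2, w // 2
    f[cy - size:cy + size, cx - size:cx + size] = 0  # suppress low frequencies
    magnitude = 20 * np.log(np.abs(f) + 1e-8)
    return float(np.mean(magnitude)) < thresh

def sample_frames(frames: list, interval: int, rng=np.random) -> list:
    """Remove redundant frames by keeping one random frame per interval."""
    return [frames[i + rng.randint(0, min(interval, len(frames) - i))]
            for i in range(0, len(frames), interval)]
```

Frames that survive both the random interval sampling and the blur check form the target monitoring images handed to the detection stage.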
Step S104, detecting target area information and target key point information in the target monitoring image, wherein the target area information at least comprises: face region information, limb region information and quilt region information, and the target key point information comprises: human body key point information.
Specifically, target region detection and target key point detection are performed on the obtained target surveillance video through deep-learning target detection and human body key point detection. For example, Fig. 3 shows a method of detecting all the human body key point information and quilt region information in the target monitoring image by using a face detection algorithm, a real-time object detection algorithm, a general object detection algorithm, a classification detection algorithm and a deep-learning key point detection algorithm, which can be realised through the following specific process:
the method for detecting the face region information in the target monitoring image based on the face detection algorithm in the deep learning includes specifically acquiring the number, the type and the face region position of the faces appearing in the target monitoring video within a certain period of time, such as: the Face position of the infant can be expressed as FU (Face) X ,Face y ,Face w ,Face_h]。
The limb region information and the quilt region information in the target monitoring image are detected based on a general object detection algorithm in deep learning, wherein the limb region information at least comprises the limb region position, and the quilt region information at least comprises the quilt coverage area position. For example, the overall limb region position of the infant can be expressed as LU[limb_x, limb_y, limb_w, limb_h], the limb region position of the sleeping companion can be expressed as PU[Plimb_x, Plimb_y, Plimb_w, Plimb_h], and the position of the area where the quilt is located can be expressed as BU[b_x, b_y, b_w, b_h], where the position labels [x, y, w, h] respectively denote the coordinates of the centre of the target detection box and the width and height of the detection box.
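The patent does not spell out how the overlap between the limb region and the quilt coverage area is computed; one plausible sketch, using the centre-based [x, y, w, h] box labels above, is:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Detection box in the [x, y, w, h] convention used in the description:
    (x, y) is the box centre, (w, h) its width and height."""
    x: float
    y: float
    w: float
    h: float

    def corners(self):
        """Return (x1, y1, x2, y2) corner coordinates."""
        return (self.x - self.w / 2, self.y - self.h / 2,
                self.x + self.w / 2, self.y + self.h / 2)

def overlap_ratio(a: Box, b: Box) -> float:
    """Fraction of box `a` covered by box `b`, e.g. how much of the
    infant's limb region LU is covered by the quilt region BU."""
    ax1, ay1, ax2, ay2 = a.corners()
    bx1, by1, bx2, by2 = b.corners()
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    return (iw * ih) / (a.w * a.h)
```

A low `overlap_ratio(LU, BU)` would correspond to the "quilt too low / quilt kicked off" condition described later.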
Whether an occlusion region exists at the face region position of the first object is detected based on a classification detection algorithm in deep learning; when an occlusion region exists, fifth early warning information is generated, which is used for prompting that the occlusion of the first object's face needs to be checked and handled. That is, whether the infant's face is occluded is identified through a neural network classification model.
The human body key point information in the target monitoring image is detected based on a key point detection algorithm in deep learning, wherein the human body key point information at least comprises the number of human body key points and the positions of the human body key points. For example, the key point detection yields the coordinates of each limb key point of the infant, with the limb key points indexed as {"nose": 0, "neck": 1, "right shoulder": 2, "right elbow": 3, "right wrist": 4, "left shoulder": 5, "left elbow": 6, "left wrist": 7, "right waist": 8, "right knee": 9, "right ankle": 10, "left waist": 11, "left knee": 12, "left ankle": 13, "right eye": 14, "left eye": 15, "right ear": 16, "left ear": 17, "background": 18}, where the number denotes the retrieval sequence number of the key point position.
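The index map above (an OpenPose-style 18-point layout plus a background index) and the key point counting used by the later mode decisions can be written directly as:

```python
# Key point index map exactly as listed in the description.
KEYPOINTS = {
    "nose": 0, "neck": 1, "right shoulder": 2, "right elbow": 3, "right wrist": 4,
    "left shoulder": 5, "left elbow": 6, "left wrist": 7, "right waist": 8,
    "right knee": 9, "right ankle": 10, "left waist": 11, "left knee": 12,
    "left ankle": 13, "right eye": 14, "left eye": 15, "right ear": 16,
    "left ear": 17, "background": 18,
}

def count_detected(points: dict) -> int:
    """Count the key points actually detected for one person; the convention
    that an undetected point is reported as None is an assumption here."""
    return sum(1 for name, p in points.items()
               if name != "background" and p is not None)
```

The count returned here is what is later compared against the "8 key points" thresholds when deciding between the covered and uncovered sub-modes.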
Step S106, determining a target sleep mode of the first object based on the target area information and the target key point information.
Generally, when an infant sleeps, the infant may sleep alone or be accompanied by an adult such as a parent, so the target sleep mode may be determined as follows: when the number of faces does not exceed the first preset threshold or the face types do not include the face of the second object, the target sleep mode is determined to be the first sleep mode, namely the infant sleeping-alone mode; and when the number of faces exceeds the first preset threshold and the face types include the face of the second object, the target sleep mode is determined to be the second sleep mode, namely the adult-accompanied mode.
Here, the second object may be an adult guardian such as the father, mother, grandfather or grandmother.
Further, the first sleep mode may be divided into a first sub-sleep mode and a second sub-sleep mode. In the first sleep mode, if the number of human body key points does not exceed the second preset threshold, the target sleep mode is determined to be the first sub-sleep mode, namely the infant sleeping-alone, covered-by-quilt sub-mode; if the number of human body key points exceeds the second preset threshold, the target sleep mode is determined to be the second sub-sleep mode, namely the infant sleeping-alone, uncovered sub-mode.
Likewise, the second sleep mode may be divided into a third sub-sleep mode and a fourth sub-sleep mode. In the second sleep mode, if the number of human body key points does not exceed the third preset threshold, the target sleep mode is determined to be the third sub-sleep mode, namely the adult-accompanied, covered-by-quilt sub-mode; if the number of human body key points exceeds the third preset threshold, the target sleep mode is determined to be the fourth sub-sleep mode, namely the adult-accompanied, uncovered sub-mode.
For example, Fig. 4 shows how the infant sleep mode in the target surveillance video is determined by analysing the target area information and the target key point information. During infant sleep detection, the current number and types of faces are obtained by detecting the target surveillance video; when the number of faces is less than 2 (the first preset threshold), no adult face is included, and the key point detection indicates a single person, the mode is determined to be the A mode (infant sleeping-alone mode). With the A mode active, the human limb key points are counted: when the number of human body key points is greater than 8 (the second preset threshold), the mode is determined to be the A-NCQ mode (infant sleeping-alone, uncovered sub-mode); when the number of human body key points is less than or equal to 8 (the second preset threshold), the mode is determined to be the A-CQ mode (infant sleeping-alone, covered-by-quilt sub-mode).
When the face number is greater than or equal to 2 (the first preset threshold), an adult face is included, and key points of multiple persons are detected, the mode is determined as the P mode (adult sleep-accompanying mode). In the activated P mode state, the human limb key points are counted: when the number of human body key points is greater than 8 (the third preset threshold), the mode is determined as the P-NCQ mode (adult sleep-accompanying uncovered sub-mode); when the number of human body key points is less than or equal to 8 (the third preset threshold), the mode is determined as the P-CQ mode (adult sleep-accompanying quilt-covered sub-mode).
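As an illustrative sketch only (the function name, argument shapes and default thresholds are assumptions based on the exemplary values of 2 faces and 8 key points given above, not the patented implementation), the A/P and CQ/NCQ mode decision can be expressed as:

```python
def determine_sleep_mode(num_faces, has_adult_face, num_body_keypoints,
                         face_threshold=2, keypoint_threshold=8):
    """Classify the target sleep mode from the detection results.

    has_adult_face abstracts the face-type check; the default thresholds
    mirror the exemplary first/second/third preset thresholds in the text.
    """
    # A mode: infant sleeping alone; P mode: adult accompanying the infant.
    if num_faces >= face_threshold and has_adult_face:
        base = "P"
    else:
        base = "A"
    # Many visible limb key points suggest the body is not covered by a quilt.
    sub = "NCQ" if num_body_keypoints > keypoint_threshold else "CQ"
    return f"{base}-{sub}"
```

For instance, one detected face with nine visible body key points would yield the A-NCQ (infant single-sleep uncovered) sub-mode.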
And S108, processing the target area information and the target key point information based on a target processing mode corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state.
Fig. 5 is a complete flowchart of processing the target area information and the target key point information based on the target processing manner corresponding to the target sleep mode to obtain the sleep state of the first object and generating early warning information corresponding to the sleep state. Specifically, the processing and the generation of corresponding early warning information for the different target sleep modes may be implemented as follows:
in the infant sleep detection process, when the target sleep mode is the A-CQ mode, the distance between the center position of the infant's face and the center position of the quilt, and the contact ratio between the quilt coverage area position and the infant limb area position, can be determined, so as to judge whether the infant's quilt cover is too low or too high and whether the infant has kicked off the quilt, and to generate corresponding early warning information.
Specifically, when the target sleep mode is the first sub-sleep mode, a first distance between the center position of the face area of the first object and the center position of the quilt coverage area, and the contact ratio between the limb area position of the first object and the quilt coverage area position, are determined. When the first distance is smaller than a fourth preset threshold, the first object is determined to be in a first sleep state and first early warning information is generated, where the first sleep state indicates that the quilt cover of the first object is too high and the first early warning information prompts that the quilt needs to be pulled down. When the contact ratio is smaller than a fifth preset threshold, the first object is determined to be in a second sleep state and second early warning information is generated, where the second sleep state indicates that the quilt cover of the first object is too low and the second early warning information prompts that the quilt needs to be pulled up.
The specific analysis method is as follows:
First, determine the infant's face center position coordinates F = (Face_x, Face_y) and the center coordinates B = (b_x, b_y) of the area where the quilt is located.
Second, calculate the Euclidean distance (the first distance) between point F and point B:
Dist_1 = sqrt((Face_x - b_x)^2 + (Face_y - b_y)^2)
Third, if Face_y ≤ b_y (the fourth preset threshold), take the early warning interval (α_1, α_2) as the critical-point threshold. When Dist_1 ∈ (α_1, α_2) and the contact ratio between the quilt coverage area position and the infant limb area position is high, the infant's quilt cover is too high; first early warning information is then generated, prompting that the quilt needs to be pulled down.
Fourth, obtain the infant limb area position LU = [limb_x, limb_y, limb_w, limb_h] from the area detection module and the quilt area position BU = [b_x, b_y, b_w, b_h]. Divide the intersection of the two areas by the limb area LU to obtain the limb contact ratio ω:
ω = area(LU ∩ BU) / area(LU)
When ω ≤ 0.3 (the fifth preset threshold), it can be determined that the infant's quilt cover is too low; second early warning information is then generated, prompting that the quilt needs to be pulled up.
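The A-CQ quilt-coverage analysis can be sketched as follows. This is a hedged illustration, not the patented code: image coordinates are assumed to grow downward, boxes are assumed to be centre-based (x, y, w, h) tuples, and the default warning interval (standing in for (α_1, α_2)) and the 0.3 overlap threshold are placeholder values.

```python
import math

def quilt_coverage_check(face_center, quilt_box, limb_box,
                         dist_interval=(0.0, 50.0), overlap_threshold=0.3):
    """Return 'quilt_too_high', 'quilt_too_low', or 'ok' for the A-CQ mode."""
    fx, fy = face_center
    bx, by, bw, bh = quilt_box
    lx, ly, lw, lh = limb_box

    # First distance: Euclidean distance between face centre F and quilt centre B.
    dist1 = math.hypot(fx - bx, fy - by)

    # Quilt too high: face centre at or above the quilt centre (image y grows
    # downward) and the face-quilt distance falls inside the warning interval.
    lo, hi = dist_interval
    if fy <= by and lo < dist1 < hi:
        return "quilt_too_high"   # first early warning: pull the quilt down

    # Limb contact ratio omega: intersection of limb box LU and quilt box BU,
    # divided by the limb area.
    ix = max(0.0, min(lx + lw / 2, bx + bw / 2) - max(lx - lw / 2, bx - bw / 2))
    iy = max(0.0, min(ly + lh / 2, by + bh / 2) - max(ly - lh / 2, by - bh / 2))
    omega = (ix * iy) / (lw * lh)
    if omega <= overlap_threshold:
        return "quilt_too_low"    # second early warning: pull the quilt up
    return "ok"
```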
In the infant sleep detection process, when the target sleep mode is the A-NCQ mode, the sleep posture condition of the first object can be analyzed according to the key point information.
Specifically, when the target sleep mode is a second sub-sleep mode, determining a first number of head key points and a second number of limb key points in human key points of a first object, determining a first angle between a first key point vector and a second key point vector, and determining a second distance between two part key points, wherein the first key point vector is a vector from a nose key point to a neck key point, and the second key point vector is a normal vector of a vector of double shoulder key points;
when the first number exceeds a sixth preset threshold and the second number exceeds a seventh preset threshold, determining that the first object is in a third sleep state, wherein the third sleep state represents that the first object is supine; when the first number is smaller than an eighth preset threshold and the second number exceeds a seventh preset threshold, determining that the first object is in a fourth sleep state, and generating third early warning information, wherein the fourth sleep state represents that the first object is prone, and the third early warning information is used for prompting that the sleeping posture of the first object needs to be adjusted; and when the first angle is smaller than a ninth preset threshold and the second distance is smaller than a tenth preset threshold, determining that the first object is in a fifth sleep state, and generating third early warning information, wherein the fifth sleep state represents that the first object lies on the side.
The specific analysis process is as follows:
Determining the sleeping posture of the infant according to this information includes the following. When the number of head key points (the first number) among the infant's body key points is greater than or equal to a preset threshold (the sixth preset threshold) and the number of limb key points (the second number) is greater than or equal to 10 (the seventh preset threshold), the sleeping posture of the infant is determined to be supine (the third sleep state). When the number of head key points (the first number) is less than or equal to 2 (the eighth preset threshold) and the number of limb key points (the second number) is greater than or equal to 10 (the seventh preset threshold), the sleeping posture is determined to be prone (the fourth sleep state). The angle (the first angle) between the vector from the infant's nose key point to the neck key point (the first key point vector) and the horizontal vector of the limb detection midline is calculated, where the horizontal vector of the limb detection midline is defined as the normal vector of the two-shoulder key point vector (the second key point vector). When this angle is smaller than a set threshold (the ninth preset threshold) and the relative Euclidean distance (the second distance) between key points of paired parts (shoulders, waist, arms, knees) is smaller than a preset threshold (the tenth preset threshold), the sleeping posture of the infant is determined to be side-lying (the fifth sleep state).
When the infant is in the prone or side-lying posture, third early warning information is generated, prompting that the infant's sleeping posture needs to be adjusted.
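A sketch of the A-NCQ posture analysis, under stated assumptions: key points are (x, y) tuples, `pair_distance` is the precomputed relative Euclidean distance of one paired-part key-point pair (shoulders, waist, arms or knees), and the numeric defaults merely stand in for the sixth through tenth preset thresholds; the function and parameter names are illustrative.

```python
import math

def classify_sleep_posture(num_head_kps, num_limb_kps, nose, neck,
                           left_shoulder, right_shoulder, pair_distance,
                           head_min=3, limb_min=10, head_max=2,
                           angle_threshold=30.0, pair_threshold=20.0):
    """Return 'supine', 'prone', 'side', or 'unknown' for the infant."""
    # Supine: the face is visible, so many head key points are detected.
    if num_head_kps >= head_min and num_limb_kps >= limb_min:
        return "supine"
    # Prone: the face is hidden, so few head key points are detected.
    if num_head_kps <= head_max and num_limb_kps >= limb_min:
        return "prone"
    # Side-lying: the nose->neck vector is nearly parallel to the normal of
    # the two-shoulder vector, and the paired key points almost coincide.
    sx = right_shoulder[0] - left_shoulder[0]
    sy = right_shoulder[1] - left_shoulder[1]
    normal = (-sy, sx)                      # normal of the two-shoulder vector
    v = (neck[0] - nose[0], neck[1] - nose[1])
    dot = v[0] * normal[0] + v[1] * normal[1]
    norms = math.hypot(*v) * math.hypot(*normal)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
    if angle < angle_threshold and pair_distance < pair_threshold:
        return "side"
    return "unknown"
```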
In the infant sleep detection process, when the target sleep mode is the P-CQ mode, the distance between the center position of the infant detection frame and the center position of the sleeping partner detection frame can be judged; if the distance is smaller than a set threshold, a squeezing early warning is generated.
Specifically, when the target sleep mode is the third sub-sleep mode, determining a third distance between the central position of the limb area of the second subject and the central position of the limb area of the first subject; and when the third distance is smaller than an eleventh preset threshold, determining that the first object is in a sixth sleep state, and generating fourth early warning information, wherein the sixth sleep state represents that the second object and the first object are squeezed, and the fourth early warning information is used for prompting that the second object needs to be far away from the first object.
The specific analysis process is as follows:
First, obtain the sleeping partner's position coordinates P = (Plimb_x, Plimb_y) based on a real-time target detection algorithm, and the center coordinates C = (limb_x, limb_y) of the area where the infant is located.
Second, calculate the Euclidean distance (the third distance) between the sleeping partner's position coordinate point P and the center coordinate point C of the area where the infant is located:
Dist_2 = sqrt((Plimb_x - limb_x)^2 + (Plimb_y - limb_y)^2)
Third, when Dist_2 is greater than the set threshold, the sleeping partner and the infant are at a safe distance; when Dist_2 is less than or equal to the set threshold (the eleventh preset threshold), squeezing exists between the sleeping partner and the infant (the sixth sleep state), and fourth early warning information is generated, prompting the sleeping partner to move away from the infant.
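The P-CQ squeeze check reduces to one distance comparison; the sketch below assumes (x, y) coordinate tuples and uses a placeholder value for the eleventh preset threshold.

```python
import math

def squeeze_check(partner_pos, infant_center, safe_distance=60.0):
    """P-CQ mode: True when the sleeping partner's position P lies within
    safe_distance of the centre C of the infant's area (Dist_2 <= threshold),
    which triggers the fourth early warning."""
    dist2 = math.hypot(partner_pos[0] - infant_center[0],
                       partner_pos[1] - infant_center[1])
    return dist2 <= safe_distance
```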
In the infant sleep detection process, when the target sleep mode is the P-NCQ mode, whether the key point positions of the sleeping partner fall within the infant detection frame can be detected.
Specifically, when the target sleep mode is the fourth sub-sleep mode, determining whether a superposition area exists between the position of the human body key point of the second object and the position of the limb area of the first object; and when the overlapping area exists, determining that the first object is in a sixth sleep state, and generating fourth early warning information, wherein the sixth sleep state represents that the second object and the first object are squeezed, and the fourth early warning information is used for prompting that the second object needs to be away from the first object.
When it is judged that the positions of the sleeping partner's human body key points intersect the skeleton lines connecting the infant's limb key points, strong squeezing exists between the infant and the sleeping partner; fourth early warning information is then generated, prompting the sleeping partner to move away from the infant.
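For the P-NCQ mode, the simpler check described above (does any partner key point fall inside the infant detection frame?) can be sketched as below; the frame is assumed to be a top-left-anchored (x, y, w, h) box, and the full skeleton-line intersection test would refine this.

```python
def partner_keypoint_overlap(partner_keypoints, infant_box):
    """Return True when any sleeping-partner body key point (x, y) falls
    inside the infant detection box, indicating possible squeezing."""
    x, y, w, h = infant_box
    return any(x <= px <= x + w and y <= py <= y + h
               for px, py in partner_keypoints)
```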
It should be noted that each preset threshold in the embodiments of the present application may be adjusted according to the actual scene; the values above are merely exemplary and do not constitute specific limitations.
Specifically, the above early warning information can be prompted by outputting an early warning signal through an early warning device, for example by a buzzer, a short message, electronic bracelet vibration, a mobile phone ring, or other means.
According to the embodiments of the present application, the infant monitoring video acquired by the network camera is analyzed through neural network algorithms: the position of the infant's face is detected in the monitoring video stream, along with whether the face is shielded by a foreign object, the body's quilt-covering condition, and the specific position of the quilt; whether the infant's sleeping posture is correct is judged by analyzing the infant limb key point information; and whether the infant is being squeezed is determined by analyzing the sleeping partner's limb key point information and its coincidence rate with the infant's key point information. This overcomes the failure of the prior art to detect squeezing during infant sleep, adding an early warning for identifying that the infant is being squeezed while asleep.
Example 2
According to the embodiment of the present application, there is also provided a sleep detection and early warning apparatus for implementing the sleep detection and early warning method, as shown in fig. 6, the apparatus at least includes an obtaining module 61, a detecting module 62, a determining module 63, and a processing module 64, where:
the acquiring module 61 is configured to acquire a target monitoring image for monitoring sleep of the first object.
Optionally, the obtaining module may obtain a target monitoring video in which the first object sleeps, and delete redundant frame images and blurred frame images in all frame images of the target monitoring video to obtain a target monitoring image.
A detection module 62, configured to detect target area information and target key point information in a target monitoring image, where the target area information at least includes: face region information, limb region information and quilt region information, wherein the target key point information comprises: human body key point information.
Optionally, the detection module may detect the face region information in the target monitoring image based on a face detection algorithm in deep learning, where the face region information at least includes: the number of faces, the face types and the face region positions; detect the limb area information and the quilt area information in the target monitoring image based on a general target detection algorithm in deep learning, where the limb area information at least includes the limb area position and the quilt area information at least includes the quilt coverage area position; and detect the human body key point information in the target monitoring image based on a key point detection algorithm in deep learning, where the human body key point information at least includes: the number of human body key points and the positions of the human body key points.
A determining module 63, configured to determine a target sleep mode of the first subject based on the target area information and the target key point information.
Optionally, the determining module may determine the target sleep mode as the first sleep mode when the number of faces does not exceed a first preset threshold or faces of the second object are not included in the face type; and when the number of the faces exceeds a first preset threshold and the face type comprises the face of the second object, determining that the target sleep mode is the second sleep mode. Wherein the first sleep mode includes: a first sub-sleep mode and a second sub-sleep mode, the second sleep mode including: a third sub-sleep mode and a fourth sub-sleep mode.
Specifically, in the first sleep mode, if the number of the human body key points does not exceed a second preset threshold, the target sleep mode is determined to be a first sub-sleep mode, and if the number of the human body key points exceeds the second preset threshold, the target sleep mode is determined to be a second sub-sleep mode; and under the second sleep mode, if the number of the human body key points does not exceed a third preset threshold value, determining that the target sleep mode is a third sub-sleep mode, and if the number of the human body key points exceeds the third preset threshold value, determining that the target sleep mode is a fourth sub-sleep mode.
When the target sleep mode is the first sub-sleep mode, a first distance between the center position of the face area of the first object and the center position of the quilt coverage area, and the contact ratio between the limb area position of the first object and the quilt coverage area position, are determined; when the first distance is smaller than a fourth preset threshold, the first object is determined to be in a first sleep state and first early warning information is generated, where the first sleep state indicates that the quilt cover of the first object is too high and the first early warning information prompts that the quilt needs to be pulled down; and when the contact ratio is smaller than a fifth preset threshold, the first object is determined to be in a second sleep state and second early warning information is generated, where the second sleep state indicates that the quilt cover of the first object is too low and the second early warning information prompts that the quilt needs to be pulled up.
When the target sleep mode is a second sub-sleep mode, determining a first number of head key points and a second number of limb key points in human key points of a first object, determining a first angle between a first key point vector and a second key point vector, and determining a second distance between two part key points, wherein the first key point vector is a vector from a nose key point to a neck key point, the second key point vector is a normal vector of a vector of two shoulder key points, and the two part key points at least comprise one of the following components: key points of shoulders, key points of two sides of waist, key points of two arms and key points of two knees; when the first number exceeds a sixth preset threshold and the second number exceeds a seventh preset threshold, determining that the first object is in a third sleep state, wherein the third sleep state represents that the first object is supine; when the first number is smaller than an eighth preset threshold and the second number exceeds a seventh preset threshold, determining that the first object is in a fourth sleep state, and generating third early warning information, wherein the fourth sleep state represents that the first object is prone, and the third early warning information is used for prompting that the sleeping posture of the first object needs to be adjusted; and when the first angle is smaller than a ninth preset threshold and the second distance is smaller than a tenth preset threshold, determining that the first object is in a fifth sleep state, and generating third early warning information, wherein the fifth sleep state represents that the first object lies on the side.
When the target sleep mode is a third sub-sleep mode, determining a third distance between the central position of the limb area of the second object and the central position of the limb area of the first object; and when the third distance is smaller than an eleventh preset threshold, determining that the first object is in a sixth sleep state, and generating fourth early warning information, wherein the sixth sleep state represents that the second object and the first object are squeezed, and the fourth early warning information is used for prompting that the second object needs to be far away from the first object.
When the target sleep mode is the fourth sub-sleep mode, determining whether a superposition area exists between the position of the human key point of the second object and the position of the limb area of the first object; and when the overlapping area exists, determining that the first object is in a sixth sleep state, and generating fourth early warning information, wherein the sixth sleep state represents that the second object and the first object are squeezed, and the fourth early warning information is used for prompting that the second object needs to be away from the first object.
Optionally, whether an occlusion region exists at the face region position of the first object may also be detected based on a classification detection algorithm in deep learning; and when an occlusion area exists, generating fifth early warning information, wherein the fifth early warning information is used for prompting that the occlusion of the first object face part needs to be checked.
And the processing module 64 is configured to process the target area information and the target key point information based on a target processing manner corresponding to the target sleep mode to obtain a sleep state of the first object, and generate early warning information corresponding to the sleep state.
It should be noted that, in the embodiment of the present application, each module in the sleep detection and early warning apparatus corresponds to each implementation step of the sleep detection and early warning method in embodiment 1 one to one, and since the detailed description is already performed in embodiment 1, details that are not partially embodied in this embodiment may refer to embodiment 1, and are not described herein again.
Example 3
According to the embodiment of the application, a nonvolatile storage medium is further provided, and the nonvolatile storage medium includes a stored program, wherein when the program runs, a device where the nonvolatile storage medium is located is controlled to execute the sleep detection early warning method in embodiment 1.
According to an embodiment of the present application, a processor is further provided, where the processor is configured to execute a program, where the program executes the sleep detection early warning method in embodiment 1.
Optionally, the following steps are implemented when the program is running:
acquiring a target monitoring image for monitoring the sleep of a first object; detecting target area information and target key point information in a target monitoring image, wherein the target area information at least comprises: face region information, limb region information and quilt region information, wherein the target key point information comprises: human body key point information; determining a target sleep mode of the first object based on the target area information and the target key point information; and processing the target area information and the target key point information based on a target processing mode corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state.
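The four steps the program performs can be tied together in one sketch; the detector callables and their return shapes are assumptions made for illustration, not a real API.

```python
def sleep_detection_pipeline(frame, detect_faces, detect_regions,
                             detect_keypoints, face_threshold=2,
                             kp_threshold=8):
    """Run the claimed steps on one frame: detect faces, regions and key
    points, then determine the target sleep mode for later mode-specific
    processing."""
    num_faces, has_adult = detect_faces(frame)     # face number and type
    regions = detect_regions(frame)                # limb / quilt areas
    keypoints = detect_keypoints(frame)            # human body key points

    base = "P" if (num_faces >= face_threshold and has_adult) else "A"
    sub = "NCQ" if len(keypoints) > kp_threshold else "CQ"
    mode = f"{base}-{sub}"
    # A full implementation would now dispatch to the analysis for `mode`
    # (quilt-coverage, posture, or squeeze checks) and emit warnings.
    return mode, regions, keypoints
```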
The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
In the embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit may be a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application, or portions or all or portions of the technical solutions that contribute to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (13)

1. A sleep detection early warning method is characterized by comprising the following steps:
acquiring a target monitoring image for monitoring the sleep of a first object;
detecting target area information and target key point information in the target monitoring image, wherein the target area information at least comprises: face region information, limb region information and quilt region information, wherein the target key point information comprises: human body key point information;
determining a target sleep mode of the first subject based on the target area information and the target keypoint information;
and processing the target area information and the target key point information based on a target processing mode corresponding to the target sleep mode to obtain the sleep state of the first object, and generating early warning information corresponding to the sleep state.
2. The method of claim 1, wherein obtaining a target monitoring image that monitors sleep of a first subject comprises:
acquiring a target monitoring video for monitoring the first object to sleep;
and deleting redundant frame images and fuzzy frame images in all frame images of the target monitoring video to obtain the target monitoring image.
3. The method according to claim 1, wherein detecting target area information and target key point information in the target monitoring image comprises:
detecting the face region information in the target monitoring image based on a face detection algorithm in deep learning, wherein the face region information at least comprises: the number of the faces, the types of the faces and the positions of face areas;
detecting the limb area information and the quilt area information in the target monitoring image based on a general target detection algorithm in deep learning, wherein the limb area information at least comprises: the limb area position, and the quilt area information at least comprises: a quilt coverage area position;
detecting the human body key point information in the target monitoring image based on a key point detection algorithm in deep learning, wherein the human body key point information at least comprises: the number of human key points and the positions of the human key points.
4. The method of claim 3, wherein determining the target sleep pattern of the first subject based on the target area information and the target keypoint information comprises:
when the number of the faces does not exceed a first preset threshold or faces of a second object are not included in the face types, determining that the target sleep mode is a first sleep mode;
and when the number of the faces exceeds the first preset threshold and the face type comprises the face of the second object, determining that the target sleep mode is a second sleep mode.
5. The method of claim 4, wherein the first sleep mode comprises: a first sub-sleep mode and a second sub-sleep mode, the second sleep mode including: a third sub-sleep mode and a fourth sub-sleep mode, determining a target sleep mode of the first subject based on the target area information and the target key point information, including:
in the first sleep mode, if the number of the human body key points does not exceed a second preset threshold, determining that the target sleep mode is the first sub-sleep mode, and if the number of the human body key points exceeds the second preset threshold, determining that the target sleep mode is the second sub-sleep mode;
in the second sleep mode, if the number of the human body key points does not exceed a third preset threshold, the target sleep mode is determined to be the third sub-sleep mode, and if the number of the human body key points exceeds the third preset threshold, the target sleep mode is determined to be the fourth sub-sleep mode.
6. The method according to claim 5, wherein when the target sleep mode is the first sub-sleep mode, processing the target area information and the target key point information based on a target processing manner corresponding to the target sleep mode to obtain a sleep state of the first subject, and generating early warning information corresponding to the sleep state includes:
determining a first distance between the center position of the face area of the first object and the center position of the quilt coverage area, and the contact ratio between the position of the limb area of the first object and the position of the quilt coverage area;
when the first distance is smaller than a fourth preset threshold value, determining that the first object is in a first sleep state, and generating first early warning information, wherein the first sleep state indicates that the quilt cover of the first object is too high, and the first early warning information is used for prompting that the quilt needs to be pulled down;
and when the contact ratio is smaller than a fifth preset threshold value, determining that the first object is in a second sleep state, and generating second early warning information, wherein the second sleep state represents that the quilt covering of the first object is too low, and the second early warning information is used for prompting that the quilt needs to be pulled up.
7. The method according to claim 5, wherein when the target sleep mode is the second sub-sleep mode, processing the target area information and the target key point information based on a target processing manner corresponding to the target sleep mode to obtain a sleep state of the first object, and generating early warning information corresponding to the sleep state, comprises:
determining a first number of head key points and a second number of limb key points among the human body key points of the first object, determining a first angle between a first key point vector and a second key point vector, and determining a second distance between two part key points, wherein the first key point vector is the vector from the nose key point to the neck key point, the second key point vector is the normal vector of the vector between the two shoulder key points, and the two part key points comprise at least one of the following: the two shoulder key points, the two waist-side key points, the two arm key points and the two knee key points;
when the first number exceeds a sixth preset threshold and the second number exceeds a seventh preset threshold, determining that the first object is in a third sleep state, wherein the third sleep state indicates that the first object is lying supine;
when the first number is smaller than an eighth preset threshold and the second number exceeds the seventh preset threshold, determining that the first object is in a fourth sleep state and generating third early warning information, wherein the fourth sleep state indicates that the first object is lying prone, and the third early warning information is used for prompting that the sleeping posture of the first object needs to be adjusted;
and when the first angle is smaller than a ninth preset threshold and the second distance is smaller than a tenth preset threshold, determining that the first object is in a fifth sleep state and generating the third early warning information, wherein the fifth sleep state indicates that the first object is lying on its side.
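The posture classification of claim 7 can be sketched with OpenPose/COCO-style keypoints. The keypoint names, the `neck` point (an OpenPose-style convention), and all threshold values below are illustrative assumptions — the claim fixes only the structure of the tests, not the numbers:

```python
import math

def classify_posture(kps, n_head_min=3, n_limb_min=4, head_few=2,
                     angle_thresh=30.0, pair_dist_thresh=25.0):
    """kps: dict of keypoint name -> (x, y), or None when not detected.
    Thresholds are placeholders for the sixth..tenth preset thresholds."""
    head = ["nose", "left_eye", "right_eye", "left_ear", "right_ear"]
    limb = ["left_shoulder", "right_shoulder", "left_hip", "right_hip",
            "left_knee", "right_knee", "left_wrist", "right_wrist"]
    n_head = sum(1 for k in head if kps.get(k) is not None)   # first number
    n_limb = sum(1 for k in limb if kps.get(k) is not None)   # second number

    if n_head > n_head_min and n_limb > n_limb_min:
        return "supine"            # third sleep state: face and limbs visible
    if n_head < head_few and n_limb > n_limb_min:
        return "prone"             # fourth sleep state: warn to adjust posture

    nose, neck = kps.get("nose"), kps.get("neck")
    ls, rs = kps.get("left_shoulder"), kps.get("right_shoulder")
    if None not in (nose, neck, ls, rs):
        v1 = (neck[0] - nose[0], neck[1] - nose[1])   # first key point vector
        sh = (rs[0] - ls[0], rs[1] - ls[1])
        v2 = (-sh[1], sh[0])                          # normal of the shoulder vector
        cosang = ((v1[0] * v2[0] + v1[1] * v2[1]) /
                  (math.hypot(*v1) * math.hypot(*v2)))
        first_angle = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
        second_distance = math.hypot(*sh)             # shoulders close together
        if first_angle < angle_thresh and second_distance < pair_dist_thresh:
            return "side"          # fifth sleep state: warn to adjust posture
    return "unknown"
```

The intuition: lying on one's side foreshortens the shoulder pair in the image (small `second_distance`) and aligns the nose-to-neck vector with the shoulder normal (small `first_angle`).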
8. The method according to claim 5, wherein when the target sleep mode is the third sub-sleep mode, processing the target area information and the target key point information based on a target processing manner corresponding to the target sleep mode to obtain a sleep state of the first object, and generating early warning information corresponding to the sleep state, comprises:
determining a third distance between the center position of the limb area of the second object and the center position of the limb area of the first object;
and when the third distance is smaller than an eleventh preset threshold, determining that the first object is in a sixth sleep state and generating fourth early warning information, wherein the sixth sleep state indicates that the second object and the first object are squeezed against each other, and the fourth early warning information is used for prompting that the second object needs to be moved away from the first object.
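The crowding test of claim 8 is a single center-to-center distance between the two sleepers' limb regions. A sketch, again assuming `(x1, y1, x2, y2)` boxes and an illustrative value for the eleventh preset threshold:

```python
def crowding_by_distance(second_limb_box, first_limb_box, dist_thresh=60.0):
    """Sixth sleep state check: limb-region centers of the two objects too close.
    dist_thresh stands in for the eleventh preset threshold."""
    def center(b):
        return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
    (ax, ay), (bx, by) = center(second_limb_box), center(first_limb_box)
    third_distance = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    if third_distance < dist_thresh:
        return "squeezed: the second object should move away from the first object"
    return None
```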
9. The method according to claim 5, wherein when the target sleep mode is the fourth sub-sleep mode, processing the target area information and the target key point information based on a target processing manner corresponding to the target sleep mode to obtain a sleep state of the first object, and generating early warning information corresponding to the sleep state, comprises:
determining whether a coincidence region exists between the positions of the human body key points of the second object and the position of the limb area of the first object;
and when the coincidence region exists, determining that the first object is in a sixth sleep state and generating fourth early warning information, wherein the sixth sleep state indicates that the second object and the first object are squeezed against each other, and the fourth early warning information is used for prompting that the second object needs to be moved away from the first object.
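The claim-9 variant replaces the distance test with a point-in-region test: a coincidence region exists as soon as any detected keypoint of the second object falls inside the first object's limb box. A sketch under the same assumed box format:

```python
def crowding_by_keypoints(second_keypoints, first_limb_box):
    """Alternative sixth sleep state check: any keypoint of the second object
    inside the first object's limb area counts as a coincidence region.
    second_keypoints: iterable of (x, y) points, with None for undetected ones."""
    x1, y1, x2, y2 = first_limb_box
    for pt in second_keypoints:
        if pt is None:
            continue
        x, y = pt
        if x1 <= x <= x2 and y1 <= y <= y2:
            return "squeezed: the second object should move away from the first object"
    return None
```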
10. The method according to claim 3, further comprising:
detecting whether an occlusion region exists at the face region position of the first object based on a classification detection algorithm in deep learning;
and when the occlusion region exists, generating fifth early warning information, wherein the fifth early warning information is used for prompting that the occlusion on the face of the first object needs to be checked.
11. A sleep detection early warning device, characterized by comprising:
an acquisition module, configured to acquire a target monitoring image for monitoring sleep of a first object;
a detection module, configured to detect target area information and target key point information in the target monitoring image, wherein the target area information at least comprises: face region information, limb region information and quilt region information, and the target key point information comprises: human body key point information;
a determination module, configured to determine a target sleep mode of the first object based on the target area information and the target key point information;
and a processing module, configured to process the target area information and the target key point information based on a target processing manner corresponding to the target sleep mode to obtain a sleep state of the first object, and to generate early warning information corresponding to the sleep state.
12. A non-volatile storage medium, comprising a stored program, wherein when the program runs, a device where the non-volatile storage medium is located is controlled to execute the sleep detection early warning method according to any one of claims 1 to 10.
13. A processor, characterized in that the processor is configured to execute a program, wherein the program, when running, executes the sleep detection early warning method according to any one of claims 1 to 10.
CN202210617512.7A 2022-06-01 2022-06-01 Sleep detection early warning method and device Pending CN114937242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210617512.7A CN114937242A (en) 2022-06-01 2022-06-01 Sleep detection early warning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210617512.7A CN114937242A (en) 2022-06-01 2022-06-01 Sleep detection early warning method and device

Publications (1)

Publication Number Publication Date
CN114937242A true CN114937242A (en) 2022-08-23

Family

ID=82866434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210617512.7A Pending CN114937242A (en) 2022-06-01 2022-06-01 Sleep detection early warning method and device

Country Status (1)

Country Link
CN (1) CN114937242A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115649025A (en) * 2022-12-26 2023-01-31 深圳曦华科技有限公司 Equipment control method and device based on child posture abnormal event in cabin
CN115649025B (en) * 2022-12-26 2023-03-21 深圳曦华科技有限公司 Equipment control method and device based on child posture abnormal event in cabin
CN116313164A (en) * 2023-05-22 2023-06-23 亿慧云智能科技(深圳)股份有限公司 Anti-interference sleep monitoring method, device, equipment and storage medium
CN116313164B (en) * 2023-05-22 2023-08-22 亿慧云智能科技(深圳)股份有限公司 Anti-interference sleep monitoring method, device, equipment and storage medium
CN116978185A (en) * 2023-09-20 2023-10-31 永林电子股份有限公司 Induction control method and device for LED lamp and electronic equipment

Similar Documents

Publication Publication Date Title
CN114937242A (en) Sleep detection early warning method and device
CN110477925A (en) A kind of fall detection for home for the aged old man and method for early warning and system
CN110021140A (en) Monitoring system and monitoring method applied to infant
CN108615333A (en) Infant asphyxia early warning system based on artificial intelligence and method
CN107958572B (en) Baby monitoring system
CN113657150A (en) Fall detection method and device and computer readable storage medium
CN114926957B (en) Infant monitoring system and method based on intelligent home
CN112949417A (en) Tumble behavior identification method, equipment and system
WO2017098265A1 (en) Method and apparatus for monitoring
Sukno et al. Automatic assessment of eye blinking patterns through statistical shape models
CN112184642A (en) Method and device for warning abnormal sleeping posture, storage medium and electronic device
WO2024001588A1 (en) Breathing state detection method and apparatus, device, storage medium and computer program product
CN113392765A (en) Tumble detection method and system based on machine vision
CN113679302A (en) Monitoring method, device, equipment and storage medium based on sweeping robot
CN109044375A (en) A kind of control system and its method of real-time tracking detection eyeball fatigue strength
Walizad et al. Driver drowsiness detection system using convolutional neural network
CN114999643A (en) WiFi-based intelligent monitoring method for old people
CN113963424A (en) Infant asphyxia or sudden death early warning method based on single-order face positioning algorithm
CN117173784B (en) Infant turning-over action detection method, device, equipment and storage medium
CN111563492B (en) Fall detection method, fall detection device and storage device
CN208092911U (en) A kind of baby monitoring systems
JP6822326B2 (en) Watching support system and its control method
US20220254241A1 (en) Ai-based video tagging for alarm management
CN115424341A (en) Fighting behavior identification method and device and electronic equipment
Xie et al. Skeleton-based fall events classification with data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination