WO2017098265A1 - Method and apparatus for monitoring - Google Patents

Method and apparatus for monitoring

Info

Publication number
WO2017098265A1
WO2017098265A1 (application PCT/GB2016/053891)
Authority
WO
WIPO (PCT)
Prior art keywords
baby
camera
face
cot
matching
Prior art date
Application number
PCT/GB2016/053891
Other languages
English (en)
Inventor
Andrea Cavallaro
Tsz Kin HON
Evangelos SARIYANIDI
Ricardo SÁNCHEZ MATILLA
Original Assignee
Queen Mary University Of London
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Queen Mary University Of London filed Critical Queen Mary University Of London
Publication of WO2017098265A1

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons, based on behaviour analysis
    • G08B21/043: Alarms for ensuring the safety of persons responsive to non-activity, based on behaviour analysis, detecting an emergency event, e.g. a fall
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438: Sensor means for detecting
    • G08B21/0476: Cameras to detect unsafe condition, e.g. video cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Definitions

  • the present invention relates to methods and apparatuses for monitoring subjects, in particular for monitoring human babies and infants whilst sleeping.
  • An audio monitor only detects if the baby is making noise and cannot assist the parent or carer in detecting other conditions of concern or interest such as the baby's face becoming covered or the baby waking quietly.
  • Video monitors are known but mostly only transmit a video feed from a camera in the baby unit to a screen in the parent unit and so must be checked regularly by the parent to determine the status of the baby.
  • Other closed-circuit video systems can be used for the same purpose, but a low-light or infra-red camera is necessary since the baby will likely sleep better in a dark environment.
  • a method of monitoring a subject in a monitored area comprising: obtaining a video feed of at least a part of the monitored area expected to include the head of the subject, the video feed being provided by an infra-red/thermal camera located adjacent a side of the monitored area; identifying one or more bright regions in frames of the video feed; analysing the bright region(s) to determine a status of the subject; and providing an alarm signal if the subject is in a predetermined status.
  • the method of the invention can provide a reliable and efficient detection of a change of status of a subject, e.g. a baby, especially when the subject is prone.
  • the identified bright regions of an infra-red/thermal image are an optimum input to motion detection or face recognition by trained model, even in challenging conditions.
  • the present invention also provides a monitoring apparatus including an image processor configured to effect the above method.
  • Figure 1 depicts an apparatus according to an embodiment of the invention in situ
  • Figure 2 is a high level schematic of a method according to an embodiment of the present invention.
  • Figure 3 is a more detailed flow diagram of selected steps in a method according to an embodiment of the invention.
  • Figure 4 depicts an image of a subject obtained from a thermal camera usable in an embodiment of the present invention
  • Figure 5 depicts an image of a baby obtained by a thermal camera in an overhead position
  • Figure 6 is a thermal image of a baby waking up
  • Figure 7 depicts pixel differences between two successive thermal images of a baby waking up
  • Figure 8 depicts a thermal image of a baby in which bounding boxes for candidate face detection areas are shown;
  • Figures 9, 10 and 11 depict the root filter, part filters and weights for the histogram of oriented gradients features respectively for frontal face detection model;
  • Figures 12, 13 and 14 depict the root filter, part filters and weights for the histogram of oriented gradients features respectively for lateral face detection model;
  • Figure 15 is an example training image for frontal face detection showing the bounding box
  • Figure 16 is an example training image for lateral face detection showing the bounding box
  • Figure 17 shows an image of a subject obtained from a thermal camera in which body-localisation has been performed by the first embodiment of the present invention
  • Figure 18 is an example output of an embodiment of the present invention showing detection of an awake state
  • Figure 19 is an example output of an embodiment of the present invention showing a baby asleep with a frontal face detection
  • Figure 20 is an example output of an embodiment of the present invention showing a baby asleep with lateral face detection
  • Figure 21 is an example output of an embodiment of the present invention with no face detected
  • Figure 22 depicts a method of applying feedback in an embodiment of the present invention
  • Figure 23 depicts an apparatus according to a second embodiment of the invention in situ
  • Figure 24 is a high level schematic of a method according to the second embodiment of the present invention.
  • Figure 25 is a more detailed flow diagram of selected steps in a method according to the second embodiment of the invention.
  • Figure 26 shows an example of a baby image collected with an IR CCTV camera in which body-localisation has been performed by the second embodiment of the present invention, and the bounding box that represents the body of the baby.
  • An embodiment of the invention is a system for automatically detecting two events of interest, namely whether a baby in a cot is awake or her face is covered.
  • the term "cot” as used herein is intended to encompass all types of furniture in which a baby or infant can sleep, including crib, prams, bassinets, beds, etc.
  • the term "baby” as used herein is intended to encompass subjects of all ages, in particular babies and infants.
  • When either situation is detected, the system sends an alert, e.g. to a parent, together with the detected event(s) and a video clip capturing the event(s).
  • the video of a baby in a cot is captured by a thermal or IR-CCTV camera from a specific safe position.
  • the system extracts facial and motion features from the image signal.
  • the two situations, namely awake and face being covered, are detected based on a comparison of sets of extracted visual features with corresponding pre-trained models that might adaptively be adjusted for individual babies based on feedback from parents after an initial training period.
  • an embodiment of the invention comprises a thermal camera 3 at a safe position in the front/back of a cot 1.
  • This camera position avoids any devices or cables needing to be positioned above the baby 2, thus avoiding the risk of a baby being hurt by a falling object or suffocated by entanglement in cords.
  • the camera also does not obstruct parental access to the baby.
  • an embodiment of the invention provides a method as depicted in Figure 2.
  • In step S1 an image sequence in the form of a video feed is obtained from thermal camera 3.
  • In step S2 a motion feature extraction process is carried out, and in step S4 a determination of whether the baby is awake is made. The determination of the awake state takes account of feedback provided by a user of the system (S5). If it is determined that the baby is awake, an alert is provided in step S6.
  • step S7 determines whether the baby's face is covered.
  • the determination of face covering also takes account of user feedback S9. If it is determined that the baby's face is covered, then a post-processing step S10 is carried out and an alert given in step S11.
  • An embodiment of the invention can use models based on deformable facial parts (such as described in P. F. Felzenszwalb, R. B. Girshick, D. McAllester and D. Ramanan, "Object Detection with Discriminatively Trained Part-Based Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, 2010).
  • An embodiment of the invention uses two trained facial models for two different sleeping poses of a baby, namely facing up (frontal) and lying on a side (lateral).
  • the training data set includes variations of different face poses, camera view-points/positions and camera rotations and captures different babies' sleeping poses, different home settings (e.g. sizes of cots) and different camera positioning.
  • the two models capture variations of facial features due to the pose difference.
  • an embodiment of the invention compares the detection scores using the two models and detects the face pose.
  • the face pose and motion information provided by an embodiment of the invention could be of interest to parents or doctors to analyse sleeping postures and sleeping patterns. This information can be jointly analysed with other external measures such as temperature, humidity, atmospheric pressure, air quality, alimentation, stress, etc. to create sleep-quality statistics for understanding how external factors affect the baby's sleep quality.
  • the wake-up detection algorithm uses video frame difference for motion feature extraction.
  • Figure 6 is an example frame from a thermal video feed of a baby waking up taken with a camera at the head of a cot. Let (x, y) be the pixel position.
  • the input frame is a grayscale image with values between 0 and 255.
  • the current frame is likewise a grayscale image with values between 0 and 255.
  • k is chosen to be half of the sampling rate of the input video (about 0.5 second), as a baby rarely moves quickly. This approach ensures that movement of all parts of the body, e.g. feet and hands, is detected. A segmentation band in the largest brightest area, discussed below, could be used instead, but it isolates movement of the baby's head and so may be slower to detect the awake state. The sum of absolute pixel differences between the two frames (Figure 7 depicts the pixel difference between two frames of the thermal video feed) is then calculated in step S42 as d(n) = Σ_x Σ_y |I_n(x, y) − I_{n−k}(x, y)|, summed over x = 1..W and y = 1..L.
  • W and L are the width and length of the frame, respectively.
  • the video clip accompanying the alert can be of different lengths or omitted.
  • the alert may be sent to a dedicated parent unit or some other device such as a computer or smartphone by any suitable means.
  • the baby is detected as sleeping S46 and the embodiment checks if the baby's face is covered or not.
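The wake-up test described above (the sum of absolute pixel differences between the current frame and the frame k samples earlier, compared against a threshold) can be sketched as follows. The frame representation, threshold value and function names are illustrative assumptions, not taken from the patent:

```python
# Sketch of frame-difference wake-up detection. Frames are lists of rows of
# grayscale values (0-255); the threshold is an assumed, tunable value.

def abs_frame_difference(frame_a, frame_b):
    """Sum of absolute pixel differences between two grayscale frames."""
    return sum(
        abs(pa - pb)
        for row_a, row_b in zip(frame_a, frame_b)
        for pa, pb in zip(row_a, row_b)
    )

def is_awake(frame_now, frame_prev, threshold=500):
    """Flag motion (possible waking) when the difference exceeds a threshold."""
    return abs_frame_difference(frame_now, frame_prev) > threshold

# Example: a still frame versus one with a bright (warm) moving region.
still = [[10] * 8 for _ in range(8)]
moved = [row[:] for row in still]
for y in range(2, 6):
    for x in range(2, 6):
        moved[y][x] = 200  # simulated limb movement

print(is_awake(moved, still))  # large change -> awake
print(is_awake(still, still))  # no change -> asleep
```

In practice the two frames would be the cropped body regions at times n and n−k, as the text describes.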
  • the video stream is processed frame by frame. First the background with lower temperature is filtered out and only the largest bright area in the frame I_n is retained, where n is the time index.
  • The segmentation S71 can be done, for example, using MATLAB (TM) functions; see Image Analyst, "Image Segmentation Tutorial", MATLAB Central, 2009, available online.
  • the input images from the thermal camera are in grayscale, e.g. with grayscale values between 0 and 255.
  • the maximum grayscale values indicate the highest temperature.
  • First, the pixels of an image with grayscale values larger than a fixed threshold (150) are selected. Then the set of selected pixels is processed to remove discontinuities that are not expected to reflect the underlying body.
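A minimal sketch of this thresholding-and-largest-region step, using a flood fill in place of the MATLAB routines the text mentions; the threshold value (150) comes from the text, while the connectivity rule and function names are illustrative assumptions:

```python
# Keep pixels above a fixed grayscale threshold, then retain only the
# largest 4-connected bright region (assumed to be the baby's body).

from collections import deque

def largest_bright_region(frame, threshold=150):
    """Return the set of (row, col) pixels of the largest 4-connected
    region whose grayscale value exceeds the threshold."""
    rows, cols = len(frame), len(frame[0])
    bright = {(r, c) for r in range(rows) for c in range(cols)
              if frame[r][c] > threshold}
    best = set()
    while bright:
        seed = next(iter(bright))
        region, queue = {seed}, deque([seed])
        bright.discard(seed)
        while queue:  # flood fill one connected component
            r, c = queue.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (nr, nc) in bright:
                    bright.discard((nr, nc))
                    region.add((nr, nc))
                    queue.append((nr, nc))
        if len(region) > len(best):
            best = region
    return best

# Two warm blobs: a 2x2 noise blob and a 3x2 "body" blob.
frame = [[0] * 6 for _ in range(6)]
for r, c in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    frame[r][c] = 200
for r, c in [(3, 3), (3, 4), (4, 3), (4, 4), (5, 3), (5, 4)]:
    frame[r][c] = 220

print(len(largest_bright_region(frame)))  # the larger blob (6 pixels) wins
```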
  • the output for frame I_n is the matching score for each model (frontal and lateral).
  • the frontal model Mf and lateral model Mi are deformable part models (DPM).
  • Suitable DPM software is available from R. Girshick, P. Felzenszwalb, and D. McAllester, Discriminatively Trained Deformable Part Models, Release 4, 2010, online at http://cs.brown.edu/~pff/latent-release4/.
  • DPM uses histogram of oriented gradients (HOG) features (as described in N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 886-893, June 2005) to capture local appearance and shape.
  • the DPM algorithm includes extracting low-level HOG features, using a star-structured part-based model for efficient matching, and discriminative learning by latent support vector machine (SVM) (as described in S. Andrews, I. Tsochantaridis, and T. Hofmann, "Support vector machines for multiple-instance learning," Advances in Neural Information Processing Systems, pp. 561-568, 2002).
  • Figures 9 to 14 show: root filters for the frontal (Fig. 9) and lateral (Fig. 12) models; part filters (Figs. 10 and 13); and weights for the histogram of oriented gradient features (Figs. 11 and 14).
  • the algorithm generates a score for each detection that can help detect whether the baby's face is visible or not, as well as the pose of the baby when its face is visible (i.e. face-up or on its side).
  • Figure 8 depicts an example image with bounding boxes showing the regions identified by the frontal and lateral models as most face-like. The scores are shown adjacent to the boxes.
  • the models are trained for face detection in thermal images using a specially collected data set with positive and negative samples from different poses, viewpoints, cropping and rotations (Table 1).
  • Table 1 Training data set for the face-covered detection.
  • the data set is annotated with bounding boxes as shown in Figures 15 and 16 which are example frontal and lateral training images.
  • the aspect ratio of frontal bounding boxes is fixed to about 1:2.
  • the aspect ratio of a frontal bounding box can be between 1:1.5 and 1:3, desirably between about 1:1.75 and 1:2.5.
  • the aspect ratio of lateral bounding boxes is about 2:1.
  • the aspect ratio of a lateral bounding box can be between 1.5:1 and 3:1, desirably between about 1.75:1 and 2.5:1.
  • Negative samples are also used, e.g. a cot without baby, hot and cold bottles, lights, kitchen and other more challenging samples such as baby's body parts or the baby covered with a blanket.
  • the decision of face (and pose) detection is made in decision steps S81, S82, S100, which compare the matching score for the frontal model and the matching score for the lateral model against the frontal threshold and lateral threshold respectively, together with the absolute differences between the scores and thresholds.
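A hedged sketch of this decision step: the frontal and lateral matching scores are compared against their thresholds, and the face is treated as covered when neither model scores highly enough. The threshold values and the tie-break rule are assumptions for illustration, not the patent's exact logic:

```python
# Simplified face/pose decision from the two DPM matching scores.
# Thresholds and tie-breaking are illustrative assumptions.

def classify_face(score_frontal, score_lateral,
                  t_frontal=-0.2, t_lateral=-0.2):
    """Return 'frontal', 'lateral', or 'covered'."""
    frontal_ok = score_frontal >= t_frontal
    lateral_ok = score_lateral >= t_lateral
    if frontal_ok and (not lateral_ok or score_frontal >= score_lateral):
        return "frontal"   # face visible, baby facing up
    if lateral_ok:
        return "lateral"   # face visible, baby on its side
    return "covered"       # neither model matched well enough

print(classify_face(0.5, -0.8))   # frontal
print(classify_face(-0.9, 0.3))   # lateral
print(classify_face(-0.9, -0.8))  # covered
```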
  • a post-processing step S10 (Fig. 2) is applied to filter out some outliers in the face detection.
  • the detection information of previous frames is used to update the current detection D_n and the current face-covered state X_n.
  • the current detection D_n includes the information of the detected bounding box, pose P and matching score.
  • the post-processing is divided into four cases: (i) state is 'face' and a face is detected (i.e. D_n is not null); (ii) state is 'face' but no face is detected; (iii) state is 'covered' but a face is detected; and (iv) state is 'covered' and no face is detected.
  • the current detection D n and current face covered state X n are updated by comparing the detection information in previous frames with the current detection.
  • the information used in the post-processing includes the mean grayscale value difference in the bounding box.
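One simple way to realise this temporal post-processing is a majority vote over the recent detection history, so isolated misdetections are filtered out. The window size and voting rule are assumptions for illustration; the bounding-box and grayscale comparisons the text mentions are omitted here:

```python
# Temporal smoothing of the per-frame face-covered decision: the state only
# flips to 'covered' when a majority of recent frames had no face detection.

from collections import deque

class DetectionSmoother:
    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # recent raw detections

    def update(self, face_detected):
        """Feed the raw per-frame result; return the smoothed state."""
        self.history.append(bool(face_detected))
        covered_votes = self.history.count(False)
        return "covered" if covered_votes > len(self.history) // 2 else "face"

smoother = DetectionSmoother(window=5)
raw = [True, True, False, True, True, False, False, False, False]
smoothed = [smoother.update(r) for r in raw]
print(smoothed[2])   # isolated miss at frame 3 is filtered -> 'face'
print(smoothed[-1])  # sustained misses -> 'covered'
```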
  • the thresholds are initially set using training data and then adaptively adjusted using user feedback.
  • the system asks the parents if the detection decision was correct.
  • feedback can continue throughout use of the apparatus rather than just in a training period.
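The feedback adaptation could be sketched as nudging a decision threshold whenever the parent reports a wrong decision; the step size and bounds below are illustrative assumptions, chosen so that repeated false alarms move the threshold from -0.2 to -0.5 as in the trials described later:

```python
# Adapt a matching-score threshold from parent feedback. A false "covered"
# alarm lowers the threshold (face accepted more readily); a missed covering
# raises it. Step size and clamping bounds are assumed values.

def adapt_threshold(threshold, detected_covered, parent_says_covered,
                    step=0.1, lower=-0.5, upper=0.5):
    """Adjust the threshold from one item of parent feedback."""
    if detected_covered and not parent_says_covered:
        threshold -= step  # false alarm: make face detection easier
    elif not detected_covered and parent_says_covered:
        threshold += step  # missed covering: make detection stricter
    return max(lower, min(upper, threshold))

t = -0.2
for _ in range(5):  # repeated false alarms reported by the parent
    t = adapt_threshold(t, detected_covered=True, parent_says_covered=False)
print(t)  # clamped at the lower bound, -0.5
```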
  • TP true positive
  • FP false positive
  • FN false negative
  • TN true negative
  • the evaluation measure is also based on the overlap between the detected bounding box, B_d, and the ground truth, B_g.
  • the overlap is calculated as
  • a result is counted as a true positive if the overlap between the detection and the ground truth is over 40% and the estimated pose matches the ground-truth pose.
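The overlap formula itself is elided in the text; a standard choice, assumed here, is intersection-over-union between the detected box B_d and the ground-truth box B_g, with a true positive declared when the overlap exceeds 40% and the poses match:

```python
# Intersection-over-union overlap between two axis-aligned boxes, and the
# true-positive rule stated in the text. The IoU formula is an assumption
# standing in for the elided overlap definition.

def iou(box_a, box_b):
    """Overlap of two (x, y, width, height) boxes as intersection/union."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def is_true_positive(detected, truth, detected_pose, truth_pose):
    """Overlap over 40% and matching pose, per the evaluation rule above."""
    return iou(detected, truth) > 0.4 and detected_pose == truth_pose

# Two 10x10 boxes offset by (2, 2): overlap 64/136, about 47%.
print(is_true_positive((0, 0, 10, 10), (2, 2, 10, 10), "frontal", "frontal"))
```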
  • the false positive rates of the face-covered detection in some videos (5083, 5188 and 5321) are much higher than the false negative rates because the camera was not put in the ideal position.
  • the ideal position for the face-covered detection should be a camera placed at the centre of the edge of the bed straight facing the baby's face. The camera was put in different horizontal positions (sliding along the edge of the bed) to check the robustness of the algorithm with different camera positions. Another reason is that the decision thresholds are chosen to have higher false positive rate than false negative rate as it is more concerning if a baby's face is covered but detected as a face (false negative).
  • An example of adaptation of the decision threshold in the face-covered detection is shown in Figure 22.
  • the baby's face is not covered but is incorrectly detected as covered.
  • the threshold T_h is decreased from -0.2 to -0.5 in some trials, and the face can then be correctly detected.
  • an IR-CCTV camera 3a mounted on a wall is used, as depicted in Figure 23.
  • Such a camera position again avoids the need for any devices or cords to be positioned overhead the baby, thus avoiding a baby being hurt by a falling object or suffocated by entanglement in cords, as could occur if the camera were placed overhead.
  • a system that works on an IR-CCTV camera mounted on the wall can be more attractive for a consumer, because such cameras are commonly used for home security and families can use the same camera for monitoring the baby.
  • a camera on the wall is advantageous also because it can be used by families that prefer to sleep their baby in a bed (rather than a cot), which may not have a place to mount a camera. Of course either type of camera can be used in either position.
  • Body localisation S200 aims at cropping out the part of the frame that contains the body.
  • the camera used for baby monitoring is an IR-CCTV camera 3a on the wall, then localising the body becomes more difficult, as the field of view is larger and localisation must be performed within the entire room rather than just the cot.
  • An automatic body detection method can be used to crop the body region. The method must be computationally efficient.
  • the present embodiment therefore uses an automatic body detection method that is based on the cascaded AdaBoost machine learning method [P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, pp. 511-518, 2001, which document is incorporated by reference].
  • the wake-up detection algorithm uses video frame difference for motion feature extraction.
  • the body of the baby is localised s200 in the current frame, at time n.
  • the bounding box of the body is used to generate two cropped images: at time n and at time n-k. k is chosen to be half of the sampling rate of the input video (about 0.5 second).
  • Figure 25 is a block diagram of the detection of waking up based on frame difference and of the detection of the face being covered based on the deformable part model.
  • Figure 26 shows an example of a baby image collected with an IR CCTV camera, and the bounding box that represents the body of the baby.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a system for automatically detecting two events of interest, namely whether a baby in a cot is awake or whether its face is covered. When either situation is detected, the system sends an alert with the corresponding detected event(s) and a video clip capturing the event(s). The video of a baby in a cot is captured by a thermal or IR-CCTV camera from a specific safe position. The system extracts facial and motion features from the image signal. The two events, namely the baby being awake and the face being covered, are detected based on a comparison of sets of extracted visual features with corresponding pre-trained models that may be adaptively adjusted for individual babies based on feedback from parents after an initial training period. The baby's motion profiles and face-pose information can be used together with external variables to understand how environmental factors affect the quality of the baby's sleep.
PCT/GB2016/053891 2015-12-11 2016-12-09 Method and apparatus for monitoring WO2017098265A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1521885.2 2015-12-11
GBGB1521885.2A GB201521885D0 (en) 2015-12-11 2015-12-11 Method and apparatus for monitoring

Publications (1)

Publication Number Publication Date
WO2017098265A1 (fr) 2017-06-15

Family

ID=55274592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2016/053891 WO2017098265A1 (fr) 2015-12-11 2016-12-09 Method and apparatus for monitoring

Country Status (2)

Country Link
GB (1) GB201521885D0 (fr)
WO (1) WO2017098265A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108542389A (zh) * 2018-02-23 2018-09-18 山东沃尔德生物技术有限公司 Infant state monitoring system
CN112069949A (zh) * 2020-08-25 2020-12-11 开放智能机器(上海)有限公司 Artificial-intelligence-based infant sleep monitoring system and method
CN112712020A (zh) * 2020-12-29 2021-04-27 文思海辉智科科技有限公司 Sleep monitoring method, apparatus and system
CN112971730A (zh) * 2021-04-20 2021-06-18 广东德泷智能科技有限公司 Blockchain-based infant sleep-health data monitoring system
WO2022039663A1 (fr) * 2020-08-18 2022-02-24 Conex Healthcare Pte. Ltd. Non-contact and non-intrusive continuous monitoring platform
CN114518172A (zh) * 2021-08-26 2022-05-20 中华人民共和国深圳海关 Method, apparatus, device and storage medium for monitoring operation of a body-temperature monitoring system
US11386676B2 (en) 2018-10-19 2022-07-12 Shanghai Sensetime Intelligent Technology Co., Ltd Passenger state analysis method and apparatus, vehicle, electronic device and storage medium
CN112712020B (zh) * 2020-12-29 2024-05-31 文思海辉智科科技有限公司 Sleep monitoring method, apparatus and system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2019021445A1 (ja) * 2017-07-28 2020-06-25 株式会社オプティム Determination system, method and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101464950A (zh) * 2009-01-16 2009-06-24 北京航空航天大学 Video face recognition and retrieval method based on online learning and Bayesian inference
US20150213317A1 (en) * 2014-01-28 2015-07-30 Challentech International Corporation Intelligent Monitoring System

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101464950A (zh) * 2009-01-16 2009-06-24 北京航空航天大学 Video face recognition and retrieval method based on online learning and Bayesian inference
US20150213317A1 (en) * 2014-01-28 2015-07-30 Challentech International Corporation Intelligent Monitoring System

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YICHUAN TANG ET AL: "Robust Boltzmann Machines for recognition and denoising", COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2012 IEEE CONFERENCE ON, IEEE, 16 June 2012 (2012-06-16), pages 2264 - 2271, XP032232335, ISBN: 978-1-4673-1226-4, DOI: 10.1109/CVPR.2012.6247936 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108542389A (zh) * 2018-02-23 2018-09-18 山东沃尔德生物技术有限公司 Infant state monitoring system
CN108542389B (zh) * 2018-02-23 2020-11-13 山东沃尔德生物技术有限公司 Infant state monitoring system
US11386676B2 (en) 2018-10-19 2022-07-12 Shanghai Sensetime Intelligent Technology Co., Ltd Passenger state analysis method and apparatus, vehicle, electronic device and storage medium
WO2022039663A1 (fr) * 2020-08-18 2022-02-24 Conex Healthcare Pte. Ltd. Non-contact and non-intrusive continuous monitoring platform
AU2020464165B2 (en) * 2020-08-18 2023-05-18 Conex Healthcare Pte. Ltd. Non-contact and non-intrusive continuous monitoring platform
CN112069949A (zh) * 2020-08-25 2020-12-11 开放智能机器(上海)有限公司 Artificial-intelligence-based infant sleep monitoring system and method
CN112712020A (zh) * 2020-12-29 2021-04-27 文思海辉智科科技有限公司 Sleep monitoring method, apparatus and system
CN112712020B (zh) * 2020-12-29 2024-05-31 文思海辉智科科技有限公司 Sleep monitoring method, apparatus and system
CN112971730A (zh) * 2021-04-20 2021-06-18 广东德泷智能科技有限公司 Blockchain-based infant sleep-health data monitoring system
CN112971730B (zh) * 2021-04-20 2021-08-06 广东德泷智能科技有限公司 Blockchain-based infant sleep-health data monitoring system
CN114518172A (zh) * 2021-08-26 2022-05-20 中华人民共和国深圳海关 Method, apparatus, device and storage medium for monitoring operation of a body-temperature monitoring system
CN114518172B (zh) * 2021-08-26 2023-11-21 中华人民共和国深圳海关 Method, apparatus, device and storage medium for monitoring operation of a body-temperature monitoring system

Also Published As

Publication number Publication date
GB201521885D0 (en) 2016-01-27

Similar Documents

Publication Publication Date Title
WO2017098265A1 (fr) Method and apparatus for monitoring
Yu et al. A posture recognition-based fall detection system for monitoring an elderly person in a smart home environment
Feng et al. Fall detection for elderly person care in a vision-based home surveillance environment using a monocular camera
CN106557726B (zh) Face identity authentication system with silent liveness detection and method thereof
Rougier et al. Robust video surveillance for fall detection based on human shape deformation
Belshaw et al. Towards a single sensor passive solution for automated fall detection
CN109800643B (zh) Multi-angle identity recognition method for live human faces
WO2015172445A1 (fr) Multifunctional domestic intelligent robot
Shoaib et al. View-invariant fall detection for elderly in real home environment
TW201140511A (en) Drowsiness detection method
Le et al. Eye blink detection for smart glasses
Joshi et al. A fall detection and alert system for an elderly using computer vision and Internet of Things
CN110852306A (zh) Artificial-intelligence-based security monitoring system
US20190303656A1 (en) Multi-level state detecting system and method
Tao et al. 3D convolutional neural network for home monitoring using low resolution thermal-sensor array
Zambanini et al. Detecting falls at homes using a network of low-resolution cameras
Hung et al. Fall detection with two cameras based on occupied area
Khan et al. Video analytic for fall detection from shape features and motion gradients
Hung et al. The estimation of heights and occupied areas of humans from two orthogonal views for fall detection
WO2020064580A1 (fr) Deriving information about a person's sleep and wake states from a sequence of video frames
JP6822326B2 (ja) Monitoring support system and control method thereof
Rathour et al. Klugoculus: A vision-based intelligent architecture for security system
Soni et al. Single Camera based Real Time Framework for Automated Fall Detection
KR20210158584A (ko) Integrated smart human-search solution using face recognition and multi-object tracking of similar clothing colours
Alaliyat Video-based fall detection in elderly’s houses

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16813023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16813023

Country of ref document: EP

Kind code of ref document: A1