CN117690159A - Infant prone-sleep monitoring method, device and equipment based on multi-modal data fusion - Google Patents
Infant prone-sleep monitoring method, device and equipment based on multi-modal data fusion
- Publication number
- CN117690159A (application CN202311683971.6A)
- Authority
- CN
- China
- Prior art keywords
- infant
- image
- head
- real
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/803—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention relates to the technical field of infant care and provides an infant prone-sleep monitoring method, device and equipment based on multi-modal data fusion, solving the problem that prior-art systems cannot accurately detect infant prone sleeping and therefore leave infants at a high risk of suffocation. The method comprises the following steps: acquiring a real-time visible light image and a real-time thermal infrared image of an infant care area; calibrating the real-time visible light image and the real-time thermal infrared image to obtain a homography matrix; extracting a first infant region image from the thermal infrared image according to infant body temperature characteristics; mapping the first infant region image into the real-time visible light image according to the homography matrix and outputting a second infant region image; and performing infant head detection on the second infant region image, and, when the infant head is detected, performing prone-sleep detection on the head region image and outputting the prone-sleep detection result. The invention improves the accuracy and reliability of infant prone-sleep detection, thereby increasing infant safety and reducing the risk of suffocation.
Description
Technical Field
The invention relates to the technical field of infant care, and in particular to an infant prone-sleep monitoring method, device and equipment based on multi-modal data fusion.
Background
Detecting prone sleeping in infants is an important topic concerning children's health and safety. Prone sleeping here refers to any sleeping position other than the supine position, usually lying on the stomach or on the side. Sleeping prone brings the infant's face into direct contact with the bed surface, which increases the risk of inhaling harmful substances such as second-hand smoke or bed dust. Prone sleeping can also obstruct the airway and prevent normal breathing; detecting it ensures that the infant's respiratory tract remains protected and reduces the risk of respiratory problems. Prone sleeping may further lead to deformation of the infant's head or unbalanced development of the neck muscles, a condition known as "head tilt syndrome"; detecting prone sleep in time and taking appropriate precautions, such as head positioning, neck activity and sleep-position monitoring, can reduce these problems. Currently, parents and caregivers use infant monitoring cameras to observe sleeping posture in real time. Such cameras are usually equipped with night vision and work under low-light conditions for night-time monitoring; some also provide motion detection and can send an alarm or notification if the infant stays in a prone posture for too long.
Chinese patent CN112712020A discloses a sleep monitoring method, device and system. The method judges whether a monitored object exists in a designated area according to a monitoring image; the object may be a human or an animal, and the human may be an infant or another person who needs monitoring. The monitoring image may be a visible light image, an infrared image, or both. When visible light and infrared images are used for joint judgment, the monitored object is considered present in the designated area as long as either the object-recognition result or the heat-emitting-body judgment indicates that an object or heat source exists. During sleep-posture detection, posture recognition and posture-change frequency calculation can be performed; anti-suffocation monitoring of the infant is realized through posture recognition, and sleep-quality statistics and sleep-habit analysis can be derived from posture recognition and posture-change frequency. However, infant sleeping postures are varied, including prone, supine and lateral positions, and the body may twist to different degrees during sleep. This increases the complexity of posture recognition, so highly accurate algorithms and training data are required to distinguish these postures, and recognition is difficult. At the same time, such sleep monitoring systems are easily disturbed by environmental factors: bed sheets, toys or other objects on the bed may be misidentified as the infant, making the anti-suffocation detection inaccurate.
Therefore, how to accurately detect infant prone sleeping and prevent suffocation is a problem that urgently needs to be solved.
Disclosure of Invention
In view of the above, the invention provides a method, a device and equipment for monitoring infant prone sleeping based on multi-modal data fusion, which solve the problem that the prior art cannot accurately detect infant prone sleeping and therefore leaves infants at a high risk of suffocation.
The technical scheme adopted by the invention is as follows:
in a first aspect, the invention provides a method for monitoring infant prone sleeping based on multi-modal data fusion, which comprises the following steps:
s1: acquiring a real-time visible light image and a real-time thermal infrared image of an infant care area;
s2: calibrating the real-time visible light image and the real-time thermal infrared image to obtain a homography matrix between the visible light image and the thermal infrared image;
s3: according to the body temperature characteristics of the infants, extracting a first infant area image from the real-time thermal infrared image;
s4: mapping the first infant area image into the real-time visible light image according to the homography matrix, and outputting a second infant area image in the real-time visible light image;
S5: performing infant head detection on the second infant region image, and, when the infant head is detected, performing prone-sleep detection on the infant head region image and outputting the infant prone-sleep detection result.
Preferably, the S2 includes:
s21: acquiring first vertex position information of a preset reference object according to color difference in a real-time visible light image;
s22: acquiring second vertex position information of the preset reference object according to temperature information in the real-time thermal infrared image and combining an edge detection algorithm and a right angle detection algorithm;
s23: and calculating the first vertex position information and the second vertex position information, and outputting the homography matrix.
Preferably, the S3 includes:
s31: acquiring a temperature value of each point in the real-time thermal infrared image and a preset body temperature threshold value related to the infant;
s32: when the temperature value is larger than the body temperature threshold value, outputting an area image of the point corresponding to the temperature value as the first infant area image.
Preferably, the S5 includes:
s51: inputting the second infant area image into a pre-trained infant head detection model, and outputting a head detection result;
S52: if the head detection result is that the infant head is detected, it indicates that the infant is in the nursing area and the infant head is not occluded; prone-sleep detection is then performed on the infant head region image, and the infant prone-sleep detection result is output;
s53: if the head detection result is that the infant head is not detected, it indicates that the infant is in the nursing area but the infant head is occluded, and a first safety prompt is sent to the user.
Preferably, the S52 includes:
s521: if the infant is in the nursing area and the infant head is not shielded, acquiring a first infant head image in the second infant area image;
s522: according to the homography matrix, mapping the first infant head image into the first infant area image, and outputting a second infant head image in the first infant area image;
s523: according to the body temperature characteristics in the second infant head image, performing prone-sleep detection on the infant, and outputting the prone-sleep detection result.
Preferably, the S523 includes:
s5231: converting the second infant head image into a corresponding gray level image, and marking a standard gray level value corresponding to the infant body temperature in the gray level image;
S5232: according to the standard gray value, acquiring the proportion of pixels smaller than the standard gray value in the second infant head image;
s5233: obtaining the prone-sleep detection result according to the pixel proportion and a preset proportion threshold value.
Preferably, the S5233 includes:
s52331: acquiring a first proportional threshold and a second proportional threshold which are preset and related to the sleeping posture of the infant, wherein the first proportional threshold is smaller than the second proportional threshold;
s52332: if the pixel proportion is smaller than the first proportion threshold, the temperature of the infant head area is considered lower than body temperature, and the detection result is prone sleep;
s52333: if the pixel proportion is larger than the second proportion threshold, the temperature of the infant head area is considered at or near body temperature, and the detection result is supine sleep;
s52334: if the pixel proportion is not smaller than the first proportion threshold and not larger than the second proportion threshold, the detection result is side sleep.
In a second aspect, the present invention provides an infant prone-sleep monitoring device based on multi-modal data fusion, the device comprising:
the image acquisition module is used for acquiring a real-time visible light image and a real-time thermal infrared image of the infant nursing area;
The calibration module is used for calibrating the real-time visible light image and the real-time thermal infrared image to obtain a homography matrix between the visible light image and the thermal infrared image;
the first infant image extraction module is used for extracting a first infant region image from the real-time thermal infrared image according to infant body temperature characteristics;
the image mapping module is used for mapping the first infant area image into the real-time visible light image according to the homography matrix and outputting a second infant area image in the real-time visible light image;
and the prone-sleep detection module is used for performing infant head detection on the second infant region image, performing prone-sleep detection on the infant head region image when the infant head is detected, and outputting an infant prone-sleep detection result.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: at least one processor, at least one memory and computer program instructions stored in the memory, which when executed by the processor, implement the method as in the first aspect of the embodiments described above.
In a fourth aspect, embodiments of the present invention also provide a storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect of the embodiments described above.
In summary, the beneficial effects of the invention are as follows:
The invention provides an infant prone-sleep monitoring method, device and equipment based on multi-modal data fusion. The method comprises the following steps: acquiring a real-time visible light image and a real-time thermal infrared image of an infant care area; calibrating the two images to obtain a homography matrix between the visible light image and the thermal infrared image; extracting a first infant region image from the real-time thermal infrared image according to infant body temperature characteristics; mapping the first infant region image into the real-time visible light image according to the homography matrix and outputting a second infant region image; and performing infant head detection on the second infant region image, and, when the head is detected, performing prone-sleep detection on the head region image and outputting the prone-sleep detection result. The invention combines the information of visible light and thermal infrared images in a multi-modal data fusion manner, which improves the accuracy of prone-sleep detection: the thermal infrared image accurately captures the infant's body temperature distribution, and the temperature distribution of the head can be used to distinguish prone sleep from other sleeping postures. Fusing the visible light image with the thermal infrared image provides more comprehensive information: the visible light image captures the infant's external form and posture, while the thermal infrared image provides body temperature information, and combining the two reduces the false alarm rate and improves the reliability of prone-sleep detection. Performing head detection followed by prone-sleep detection on the second infant region image forms a cascaded detection process that helps eliminate interference from other body parts or objects and improves the specificity of prone-sleep detection. Overall, by combining multi-modal data, homography-matrix mapping, head detection and prone-sleep detection, the accuracy and reliability of infant prone-sleep detection are improved, thereby increasing infant safety and reducing the risk of suffocation.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below; other drawings obtained from these drawings by a person skilled in the art without inventive effort also fall within the scope of the present invention.
Fig. 1 is a flow chart of the overall operation of the infant prone-sleep monitoring method based on multi-modal data fusion in embodiment 1 of the present invention;
Fig. 2 is a schematic flow chart of calibrating the real-time visible light image and the real-time thermal infrared image in embodiment 1 of the present invention;
Fig. 3 is a flowchart of extracting the first infant region image in embodiment 1 of the present invention;
Fig. 4 is a schematic flow chart of prone-sleep detection on the infant head region image in embodiment 1 of the present invention;
Fig. 5 is a flow chart of mapping the first infant head image into the first infant region image in embodiment 1 of the present invention;
Fig. 6 is a schematic flow chart of prone-sleep detection according to the body temperature characteristics in the second infant head image in embodiment 1 of the present invention;
Fig. 7 is a flow chart of obtaining the prone-sleep detection result from the pixel proportion and the proportion thresholds in embodiment 1 of the present invention;
Fig. 8 is a block diagram of the infant prone-sleep monitoring device based on multi-modal data fusion in embodiment 3 of the present invention;
Fig. 9 is a schematic structural diagram of the electronic device in embodiment 4 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described fully below with reference to the accompanying drawings. It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. In the description of the present invention, terms such as "center," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," and "outer" indicate orientations or positional relationships based on the drawings; they are used merely to facilitate and simplify the description and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the invention. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element. Provided they do not conflict, the embodiments of the present invention and the features of the embodiments may be combined with each other, and such combinations fall within the protection scope of the present invention.
Example 1
Referring to fig. 1, embodiment 1 of the invention discloses a method for monitoring infant prone sleeping based on multi-modal data fusion, which comprises the following steps:
s1: acquiring a real-time visible light image and a real-time thermal infrared image of an infant care area;
specifically, two different types of sensors are used to acquire real-time monitoring information in the infant care area. The first is a visible light camera that records a visible light video stream and is fitted with a 940 nm infrared lamp set; this infrared imaging technique is suitable for monitoring at night or under low-light conditions, capturing the infrared reflections of the 940 nm light source from the surrounding scene and converting them into a black-and-white visible light image, so that clear monitoring images can be obtained under extremely low illumination. The system is also provided with a thermal infrared camera that captures the thermal radiation emitted by the target object; this radiation is determined by the object's temperature distribution and therefore provides temperature information, which is very useful for monitoring infants because it helps detect their body temperature characteristics, such as the relative temperature of the head area. The combination of the two sensors allows the visible light image and the thermal infrared image to be obtained simultaneously, and this multi-modal data fusion provides more comprehensive information for prone-sleep detection and other monitoring tasks.
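A minimal sketch of how the two video streams might be acquired with OpenCV, assuming both cameras are exposed as standard video devices; the device indices are illustrative assumptions, and in practice a thermal camera often requires a vendor SDK rather than a generic capture interface:

```python
import cv2

# Device indices are assumptions for illustration: 0 = visible-light camera
# (with 940 nm IR illumination), 1 = thermal infrared camera exposed as a
# standard video device. Real thermal cameras may need a vendor SDK instead.
visible_cap = cv2.VideoCapture(0)
thermal_cap = cv2.VideoCapture(1)

def grab_frame_pair():
    """Grab one roughly synchronised visible/thermal frame pair."""
    ok_v, visible_frame = visible_cap.read()
    ok_t, thermal_frame = thermal_cap.read()
    if not (ok_v and ok_t):
        raise RuntimeError("failed to read from one of the cameras")
    return visible_frame, thermal_frame
```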
S2: calibrating the real-time visible light image and the real-time thermal infrared image to obtain a homography matrix between the visible light image and the thermal infrared image;
specifically, the real-time visible light image and the real-time thermal infrared image are calibrated to obtain a homography matrix between them, so that the two images can be aligned in the same coordinate system. Calibration generally involves determining the intrinsic and extrinsic parameters of the cameras through a series of feature points or calibration boards at known positions, and establishing the transformation relationship between the two images. The homography matrix corresponding to this transformation can map points on the thermal infrared image onto the corresponding visible light image, achieving pixel-level alignment between the two. This is very important for subsequent analysis and detection, because it ensures that the visible light information and the thermal infrared information can be cross-referenced, yielding more accurate monitoring and recognition results.
In one embodiment, referring to fig. 2, the step S2 includes:
s21: acquiring first vertex position information of a preset reference object according to color difference in a real-time visible light image;
S22: acquiring second vertex position information of the preset reference object according to temperature information in the real-time thermal infrared image and combining an edge detection algorithm and a right angle detection algorithm;
s23: and calculating the first vertex position information and the second vertex position information, and outputting the homography matrix.
Specifically, a heated black rectangular plate is first placed in the common field of view of the thermal infrared camera and the visible light camera; the plate has distinctive features in both images and can be used for calibration. In the visible light image, the black rectangular plate is extracted through color difference, and its four vertices D1 (D11, D12, D13, D14) are determined. In the infrared image, since the temperature of the plate is significantly higher than the surrounding environment, its four vertices D2 (D21, D22, D23, D24) can be obtained by extracting the brightest part of the image and applying edge detection and right-angle detection. The homography matrix H describes the transformation between the visible light image and the infrared image, establishing the correspondence between D1 and D2, and is calculated from the following relation:
[D1] = H · [D2]
this equation expresses a linear mapping between D1 and D2, where [D1] and [D2] are the coordinate vectors of D1 and D2 respectively. Computing the homography matrix H allows points on the infrared image to be mapped onto the visible light image, thereby aligning the two images and enhancing the accuracy and reliability of subsequent image processing.
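A minimal sketch of how H could be computed from the four vertex correspondences with OpenCV; the vertex coordinates below are placeholders, not values from the patent:

```python
import cv2
import numpy as np

# Placeholder corner coordinates of the heated black rectangular plate:
# D1 = four corners detected in the visible-light image,
# D2 = the same four corners detected in the thermal infrared image.
D1 = np.float32([[120,  80], [520,  85], [515, 390], [118, 385]])
D2 = np.float32([[ 30,  20], [290,  22], [288, 210], [ 29, 208]])

# H maps points from the infrared image into the visible-light image,
# i.e. [D1] = H * [D2]; four point pairs give an exact solution.
H = cv2.getPerspectiveTransform(D2, D1)

# Sanity check: re-project D2 and compare with D1 (error should be ~0).
reprojected = cv2.perspectiveTransform(D2.reshape(-1, 1, 2), H).reshape(-1, 2)
print(np.abs(reprojected - D1).max())
```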
S3: according to the body temperature characteristics of the infants, extracting a first infant area image from the real-time thermal infrared image;
in particular, in thermal infrared images, infants usually appear as bright areas because of their relatively high body temperature: the thermal infrared camera captures the thermal radiation of objects, so this feature can be used to distinguish the infant from the surrounding environment. Once the body temperature characteristics are identified, the region occupied by the infant is determined from the intensity and temperature distribution of the thermal infrared image. This region image typically contains the infant's body contour and features, and possibly pose information. After the first infant region image is extracted, further image processing such as noise removal, contour detection and segmentation may be performed to extract additional information about the infant for subsequent monitoring, analysis or detection tasks.
In one embodiment, referring to fig. 3, the step S3 includes:
s31: acquiring a temperature value of each point in the real-time thermal infrared image and a preset body temperature threshold value related to the infant;
s32: when the temperature value is larger than the body temperature threshold value, outputting an area image of the point corresponding to the temperature value as the first infant area image.
Specifically, the temperature value of each point in the real-time thermal infrared image is obtained; these values reflect the temperature distribution of the different areas in the image. A preset infant-related body temperature threshold is also obtained, usually set within the range of normal infant body temperature. The temperature values in the image are then analysed: when the temperature of an area exceeds the preset threshold, the area is marked as a potential infant region. The threshold is usually set near normal skin temperature, for example 34 degrees Celsius or more, and the marked region is extracted as the first infant region image.
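A sketch of S31–S32 under the assumption that the thermal frame is already available as a per-pixel temperature array in degrees Celsius (many thermal cameras deliver raw counts that must first be converted); the function name and default threshold are illustrative:

```python
import cv2
import numpy as np

def extract_first_infant_region(thermal_celsius: np.ndarray,
                                temp_threshold: float = 34.0):
    """Return (mask, bounding box) of the warm region treated as the infant.

    thermal_celsius: 2-D array of per-pixel temperatures in deg C (assumption).
    temp_threshold:  preset infant body-temperature threshold from S31.
    """
    mask = (thermal_celsius > temp_threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None  # no region warmer than the threshold
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)  # first infant region, thermal coords
    return mask, (x, y, w, h)
```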
S4: mapping the first infant area image into the real-time visible light image according to the homography matrix, and outputting a second infant area image in the real-time visible light image;
specifically, the homography matrix is a mathematical transformation matrix describing the mapping between two different images. In this step, the previously calculated homography matrix is used to map the first infant region image from the coordinate system of the thermal infrared image into the coordinate system of the real-time visible light image. The mapping is performed at pixel level, ensuring that corresponding points on the two images still match after mapping. Once mapping is complete, the result is output as the second infant region image in the real-time visible light image; this region image usually contains the pose and position of the infant in the visible light image and is obtained from the characteristics of the thermal infrared image. Through this step, the infant region in the thermal infrared image is aligned with the visible light image, facilitating subsequent monitoring and analysis.
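A sketch of S4: the bounding box of the first infant region (thermal coordinates) is projected through H into the visible-light image and the corresponding area is cropped as the second infant region; the helper names are illustrative:

```python
import cv2
import numpy as np

def map_region_to_visible(bbox, H, visible_frame):
    """Project a thermal-image bounding box through H and crop the visible image."""
    x, y, w, h = bbox
    corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
    projected = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H).reshape(-1, 2)

    # Axis-aligned box around the projected corners, clipped to the image.
    x0, y0 = np.floor(projected.min(axis=0)).astype(int)
    x1, y1 = np.ceil(projected.max(axis=0)).astype(int)
    hgt, wid = visible_frame.shape[:2]
    x0, y0 = max(x0, 0), max(y0, 0)
    x1, y1 = min(x1, wid), min(y1, hgt)

    second_infant_region = visible_frame[y0:y1, x0:x1]
    return second_infant_region, (x0, y0, x1 - x0, y1 - y0)
```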
S5: and detecting the infant head of the second infant region image, when the infant head is detected, detecting the infant head region image, and outputting the infant head-lying detection result.
Specifically, head detection is performed on the second infant region image with the aim of identifying the infant head area. The detection method at least comprises searching for the features or shape of the head using computer vision techniques, such as facial feature detection or a deep learning model. Once the head is successfully detected, prone-sleep detection is performed on the infant head region image to determine whether the infant is sleeping prone, and an output is generated according to the result. If the infant is detected to be in a prone posture, the prone-sleep detection result indicating the current sleeping state is output; this result can be used to raise an alarm or notify a guardian to ensure the infant's safety.
In one embodiment, referring to fig. 4, the step S5 includes:
s51: inputting the second infant area image into a pre-trained infant head detection model, and outputting a head detection result;
Specifically, the second infant region image is input into a pre-trained infant head detection model. The model is a deep learning model specially designed for detecting infant heads: it is trained on annotated night-vision infrared black-and-white images and adopts the YOLOv8s detection architecture. The model analyses the input second infant region image and outputs a head detection result that at least contains the position and size of the head in the image, so the location of the infant's head can be determined. Because the model is pre-trained on a large number of night-vision infrared images to recognise infant heads, this automated head detection efficiently extracts information about the infant's position, provides a basis for subsequent monitoring and prone-sleep detection, and helps ensure that the infant's sleeping posture and safety are effectively monitored.
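A sketch of running a YOLOv8s detector with the ultralytics package; the weights file name is hypothetical, standing in for the patent's model trained on night-vision infrared images, and the confidence threshold is an assumption:

```python
from ultralytics import YOLO

# "infant_head_yolov8s.pt" is a hypothetical weights file; the patent's
# trained model is not published.
model = YOLO("infant_head_yolov8s.pt")

def detect_head(second_infant_region, conf_threshold=0.5):
    """Return the highest-confidence head box (x1, y1, x2, y2) or None."""
    results = model(second_infant_region, conf=conf_threshold, verbose=False)
    boxes = results[0].boxes
    if len(boxes) == 0:
        return None  # head not detected -> S53 branch (first safety prompt)
    idx = int(boxes.conf.argmax())
    return boxes.xyxy[idx].tolist()
```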
S52: if the head detection result is that the infant head is detected, it indicates that the infant is in the nursing area and the infant head is not occluded; prone-sleep detection is then performed on the infant head region image, and the infant prone-sleep detection result is output;
In one embodiment, referring to fig. 5, the step S52 includes:
s521: if the infant is in the nursing area and the infant head is not shielded, acquiring a first infant head image in the second infant area image;
specifically, if the infant is in the nursing area and the infant head is not occluded, the features of the infant head, including at least shape, colour and texture, are identified and extracted through image processing, and the position of the head in the whole image is determined. A segmentation operation is usually required to ensure that the extracted image contains only the infant's head, and a pre-trained deep learning model, such as YOLO (You Only Look Once) or another convolutional-neural-network-based model, is used to detect the presence and position of the head. Accurately locating the infant's head provides the key head image for subsequent analysis of the infant's posture and sleep state.
S522: according to the homography matrix, mapping the first infant head image into the first infant area image, and outputting a second infant head image in the first infant area image;
In particular, using the homography matrix (applied in the inverse direction), the first infant head image is mapped from the coordinate system of the real-time visible light image back into the coordinate system of the thermal infrared image, ensuring that the head image and the first infant region image are aligned in the same coordinate system. This mapping guarantees an accurate location of the head image within the thermal region image for subsequent analysis and monitoring.
S523: and according to the body temperature characteristics in the head image of the second infant, carrying out groveling detection on the infant, and outputting the groveling detection result.
Specifically, by examining the body temperature distribution in the second infant head image, the infant's temperature condition is obtained, in particular whether the head shows abnormal hot spots or an abnormal distribution, and whether the infant is in a prone sleeping posture is judged from these body temperature characteristics; such detection generally also involves analysing the angle and position of the head. Monitoring the infant's sleeping posture in real time, especially the prone posture, helps ensure the infant's safety and comfort and is very important for preventing the risk of suffocation.
In one embodiment, referring to fig. 6, the step S523 includes:
s5231: converting the second infant head image into a corresponding gray level image, and marking a standard gray level value corresponding to the infant body temperature in the gray level image;
specifically, a temperature-to-gray mapping is first applied to the temperature information in the second infant head image. The mapping uses a linear relationship to convert temperature into a gray value, for example 40 degrees Celsius maps to gray value 180 and 20 degrees Celsius maps to gray value 60: the higher the temperature, the brighter the corresponding pixel in the gray image, and the lower the temperature, the darker the pixel. For better identification of heat sources, the gray value of an infrared heat-source point at temperature T is computed as G = 6*T - 60, which ensures that heat sources have a clear brightness difference in the gray image and are therefore easier to detect and analyse. To analyse human body temperature, especially around the normal value of 37 degrees Celsius, the standard gray value corresponding to infant body temperature is marked as G_stand = 6*37 - 60 = 162, which helps identify regions of the image close to human body temperature.
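The linear mapping G = 6·T − 60 from the text as a small helper; clipping to the 0–255 gray range is an added assumption so the result is a valid 8-bit image:

```python
import numpy as np

def temperature_to_gray(temp_celsius):
    """Linear temperature-to-gray mapping from the text: G = 6*T - 60.

    40 deg C -> 180, 20 deg C -> 60, 37 deg C -> 162 (the standard value G_stand).
    Clipping to [0, 255] is an assumption, not stated in the patent.
    """
    gray = 6.0 * np.asarray(temp_celsius, dtype=np.float32) - 60.0
    return np.clip(gray, 0, 255).astype(np.uint8)

G_STAND = int(temperature_to_gray(37.0))  # == 162
```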
S5232: according to the standard gray value, acquiring the proportion of pixels smaller than the standard gray value in the second infant head image;
in particular, since the face and forehead are usually the warmest parts of the head region and are close to body temperature, the gray values of the head region should be similar to the standard gray value G_stand when the infant sleeps supine. When the infant sleeps prone, however, the face is pressed against the bed surface; because the bed surface is a solid material, facial heat transfers to it more easily than to the air, so the overall temperature of the head region drops. The gray values of the head region then become significantly lower than the standard gray value because the prone posture changes the head temperature distribution, and the proportion of pixels smaller than the standard gray value in the second infant head image is acquired.
S5233: and obtaining the groveling detection result according to the pixel proportion and a preset proportion threshold value.
In an embodiment, referring to fig. 7, S5233 includes:
s52331: acquiring a first proportional threshold and a second proportional threshold which are preset and related to the sleeping posture of the infant, wherein the first proportional threshold is smaller than the second proportional threshold;
S52332: if the pixel proportion is smaller than the first proportion threshold, the temperature of the head area of the infant is considered to be lower than the body temperature, and the detection result is lying prone;
s52333: if the pixel proportion is larger than the second proportion threshold, the temperature of the head area of the infant is considered to be higher than the body temperature, and the detection result is positive sleep;
s52334: and if the pixel proportion is not smaller than the first proportion threshold value and not larger than the second proportion threshold value, the detection result is side sleep.
Specifically, the dimensions of the head region, i.e. its length L and width W, are specified so that the subsequent analysis is carried out within this image region. Within the infant head region, the pixels whose gray value Gray satisfies fabs(Gray - G_stand) <= 12 are counted as sum: these pixels have gray values close to the standard gray value G_stand and therefore correspond to areas at normal human body temperature. A first preset proportion threshold related to the sleeping posture, for example 0.15, and a second proportion threshold, for example 0.55, are obtained, and the sleeping posture is judged from the proportion sum/(L*W) of qualifying pixels: if sum/(L*W) > 0.55, the infant is judged to be sleeping supine, meaning that most pixels in the head region are close to the gray value of normal body temperature and the infant's face is turned upwards; if sum/(L*W) < 0.15, the infant is judged to be sleeping prone, meaning that only a few pixels in the head region are close to the gray value of normal body temperature and the face is pressed against the mattress; if sum/(L*W) lies between 0.15 and 0.55, the infant is judged to be sleeping on its side, in which case some, but not most, of the pixels in the head region are close to the gray value of normal body temperature. By analysing the gray values in the infant head region together with the number of qualifying pixels and the size of the region, the sleeping posture can be judged accurately. This is very useful for guardians, who can take appropriate measures based on the detected posture, such as correcting a prone posture, to ensure the infant's safety and comfort; combining temperature information with image analysis provides real-time feedback about the infant's sleeping position.
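A sketch of S5231–S52334 using the values from this embodiment (tolerance of 12 gray levels, thresholds 0.15 and 0.55); the gray image is assumed to come from the temperature-to-gray mapping above:

```python
import numpy as np

G_STAND = 162            # gray value of normal body temperature (37 deg C)
TOLERANCE = 12           # |Gray - G_stand| <= 12 counts as "body temperature"
FIRST_THRESHOLD = 0.15   # below  -> prone sleep
SECOND_THRESHOLD = 0.55  # above  -> supine sleep

def classify_sleep_posture(head_gray: np.ndarray) -> str:
    """Classify an L x W gray-scale head region as "prone", "supine" or "side"."""
    L, W = head_gray.shape[:2]
    near_body_temp = np.abs(head_gray.astype(np.int32) - G_STAND) <= TOLERANCE
    ratio = near_body_temp.sum() / float(L * W)

    if ratio < FIRST_THRESHOLD:
        return "prone"   # few body-temperature pixels: face against the mattress
    if ratio > SECOND_THRESHOLD:
        return "supine"  # most pixels near body temperature: face turned up
    return "side"
```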
S53: if the head detection result is that the infant head is not detected, it indicates that the infant is in the nursing area but the infant head is occluded, and a first safety prompt is sent to the user.
Specifically, if the head detection result is that the infant head is not detected, then, since the infrared image has already established that the infant is in the nursing area, the head is presumably blocked by a sheet, quilt, toy or other object, or may be buried in a pillow or mattress during sleep, which requires the guardian's particular attention. Although the head is not detected, the infant is still in the monitored area, so its overall safety must be ensured; a corresponding safety prompt is sent to the user to remind the guardian to check the bed and the surrounding environment and make sure the infant is not endangered by hazardous objects or coverings.
Example 2
In embodiment 1, if the head detection result is that the infant head is not detected, the infant is in the nursing area but the head is occluded, and a first safety prompt is sent to the user; likewise, after prone-sleep detection has been performed on the infant head region image and the prone-sleep detection result has been output, different types of safety prompts are set so that parents or guardians are reminded in time to check the infant's sleeping posture and can adjust it according to the infant's actual situation, reducing potential risk. Therefore, after S5, the method further comprises the following steps:
S61: acquiring target information related to an infant input by a user, wherein the target information at least comprises: age and feeding period of the infant;
s62: comparing the infant age with a preset age threshold, and turning off the thermal infrared camera if the infant age is smaller than the age threshold;
specifically, the infant's age is compared with a preset age threshold, and if the age is smaller than the threshold the thermal infrared camera is turned off. An infant's skin may be more sensitive to thermal infrared radiation, and particularly for very young infants such radiation may have a certain influence on the skin, so turning off the thermal infrared camera is a reasonable measure to ensure the infant's safety.
S63: acquiring the real-time detection time corresponding to the infant prone-sleep detection, and, when the real-time detection time falls after the infant feeding period and the difference between the real-time detection time and the timestamp of the end of feeding is smaller than a preset duration threshold, sending a second safety reminder to the user if the prone-sleep detection result is supine sleep or prone sleep;
Specifically, an infant is prone to spitting up after feeding, and a side-lying position makes it less likely that the vomit will block the respiratory tract. This helps avoid airway obstruction and reduces the associated risk; at the same time, the side-lying position promotes the normal flow of food in the infant's stomach, supports digestion, and reduces the risk of stomach contents flowing back into the esophagus. Therefore the real-time detection time corresponding to the prone-sleep detection is acquired, and when the detection time falls after the feeding period and the difference from the feeding-end timestamp is smaller than the preset duration threshold, a second safety reminder is sent to the user if the detection result is supine or prone sleep, reminding the user to place the infant in a side-lying position shortly after feeding.
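A sketch of the S63 time check; the duration threshold value and the reminder callback are illustrative assumptions, since the patent leaves them as presets:

```python
from datetime import datetime, timedelta

POST_FEEDING_WINDOW = timedelta(minutes=30)  # assumed preset duration threshold

def post_feeding_check(detection_time: datetime,
                       feeding_end: datetime,
                       posture: str,
                       send_reminder) -> None:
    """Send the second safety reminder if the infant is supine or prone
    shortly after feeding (S63)."""
    within_window = feeding_end <= detection_time <= feeding_end + POST_FEEDING_WINDOW
    if within_window and posture in ("supine", "prone"):
        send_reminder("Place the infant in a side-lying position after feeding.")
```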
S64: according to the infant age, if the infant is in the head-shape correction period and the prone-sleep detection result is side sleep, classifying the side-sleep posture using a pre-trained SVM classification model and sending a third safety prompt to the user according to the classification result.
Specifically, the head-shape correction period is the stage of growth during which the infant's head shape is still developing; in this period the head is more susceptible to external influences, so special attention is needed and maintaining an appropriate sleeping posture is very important for preventing problems such as head deformity. Side sleep refers to the infant lying on its side, and in certain situations, such as the head-shape correction period, the system pays particular attention to whether the infant stays on the same side for a long time. An SVM (Support Vector Machine) is a supervised learning algorithm that can be used for classification and regression tasks; here a pre-trained SVM classification model classifies the side-sleep posture into sleeping on the left side and sleeping on the right side. Based on the classification result, a third safety reminder is sent to the user, including a suggestion to alternate the infant's side-sleeping direction so that the head does not stay turned to one side for long and develops correctly during this special period.
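A sketch of how a pre-trained SVM could separate left-side from right-side sleeping; the feature extraction (a resized, flattened head image) and the label encoding are illustrative assumptions, since the patent does not specify the features:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def head_features(head_gray: np.ndarray) -> np.ndarray:
    """Illustrative feature vector: head image resized to 32x32 and flattened."""
    return cv2.resize(head_gray, (32, 32)).astype(np.float32).ravel() / 255.0

# Training on labelled side-sleep head images (0 = left side, 1 = right side);
# X_train / y_train stand in for such a dataset:
#   clf = SVC(kernel="rbf").fit(X_train, y_train)

def classify_side(clf: SVC, head_gray: np.ndarray) -> str:
    """Predict the side-sleeping direction with a fitted SVM classifier."""
    label = clf.predict(head_features(head_gray).reshape(1, -1))[0]
    return "left side" if label == 0 else "right side"
```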
Example 3
Referring to fig. 8, embodiment 3 of the present invention further provides an infant prone-sleep monitoring device based on multi-modal data fusion, the device comprising:
the image acquisition module is used for acquiring a real-time visible light image and a real-time thermal infrared image of the infant nursing area;
The calibration module is used for calibrating the real-time visible light image and the real-time thermal infrared image to obtain a homography matrix between the visible light image and the thermal infrared image;
the first infant image extraction module is used for extracting a first infant region image from the real-time thermal infrared image according to infant body temperature characteristics;
the image mapping module is used for mapping the first infant area image into the real-time visible light image according to the homography matrix and outputting a second infant area image in the real-time visible light image;
and the prone-sleep detection module is used for performing infant head detection on the second infant region image, performing prone-sleep detection on the infant head region image when the infant head is detected, and outputting an infant prone-sleep detection result.
Specifically, the infant prone-sleep monitoring device based on multi-modal data fusion provided by the embodiment of the invention comprises: an image acquisition module for acquiring a real-time visible light image and a real-time thermal infrared image of the infant nursing area; a calibration module for calibrating the two images to obtain a homography matrix between the visible light image and the thermal infrared image; a first infant image extraction module for extracting a first infant region image from the real-time thermal infrared image according to infant body temperature characteristics; an image mapping module for mapping the first infant region image into the real-time visible light image according to the homography matrix and outputting a second infant region image; and a prone-sleep detection module for performing infant head detection on the second infant region image, performing prone-sleep detection on the infant head region image when the head is detected, and outputting an infant prone-sleep detection result. The device combines the information of visible light and thermal infrared images in a multi-modal data fusion manner, which improves the accuracy of prone-sleep detection: the thermal infrared image accurately captures the infant's body temperature distribution, and the temperature distribution of the head distinguishes prone sleep from other sleeping postures; fusing the visible light image with the thermal infrared image provides more comprehensive information, since the visible light image captures the infant's external form and posture while the thermal infrared image provides body temperature information, and combining the two reduces the false alarm rate and improves reliability; performing head detection followed by prone-sleep detection on the second infant region image forms a cascaded detection process that helps eliminate interference from other body parts or objects and improves specificity. Overall, by combining multi-modal data, homography-matrix mapping, head detection and prone-sleep detection, the accuracy and reliability of infant prone-sleep detection are improved, thereby increasing infant safety and reducing the risk of suffocation.
Example 3
In addition, the infant groveling sleep monitoring method based on multi-modal data fusion according to Embodiment 1 of the present invention, described in connection with Fig. 1, may be implemented by an electronic device. Fig. 9 shows a schematic hardware structure of an electronic device according to Embodiment 3 of the present invention.
The electronic device may include a processor and memory storing computer program instructions.
In particular, the processor may comprise a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present invention.
The memory may include mass storage for data or instructions. By way of example, and not limitation, the memory may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of the foregoing. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is a non-volatile solid-state memory. In a particular embodiment, the memory includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor reads and executes the computer program instructions stored in the memory to implement any of the infant groveling sleep monitoring methods based on multi-modal data fusion in the above embodiments.
In one example, the electronic device may also include a communication interface and a bus. As shown in Fig. 9, the processor, the memory, and the communication interface are connected by the bus and communicate with one another.
The communication interface is mainly used for enabling communication among the modules, devices, units and/or equipment in the embodiments of the invention.
The bus includes hardware, software, or both that couple the components of the device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of the above. Where appropriate, the bus may include one or more buses. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
Example 4
In addition, in combination with the infant groveling sleep monitoring method based on multi-modal data fusion in Embodiment 1 above, Embodiment 4 of the present invention may also provide a computer-readable storage medium. The computer-readable storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the infant groveling sleep monitoring methods based on multi-modal data fusion in the above embodiments.
In summary, the embodiments of the present invention provide a method, a device and equipment for infant groveling sleep monitoring based on multi-modal data fusion.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
The functional blocks shown in the above structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, function cards, and the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, Radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.
Claims (10)
1. An infant groveling sleep monitoring method based on multi-modal data fusion, characterized by comprising the following steps:
S1: acquiring a real-time visible light image and a real-time thermal infrared image of an infant nursing area;
S2: calibrating the real-time visible light image and the real-time thermal infrared image to obtain a homography matrix between the visible light image and the thermal infrared image;
S3: extracting a first infant area image from the real-time thermal infrared image according to the body temperature characteristics of the infant;
S4: mapping the first infant area image into the real-time visible light image according to the homography matrix, and outputting a second infant area image in the real-time visible light image;
S5: performing infant head detection on the second infant area image, performing groveling sleep detection on the infant head area image when the infant head is detected, and outputting an infant groveling sleep detection result.
2. The infant groveling sleep monitoring method based on multi-modal data fusion according to claim 1, wherein step S2 comprises:
S21: acquiring first vertex position information of a preset reference object according to color differences in the real-time visible light image;
S22: acquiring second vertex position information of the preset reference object according to temperature information in the real-time thermal infrared image, in combination with an edge detection algorithm and a right-angle detection algorithm;
S23: calculating the homography matrix from the first vertex position information and the second vertex position information, and outputting the homography matrix.
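As a non-limiting sketch of S21-S23, assuming the preset reference object is a quadrilateral whose four corners have already been located in both images and that OpenCV is available; the point ordering and the call to cv2.findHomography are implementation assumptions:

```python
import cv2
import numpy as np


def compute_homography(thermal_corners, visible_corners):
    """Estimate the 3x3 homography mapping thermal-image points to visible-image points.

    thermal_corners: (4, 2) corners of the reference object in the thermal image
                     (the second vertex position information of S22).
    visible_corners: the same corners, in the same order, in the visible image
                     (the first vertex position information of S21).
    """
    src = np.asarray(thermal_corners, dtype=np.float32)
    dst = np.asarray(visible_corners, dtype=np.float32)
    H, _ = cv2.findHomography(src, dst)  # four exact correspondences suffice
    return H
```

Oriented this way (thermal as source, visible as destination), a point in the thermal image maps into the visible image as required by step S4 of claim 1; when the reverse mapping is needed, it can be obtained from the matrix inverse.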
3. The infant groveling sleep monitoring method based on multi-modal data fusion according to claim 1, wherein step S3 comprises:
S31: acquiring the temperature value of each point in the real-time thermal infrared image and a preset body temperature threshold related to the infant;
S32: when a temperature value is greater than the body temperature threshold, outputting the area image formed by the corresponding points as the first infant area image.
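A non-limiting sketch of S31-S32, assuming the thermal frame is available as a per-pixel temperature map in degrees Celsius; the 35 °C default and the choice of the largest connected component above the threshold are illustrative assumptions, not claim limitations:

```python
import cv2
import numpy as np


def extract_first_infant_area(temperature_map: np.ndarray, body_temp_threshold: float = 35.0):
    """Return (x, y, w, h) of the warmest connected region, or None if nothing exceeds the threshold."""
    mask = (temperature_map > body_temp_threshold).astype(np.uint8)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)  # default 8-connectivity
    if num_labels < 2:  # label 0 is the background
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = stats[largest, :4]
    return int(x), int(y), int(w), int(h)
```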
4. The infant groveling sleep monitoring method based on multi-modal data fusion according to claim 1, wherein step S5 comprises:
S51: inputting the second infant area image into a pre-trained infant head detection model, and outputting a head detection result;
S52: if the head detection result is that the infant head is detected, determining that the infant is in the nursing area and the infant head is not occluded, performing groveling sleep detection on the infant head area image, and outputting the infant groveling sleep detection result;
S53: if the head detection result is that the infant head is not detected, determining that the infant is in the nursing area but the infant head is occluded, and sending a first safety prompt to the user.
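The branching in S51-S53 can be sketched as follows, assuming the pre-trained infant head detection model is wrapped as a callable that returns either a bounding box or None; the detector itself and the event names returned here are assumptions of this sketch:

```python
from typing import Callable, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)


def handle_head_detection(second_infant_area, head_detector: Callable[[object], Optional[Box]]):
    """Route the frame according to the head detection result (S51-S53)."""
    head_box = head_detector(second_infant_area)  # S51: run the pre-trained head detector
    if head_box is None:
        # S53: the infant is in the nursing area but the head is occluded.
        return {"event": "first_safety_prompt", "head_box": None}
    # S52: head visible, continue with groveling sleep detection on the head area.
    return {"event": "run_groveling_detection", "head_box": head_box}
```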
5. The infant groveling sleep monitoring method based on multi-modal data fusion according to claim 4, wherein S52 comprises:
S521: if the infant is in the nursing area and the infant head is not occluded, acquiring a first infant head image in the second infant area image;
S522: mapping the first infant head image into the first infant area image according to the homography matrix, and outputting a second infant head image in the first infant area image;
S523: performing groveling sleep detection on the infant according to the body temperature characteristics in the second infant head image, and outputting the groveling sleep detection result.
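S521-S522 can be sketched by transforming the head box found in the visible-light image back into the thermal image; using the matrix inverse of the thermal-to-visible homography (np.linalg.inv(H)) is an assumption of this sketch rather than a claim limitation:

```python
import cv2
import numpy as np


def map_head_box_to_thermal(head_box_visible, H):
    """head_box_visible: (x, y, w, h) in the visible image; H maps thermal -> visible."""
    x, y, w, h = head_box_visible
    corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(corners, np.linalg.inv(H)).reshape(-1, 2)
    x0, y0 = mapped.min(axis=0)
    x1, y1 = mapped.max(axis=0)
    # Axis-aligned box enclosing the mapped corners, in thermal-image coordinates.
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)
```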
6. The infant groveling sleep monitoring method based on multi-modal data fusion according to claim 5, wherein S523 comprises:
S5231: converting the second infant head image into a corresponding grayscale image, and marking, in the grayscale image, a standard gray value corresponding to the infant body temperature;
S5232: acquiring the proportion of pixels in the second infant head image whose gray values are smaller than the standard gray value;
S5233: obtaining the groveling sleep detection result according to the pixel proportion and a preset proportion threshold.
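A minimal sketch of S5231-S5232, assuming the second infant head image is available as (or convertible to) an 8-bit grayscale image and that the standard gray value has already been determined; the default value of 180 is purely illustrative:

```python
import cv2
import numpy as np


def below_standard_pixel_ratio(second_head_image: np.ndarray, standard_gray: int = 180) -> float:
    """Proportion of head-image pixels whose gray value is below the standard gray value."""
    if second_head_image.ndim == 3:
        gray = cv2.cvtColor(second_head_image, cv2.COLOR_BGR2GRAY)  # S5231: convert to grayscale
    else:
        gray = second_head_image
    return float(np.mean(gray < standard_gray))                      # S5232: pixel proportion
```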
7. The infant groveling sleep monitoring method based on multi-modal data fusion according to claim 6, wherein S5233 comprises:
S52331: acquiring a preset first proportion threshold and a preset second proportion threshold related to the infant sleeping posture, wherein the first proportion threshold is smaller than the second proportion threshold;
S52332: if the pixel proportion is smaller than the first proportion threshold, considering the temperature of the infant head area to be lower than the body temperature, and outputting prone sleeping (groveling) as the detection result;
S52333: if the pixel proportion is greater than the second proportion threshold, considering the temperature of the infant head area to be higher than the body temperature, and outputting supine (face-up) sleeping as the detection result;
S52334: if the pixel proportion is not smaller than the first proportion threshold and not greater than the second proportion threshold, outputting side sleeping as the detection result.
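For completeness, S52331-S52334 reduce to a two-threshold comparison; the threshold values below are placeholders, and the posture labels follow the mapping stated in this claim:

```python
def classify_posture(pixel_ratio: float, first_threshold: float = 0.3, second_threshold: float = 0.7) -> str:
    """Map the below-standard-gray pixel ratio to a sleeping posture (first_threshold < second_threshold)."""
    if pixel_ratio < first_threshold:
        return "prone"    # S52332: head area considered cooler than body temperature
    if pixel_ratio > second_threshold:
        return "supine"   # S52333: head area considered warmer than body temperature
    return "side"         # S52334: ratio between the two thresholds
```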
8. An infant groveling sleep monitoring device based on multi-modal data fusion, characterized in that the device comprises:
the image acquisition module is used for acquiring a real-time visible light image and a real-time thermal infrared image of the infant nursing area;
the calibration module is used for calibrating the real-time visible light image and the real-time thermal infrared image to obtain a homography matrix between the visible light image and the thermal infrared image;
the first infant image extraction module is used for extracting a first infant area image from the real-time thermal infrared image according to infant body temperature characteristics;
the image mapping module is used for mapping the first infant area image into the real-time visible light image according to the homography matrix and outputting a second infant area image in the real-time visible light image;
and the groveling sleep detection module is used for carrying out infant head detection on the second infant area image, carrying out groveling sleep detection on the infant head area image when the infant head is detected, and outputting an infant groveling sleep detection result.
9. An electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory which, when executed by the at least one processor, implement the method of any one of claims 1-7.
10. A storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311683971.6A CN117690159B (en) | 2023-12-07 | 2023-12-07 | Infant groveling and sleeping monitoring method, device and equipment based on multi-mode data fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117690159A true CN117690159A (en) | 2024-03-12 |
CN117690159B CN117690159B (en) | 2024-06-11 |
Family
ID=90129572
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311683971.6A Active CN117690159B (en) | 2023-12-07 | 2023-12-07 | Infant groveling and sleeping monitoring method, device and equipment based on multi-mode data fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117690159B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014012070A1 (en) * | 2012-07-12 | 2014-01-16 | Flir Systems, Inc. | Infant monitoring systems and methods using thermal imaging |
US20140313309A1 (en) * | 2012-10-05 | 2014-10-23 | Panasonic Corporation | Drowsiness estimation device, drowsiness estimation method, and computer-readable non-transient recording medium |
CN104091408A (en) * | 2014-05-09 | 2014-10-08 | 郑州轻工业学院 | Infant sleeping posture intelligent identification method and device based on thermal infrared imaging |
JP2018051288A (en) * | 2016-09-23 | 2018-04-05 | パナソニックIpマネジメント株式会社 | Pulse wave measurement device and pulse wave measurement method |
CN109691987A (en) * | 2018-12-25 | 2019-04-30 | 合肥镭智光电科技有限公司 | A kind of sleep detection system and detection method |
CN109840493A (en) * | 2019-01-27 | 2019-06-04 | 武汉星巡智能科技有限公司 | Infantal sleeping condition detection method, device and computer readable storage medium |
US20200265602A1 (en) * | 2019-02-15 | 2020-08-20 | Northeastern University | Methods and systems for in-bed pose estimation |
CN115641603A (en) * | 2021-07-17 | 2023-01-24 | 深圳市起点人工智能科技有限公司 | Detection device and detection method for child lying prone to sleep |
WO2023005468A1 (en) * | 2021-07-30 | 2023-02-02 | 上海商汤智能科技有限公司 | Respiratory rate measurement method and apparatus, storage medium, and electronic device |
CN114216564A (en) * | 2021-11-26 | 2022-03-22 | 杭州七格智联科技有限公司 | Intelligent infant body temperature detection method based on head multi-zone positioning |
CN114926957A (en) * | 2022-04-13 | 2022-08-19 | 西安理工大学 | Infant monitoring system and monitoring method based on smart home |
CN218220183U (en) * | 2022-07-18 | 2023-01-06 | 深圳市千隼科技有限公司 | Non-contact sleep monitoring device and system |
CN117173784A (en) * | 2023-08-30 | 2023-12-05 | 武汉星巡智能科技有限公司 | Infant turning-over action detection method, device, equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
ZHANG Sheng et al.: "Moving object detection via thermal infrared and visible light video fusion for robust visual surveillance", Infrared Technology (红外技术), no. 12, 20 December 2013 (2013-12-20), pages 773-779 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117268559A (en) * | 2023-10-25 | 2023-12-22 | 武汉星巡智能科技有限公司 | Multi-mode infant abnormal body temperature detection method, device, equipment and medium |
CN117268559B (en) * | 2023-10-25 | 2024-05-07 | 武汉星巡智能科技有限公司 | Multi-mode infant abnormal body temperature detection method, device, equipment and medium |
CN118135614A (en) * | 2024-05-10 | 2024-06-04 | 宁波星巡智能科技有限公司 | Infant sleeping posture identification method, device and equipment based on multi-mode data fusion |
CN118135614B (en) * | 2024-05-10 | 2024-08-30 | 宁波星巡智能科技有限公司 | Infant sleeping posture identification method, device and equipment based on multi-mode data fusion |
Also Published As
Publication number | Publication date |
---|---|
CN117690159B (en) | 2024-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117690159B (en) | Infant groveling and sleeping monitoring method, device and equipment based on multi-mode data fusion | |
CN104091408B (en) | Infant sleeping posture intelligent identification method and device based on thermal infrared imaging | |
JP2007534032A (en) | How to monitor a sleeping infant | |
CN107958572B (en) | Baby monitoring system | |
CN117268559B (en) | Multi-mode infant abnormal body temperature detection method, device, equipment and medium | |
WO2017098265A1 (en) | Method and apparatus for monitoring | |
US10463294B2 (en) | Video monitoring to detect sleep apnea | |
CN112712020A (en) | Sleep monitoring method, device and system | |
WO2019104108A1 (en) | Respiration monitor | |
CN113662530B (en) | Pig physiological growth state monitoring and early warning method | |
WO2019003859A1 (en) | Monitoring system, control method therefor, and program | |
CN113963424B (en) | Infant asphyxia or sudden death early warning method based on single-order face positioning algorithm | |
CN114333047A (en) | Human body tumbling detection device and method based on double-light perception information fusion | |
CN117373110A (en) | Visible light-thermal infrared imaging infant behavior recognition method, device and equipment | |
CN117173784B (en) | Infant turning-over action detection method, device, equipment and storage medium | |
CN113384386B (en) | Infant quilt kicking intelligent detection method and device, electronic equipment and medium | |
CN113408477A (en) | Infant sleep monitoring system, method and equipment | |
JP6822326B2 (en) | Watching support system and its control method | |
CN116386671B (en) | Infant crying type identification method, device, equipment and storage medium | |
CN208092911U (en) | A kind of baby monitoring systems | |
CN118135614B (en) | Infant sleeping posture identification method, device and equipment based on multi-mode data fusion | |
CN113505742B (en) | Infant sleep tracking and monitoring method, device, equipment and storage medium | |
KR102381204B1 (en) | Apparatus and method for monitoring breathing using thermal image | |
CN115641603A (en) | Detection device and detection method for child lying prone to sleep | |
CN114360207B (en) | Child quilt kicking detection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |