CN117173784B - Infant turning-over action detection method, device, equipment and storage medium - Google Patents

Infant turning-over action detection method, device, equipment and storage medium Download PDF

Info

Publication number
CN117173784B
Authority
CN
China
Prior art keywords
infant
face
image
outputting
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311108531.8A
Other languages
Chinese (zh)
Other versions
CN117173784A (en)
Inventor
陈辉
杜沛力
张智
张青军
熊章
雷奇文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Xingxun Intelligent Technology Co ltd
Original Assignee
Wuhan Xingxun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Xingxun Intelligent Technology Co ltd filed Critical Wuhan Xingxun Intelligent Technology Co ltd
Priority to CN202311108531.8A priority Critical patent/CN117173784B/en
Publication of CN117173784A publication Critical patent/CN117173784A/en
Application granted granted Critical
Publication of CN117173784B publication Critical patent/CN117173784B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of intelligent nursing, solves the problem that the prior art cannot accurately detect the turning-over action of an infant in real time, and provides a method, a device, equipment and a storage medium for detecting the turning-over action of an infant. The method comprises the following steps: acquiring a real-time video stream in an infant care scene and decomposing the real-time video stream into multi-frame images; performing infant face shielding judgment on each image and identifying the infant face shielding condition; when the face of the infant is not shielded, performing facial analysis on the infant face and outputting the infant face orientation; when the infant face orientation is a frontal face, detecting the turning-over action of the infant and outputting a detection result. The invention monitors the turning-over action of the infant in real time, helping parents or guardians learn the infant's state in time.

Description

Infant turning-over action detection method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent nursing, in particular to a method, a device, equipment and a storage medium for detecting infant turning-over actions.
Background
Intelligent nursing of infants is favored by many young parents, and providing accurate intelligent nursing is an important factor in the market competitiveness of AI intelligent nursing equipment.
In existing intelligent nursing for infant sleeping scenes, a head detection algorithm can only detect whether a human head is present; after a head is detected, it is distinguished whether the detected head belongs to an adult or an infant, and whether the infant has entered sleep is judged by checking whether the detected infant head remains motionless for a long time. This scheme is simple and considers only head detection. Because the head detection algorithm focuses only on the presence of a head, it fails when the infant's face is covered or the head is blocked during sleep. Moreover, an infant whose head does not move may still be awake, and in this situation the scheme cannot accurately judge whether the infant is asleep.
The prior Chinese patent CN113408477A discloses a system, method and apparatus for monitoring the sleep of infants, the method comprising: S1, the infant enters a set sleep detection area, and multi-frame sleep image information of the infant is continuously acquired in real time; S2, the acquired sleep image information is preprocessed; S3, gesture recognition analysis is performed on the limb actions of the infant based on the preprocessed sleep image information, and characteristic parameters of the infant's sleeping posture are obtained in real time; S4, based on the identified characteristic parameters of the sleeping posture, matching is performed against preset sleeping behavior characteristic parameters to obtain the sleeping behavior information with the highest matching degree; S5, whether abnormal sleep behavior exists is judged; if so, S6 is executed, otherwise S7 is executed directly; S6, an alarm notification is sent; S7, a sleep curve is obtained based on the sleep behavior information within a specific time, and a sleep quality report is generated based on a set period. However, the above patent merely detects abnormal sleep behavior, and its detection of the turning-over motion among abnormal behaviors is not accurate.
Therefore, how to accurately detect the turning-over action of the infant in real time is a problem to be solved.
Disclosure of Invention
In view of the above, the invention provides a method, a device, equipment and a storage medium for detecting the turning-over action of an infant, which are used for solving the problem that the turning-over action of the infant cannot be accurately detected in real time in the prior art.
The technical scheme adopted by the invention is as follows:
in a first aspect, the present invention provides a method for detecting a turning motion of an infant, which is characterized in that the method includes:
s1: acquiring a real-time video stream in an infant care scene, and decomposing the real-time video stream into multi-frame images;
s2: carrying out infant face shielding judgment on each image, and identifying infant face shielding conditions;
s3: when the face of the infant is not shielded, carrying out face analysis on the face of the infant, and outputting the face orientation of the infant;
S4: when the face orientation of the infant is a frontal face, the infant turning-over action is detected, and a detection result is output.
Preferably, the S2 includes:
s21: detecting the head of the infant on each image to obtain an image of the head of the infant;
S22: detecting the infant head image, and when a target image of the infant face is detected, tracking the head of the infant in the target image, wherein the infant face comprises a front face and a side face;
s23: and detecting the infant face of the tracked infant head, and outputting the shielding condition of the infant face.
Preferably, the S23 includes:
s231: if the tracked infant heads all have infant faces, the infant faces are not shielded;
S232, if the tracked infant head does not have an infant face, acquiring a current frame image of the infant face, and detecting the infant face from a preset frame image after the current frame image;
S233: if the number of frames of the images of the infant face in the preset frame images is larger than a preset frame number threshold, the infant face is recognized as not being blocked;
S234: if the infant face does not appear in the preset frame images or the image frame number of the infant face is not larger than the preset frame number threshold value, the infant face is recognized to be blocked.
Preferably, the S3 includes:
s31, acquiring training image data containing infant faces, extracting features of the training image data, and outputting key feature information of face orientation;
s32: inputting the key characteristic information into a deep learning network for training, and outputting a face analysis model;
s33: when the fact that the infant face appears on the head of the tracked infant or the number of frames of images of the infant face appearing in the preset frame images is larger than a preset frame number threshold is recognized, obtaining an infant face image corresponding to the infant face which is not shielded;
s34, inputting the infant face image into the face analysis model, and outputting the infant face orientation.
Preferably, before the step S4, the method further includes:
s401: if the infant face does not appear in the preset frame images or the image frame number of the infant face is not larger than a preset frame number threshold value, the infant face is recognized to be blocked;
S402: inputting a corresponding image of the blocked infant face into a pre-trained target detection model, and detecting preset face key points, wherein the face key points comprise mouth and nose key points;
s403: and outputting a safety prompt when the fact that the face key point does not appear is detected.
Preferably, the S4 includes:
s41: according to the infant face orientation, when the infant face orientation is a front face, acquiring target area position information, wherein the target area comprises: an infant body area and a shelter area;
S42: according to the target area position information, performing motion detection on the target area, and outputting target area motion information;
S43: and judging the face directions of the infants and the movement information of the target area, and outputting the detection result as the infant turning-over action when the face directions of the infants are changed and the target area moves.
Preferably, the S41 includes:
S411: acquiring an infant frontal image according to the face orientation of the infant corresponding to the continuous multi-frame images;
S412: inputting the infant frontal face image into a pre-trained infant face key point detection model, and outputting infant face key point positions;
S413: acquiring a plurality of preset infant age intervals and target area proportions corresponding to the infant age intervals, comparing the current infant age with the infant age intervals, and outputting real-time proportions corresponding to the current infant;
S414: and outputting the position information of the target area according to the position of the infant face key point and the real-time proportion.
In a second aspect, the present invention provides a device for detecting a turning-over motion of an infant, comprising:
The image acquisition module is used for acquiring a real-time video stream in an infant care scene and decomposing the real-time video stream into multi-frame images;
the face shielding judging module is used for carrying out infant face shielding judgment on each image and identifying infant face shielding conditions;
The face orientation acquisition module is used for carrying out face analysis on the infant face and outputting the infant face orientation when the infant face is not shielded;
The turn-over detection module is used for detecting the turn-over action of the infant when the face orientation of the infant is a frontal face and outputting a detection result.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: at least one processor, at least one memory and computer program instructions stored in the memory, which when executed by the processor, implement the method as in the first aspect of the embodiments described above.
In a fourth aspect, embodiments of the present invention also provide a storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect of the embodiments described above.
In summary, the beneficial effects of the invention are as follows:
The invention provides a method, a device, equipment and a storage medium for detecting the turning-over action of an infant. The method comprises: acquiring a real-time video stream in an infant care scene and decomposing the real-time video stream into multi-frame images; performing infant face shielding judgment on each image and identifying the infant face shielding condition; when the face of the infant is not shielded, performing facial analysis on the infant face and outputting the infant face orientation; when the infant face orientation is a frontal face, detecting the infant turning-over action and outputting a detection result. By acquiring and processing the real-time video stream, the invention monitors the sleeping condition and actions of the infant in real time, helping parents or guardians learn the infant's state promptly. By judging face shielding in each image, it can be determined whether an object is covering the infant's face, which is very important for monitoring respiration and changes in facial expression. If the infant's face is not shielded, further facial analysis is performed to determine the face orientation; when the orientation is frontal, the system detects the infant's turn-over motion, which provides parents or guardians with information about the infant's sleep quality and safety. If the infant stays in the same posture or position for a long period, appropriate measures may be needed, such as changing the infant's sleeping position or posture, to reduce the potential risk of choking or discomfort.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments are briefly described below. Other drawings may be obtained from these drawings by a person skilled in the art without inventive effort, and such drawings also fall within the scope of the present invention.
FIG. 1 is a schematic flow chart of the whole operation of the method for detecting the turning-over action of the infant in the embodiment 1 of the invention;
FIG. 2 is a flow chart of infant face shielding judgment for each of the images in embodiment 1 of the present invention;
FIG. 3 is a flow chart of infant face detection for the head of the tracked infant in embodiment 1 of the present invention;
FIG. 4 is a flow chart of facial analysis of the infant's face in embodiment 1 of the present invention;
FIG. 5 is a flow chart of the safety reminding when the face of the infant is recognized to be blocked in the embodiment 1 of the invention;
FIG. 6 is a flow chart of generating sleep quality reports for infants in embodiment 1 of the invention;
Fig. 7 is a flow chart of determining the position information of the trunk of the infant in embodiment 1 of the present invention;
FIG. 8 is a block diagram showing the structure of a device for detecting the turning-over motion of an infant in embodiment 2 of the present invention;
fig. 9 is a schematic structural diagram of an electronic device in embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. In the description of the present application, terms such as "center," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," and "outer" indicate orientations or positional relationships based on those shown in the drawings, merely to facilitate and simplify the description; they do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. The embodiments of the present application and the features of the embodiments may be combined with each other provided they do not conflict, and all such combinations fall within the protection scope of the present application.
Example 1
Referring to fig. 1, embodiment 1 of the invention discloses a method for detecting a turning-over action of an infant, which comprises the following steps:
s1: acquiring a real-time video stream in an infant care scene, and decomposing the real-time video stream into multi-frame images;
Specifically, a real-time video stream captured by the monitoring equipment is acquired and decoded with a video codec library: the video data are decoded and decompressed, and the decompressed frames are converted into an image format such as JPEG or PNG. The decomposed multi-frame images can be used to identify the infant's sleeping state, posture and sleeping process and to monitor the infant's safety. Meanwhile, by performing time-series analysis on the multi-frame images, overall characteristics of the infant's sleeping process can be obtained, such as sleep duration, sleep depth and the number of mid-sleep awakenings.
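The following is a minimal sketch of this frame-extraction step using OpenCV; the RTSP URL, the frame stride and the JPEG output are illustrative assumptions rather than part of the patented method.

```python
import cv2

def stream_to_frames(stream_url: str, stride: int = 5):
    """Yield every `stride`-th decoded frame (BGR ndarray) from a video stream."""
    cap = cv2.VideoCapture(stream_url)
    idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:               # stream ended or dropped
                break
            if idx % stride == 0:
                yield frame
            idx += 1
    finally:
        cap.release()

# Example: save a sample of frames as JPEG for later analysis (URL is hypothetical)
for i, frame in enumerate(stream_to_frames("rtsp://camera.local/stream", stride=10)):
    cv2.imwrite(f"frame_{i:06d}.jpg", frame)
    if i >= 99:
        break
```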
S2: carrying out infant face shielding judgment on each image, and identifying infant face shielding conditions;
Specifically, multiple frames of images decomposed by the real-time video stream are obtained, infant face shielding judgment is carried out on each image, and corresponding protection measures can be selected to be adopted according to the face shielding judgment result, for example, when the infant face is judged to be shielded, a safety alarm is sent to parents so as to ensure the safety of the infant.
In one embodiment, referring to fig. 2, the step S2 includes:
s21: detecting the head of the infant on each image to obtain an image of the head of the infant;
Specifically, a large first image data set containing infant heads is collected, and the infant heads in the collected images are annotated. The annotated image data are preprocessed, including scaling, cropping and normalization of the images, to improve the effect of subsequent processing. A deep learning model suited to the infant head detection task is selected, for example a target detection model based on a convolutional neural network such as Faster R-CNN, YOLO or SSD. The preprocessed image data set is used to train the selected target detection model; during training, the infant head positions and features serve as labels for supervised learning, so that the model learns head features and positions. The trained model is evaluated with a reserved portion of the image data set, and indexes such as detection accuracy and recall are calculated to improve the accuracy and robustness of the model. The trained target detection model is output as the infant head detection model. Each frame of the original images decomposed from the real-time video stream is input into the infant head detection model, which outputs infant head position information; the infant head image is then cropped from the frame according to this position information.
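As an illustration of this step, the sketch below assumes a torchvision Faster R-CNN (one of the candidate detectors named above) fine-tuned on annotated infant-head data; the checkpoint name "infant_head.pth" and the 0.5 score threshold are hypothetical.

```python
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Hypothetical checkpoint produced by fine-tuning on the annotated infant-head data set
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("infant_head.pth", map_location="cpu"))
model.eval()

def detect_head_crops(image_rgb: np.ndarray, score_thr: float = 0.5):
    """Return cropped infant-head regions from one H x W x 3 RGB frame."""
    with torch.no_grad():
        out = model([to_tensor(image_rgb)])[0]          # dict with boxes, labels, scores
    crops = []
    for box, score in zip(out["boxes"], out["scores"]):
        if score >= score_thr:
            x1, y1, x2, y2 = box.int().tolist()
            crops.append(image_rgb[y1:y2, x1:x2])       # crop by the predicted box
    return crops
```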
S22: detecting the infant head image, and when a target image of the infant face is detected, tracking the head of the infant in the target image, wherein the infant face comprises a front face and a side face;
Specifically, a second image data set containing infant frontal faces and side faces is obtained, for example from publicly available infant-related image data sets, and the frontal faces and side faces in the second image data set are annotated. The annotated images are preprocessed, including scaling, cropping and normalization, to improve the effect of subsequent processing. A deep-learning-based method is adopted, such as a face key point detection model based on a convolutional neural network, a common example being HRNet. The preprocessed second image data set is used to train the face key point detection model based on the HRNet structure; during training, the key points of the infant frontal face and side face serve as labels for supervised learning, so that the model learns the features and positions of the infant frontal face and side face. The trained model is evaluated with a reserved portion of the second image data set to improve the accuracy and robustness of its key point detection, and the evaluated model is output as the infant face detection model. Each infant head image is input into the infant face detection model, which outputs target images containing the infant frontal face or side face. The infant head in the target images is then tracked with an IoU tracking method, a common target tracking method: the IoU value between the infant head region in the current frame and the infant head region in the previous frame is calculated to obtain the head position in each frame, thereby realizing tracking of the infant head.
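The IoU tracking step can be sketched as follows; this is a simplified single-target version, and the 0.3 association threshold is an assumption.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def track_head(prev_box, current_boxes, iou_thr=0.3):
    """Return the current-frame head box that best continues prev_box, or None."""
    best, best_iou = None, iou_thr
    for box in current_boxes:
        value = iou(prev_box, box)
        if value >= best_iou:
            best, best_iou = box, value
    return best
```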
S23: and detecting the infant face of the tracked infant head, and outputting the shielding condition of the infant face.
Specifically, a continuous sequence of image frames relating to the tracked infant head is obtained, and infant face detection is performed on the sequence by inputting it into the trained infant face detection model, which processes the sequence and outputs a detection result. The shielding condition of the infant face on the tracked head is judged from this detection result, and a safety alarm is sent to the parents when the infant face is judged to be shielded, so as to ensure the safety of the infant.
In one embodiment, referring to fig. 3, the step S23 includes:
s231: if the tracked infant heads all have infant faces, the infant faces are not shielded;
Specifically, if the infant face detection model detects an infant frontal face or side face in every frame of the image frame sequence related to the tracked infant head, the infant face is considered not to be shielded at this time. The infant head in the image frame sequence fed to the infant face detection model continues to be tracked, so that falsely detected heads in the sequence do not influence the judgment of the face shielding condition.
S232, if the tracked infant head does not have an infant face, acquiring a current frame image of the infant face, and detecting the infant face from a preset frame image after the current frame image;
Specifically, when the infant face detection model processes the image frame sequence related to the tracked infant head and the first frame in which the infant face disappears is detected, that frame is taken as the current frame image, and a preset number of frames after it are input into the infant face detection model for processing. The number of preset frames is adjusted according to the actual application scene and the user's actual requirements: the larger the number of preset frames, the higher the accuracy of face shielding recognition but the lower the recognition efficiency; the smaller the number of preset frames, the lower the accuracy of face shielding recognition but the higher the recognition efficiency.
S233: if the number of frames of the images of the infant face in the preset frame images is larger than a preset frame number threshold, the infant face is recognized as not being blocked;
Specifically, a preset frame number threshold is obtained, for example 10 frames. If, after the preset frames are input into the infant face detection model for processing, the number of frames in which the infant frontal face or side face is recognized is greater than 10, the infant face is considered not to be shielded at this time. The face may disappear briefly and reappear because of actions such as the infant rolling over; such a transient situation does not constitute real shielding and the infant is not in danger of choking, so recognizing it as shielding would trigger a false alarm and degrade the user's nursing experience. By requiring the number of frames containing the infant face to exceed the preset frame number threshold before declaring the face unshielded, false alarms are avoided and the user's nursing experience is improved.
S234: if the infant face does not appear in the preset frame images or the image frame number of the infant face is not larger than the preset frame number threshold value, the infant face is recognized to be blocked.
Specifically, the preset frames after the first frame image are input into the infant face detection model for processing. If the number of frames in which the infant frontal face or side face appears is not more than 10, or the frontal face or side face does not appear at all in the preset frames, the infant is at risk of suffocation; the infant face is then considered to be shielded, and a safety alarm is sent to the user in time to prevent the infant from being injured.
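The S231-S234 decision can be summarized in a small sketch; the 30-frame window and the 10-frame threshold echo the example above but are assumptions, not values fixed by the patent.

```python
def face_occluded(face_present_flags, window=30, frame_thr=10):
    """Decide occlusion after the tracked head first loses its face detection.

    face_present_flags: booleans, one per frame in the preset window that follows
    the current frame, True when a frontal or side face was detected in that frame.
    """
    frames_with_face = sum(face_present_flags[:window])
    return frames_with_face <= frame_thr   # True -> treat the infant face as shielded
```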
S3: when the face of the infant is not shielded, carrying out face analysis on the face of the infant, and outputting the face orientation of the infant;
Specifically, an image of an infant's face that is not occluded is input into a pre-trained face orientation detection model for orientation detection, which will output the infant's face orientation, e.g., frontal, left, or right.
In one embodiment, referring to fig. 4, the S3 includes:
s31, acquiring training image data containing infant faces, extracting features of the training image data, and outputting key feature information of face orientation;
Specifically, infant face image data sets with different orientations are collected, including face images in multiple orientations such as frontal, left and right. The data sets are preprocessed to improve model performance; image augmentation techniques such as rotation, scaling and cropping may be used to increase the diversity and robustness of the data. Key feature information of the face orientation is then captured with feature extraction methods, for example Haar features and HOG features, or facial features are learned with deep learning models such as convolutional neural networks (CNNs). Collecting data of multiple orientations helps the model better understand and learn facial features in different orientations, improving its generalization capability; data augmentation and the selection of a proper feature extraction method help improve the performance and robustness of the infant face orientation recognition model, so that the infant face orientation can be judged accurately.
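As an illustration, the HOG variant of this feature-extraction step could look like the sketch below (scikit-image); the 64x64 patch size and the HOG parameters are assumptions.

```python
from skimage.feature import hog
from skimage.transform import resize

def face_hog_features(gray_face, size=(64, 64)):
    """Return a 1-D HOG descriptor for a grayscale infant-face crop."""
    patch = resize(gray_face, size, anti_aliasing=True)   # normalise the crop size
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```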
S32: inputting the key characteristic information into a deep learning network for training, and outputting a face analysis model;
Specifically, the extracted key feature information is used to train a classifier that judges the orientation of the infant's face. The classifier may be a common algorithm such as a support vector machine (SVM) or random forest, or a deep learning model such as a convolutional neural network. Images from the infant face image data sets are used to train the model, a validation set is used to evaluate and optimize it, performance is assessed with methods such as cross-validation, parameters are tuned, and the face analysis model is output. By learning the key feature information, the classifier can better distinguish faces with different orientations, improving recognition accuracy. A fully trained and optimized classifier is more robust and can cope with factors such as varying illumination conditions and changes in face pose, so the model runs stably in various practical application scenes and improves the reliability of the system.
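A minimal sketch of the SVM option follows; X is assumed to be a matrix of feature vectors (for example the HOG descriptors above) and y the orientation labels such as "frontal", "left" and "right".

```python
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

def train_orientation_classifier(X, y):
    """Train and evaluate a face-orientation classifier on labelled feature vectors."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_tr, y_tr)
    print("cross-validation accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
    print("hold-out accuracy:", clf.score(X_val, y_val))
    return clf
```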
S33: when the fact that the infant face appears on the head of the tracked infant or the number of frames of images of the infant face appearing in the preset frame images is larger than a preset frame number threshold is recognized, obtaining an infant face image corresponding to the infant face which is not shielded;
s34, inputting the infant face image into the face analysis model, and outputting the infant face orientation.
Specifically, the infant face image is input into the face analysis model, which outputs the infant face orientation. For infant care and monitoring, knowing the face orientation helps keep the infant safe: in a crib or a car seat, for example, correctly judging the orientation of the infant's face ensures that the head position is correct and avoids possible choking or discomfort. It also facilitates sleep monitoring: by identifying the face orientation, the infant's sleeping posture (supine, lateral, prone, etc.) can be determined, which helps monitor sleep quality and posture habits.
In one embodiment, referring to fig. 5, before the step S4, the method further includes:
s401: if the infant face does not appear in the preset frame images or the image frame number of the infant face is not larger than a preset frame number threshold value, the infant face is recognized to be blocked;
Specifically, if no infant face appears in the preset frames, or the number of frames in which the infant face appears is not greater than the preset frame number threshold, the infant face may be blocked by other objects such as towels or toys, so that the face cannot be recognized, or can only be partially recognized.
S402: inputting a corresponding image of the blocked infant face into a pre-trained target detection model, and detecting preset face key points, wherein the face key points comprise mouth and nose key points;
Specifically, a data set containing images of the infant face is collected and annotated, marking the positions of the infant's nose and mouth key points; this can be done with an image annotation tool or by manual annotation. The annotated data set is preprocessed, including scaling, cropping and rotating the images to increase data diversity and robustness, and data augmentation operations such as translation, rotation, flipping and brightness adjustment are applied to the infant face images to increase the generalization capability of the model. A target detection model architecture is selected, for example a deep-learning-based model such as Faster R-CNN, YOLO or SSD; an existing pre-trained model can be used as the backbone network, with an additional key point regression branch added on top of it. The target detection model is trained with the annotated infant face image data set; during training, an optimization algorithm such as stochastic gradient descent (SGD) is used to minimize the loss function, and the coordinate information of the key points is used to compute the key point regression loss. Suitable hyperparameters such as learning rate, batch size and a proper number of iterations are set, the trained model is evaluated on a validation set, and tuning is carried out according to the evaluation results. The performance of the model can be evaluated with an index such as mean average precision (mAP), and the model's parameters and architecture are adjusted accordingly to improve the detection accuracy and the accuracy of the key point regression. The adjusted model is output as the target detection model, the image corresponding to the shielded infant face is input into this pre-trained target detection model, and the preset face key points are detected, where the face key points comprise the mouth and nose key points. By detecting the nose and mouth key points, the facial expression and the state of the infant's mouth and nose region can be known in real time, which is very helpful for monitoring the infant's health, respiration and comfort. If an abnormal condition is detected, such as a blocked nose or an abnormally open or closed mouth, corresponding action can be taken in time to ensure the infant's comfort and safety.
S403: and outputting a safety prompt when the fact that the face key point does not appear is detected.
Specifically, when the face key points are detected to be absent in the infant sleep scene, the infant's face may be blocked, facing down or facing sideways, which increases the risk of choking or discomfort. By monitoring the infant's face key points in real time, abnormalities can be found in time and an alarm provided, prompting parents or guardians to take immediate action, such as turning the infant's body or adjusting the sleeping position, to ensure that the face is properly ventilated and the infant can breathe. In addition, outputting a safety reminder can draw attention to safety issues in the sleeping environment, such as ensuring there are no loose sheets or toys on the bed and avoiding excessive covering of the infant. These reminders help parents or guardians stay alert and keep the infant safe during sleep.
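The S402-S403 check reduces to a small rule; the key point names and the 0.3 confidence threshold below are assumptions about the detector's output format, not values prescribed by the patent.

```python
def mouth_nose_reminder(keypoint_scores, conf_thr=0.3):
    """keypoint_scores: dict mapping each preset face key point to its detection
    confidence, e.g. {"nose": 0.82, "mouth": 0.05}."""
    missing = [name for name in ("nose", "mouth")
               if keypoint_scores.get(name, 0.0) < conf_thr]
    if missing:
        return ("Safety reminder: " + ", ".join(missing) +
                " not detected - please check the infant's airway and sleeping position.")
    return None
```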
S4: when the face orientation of the infant is a frontal face, the infant turning-over action is detected, and a detection result is output.
Specifically, through detecting the turning-over action when the face orientation of the infant is the front face and outputting the detection result, the safety monitoring, guidance and recording can be provided, the safety and comfort of the infant during the sleeping period can be ensured, and parents or guardians can be helped to better manage the sleeping behaviors of the infant.
In one embodiment, referring to fig. 6, the step S4 includes:
s41: according to the infant face orientation, when the infant face orientation is a front face, acquiring target area position information, wherein the target area comprises: an infant body area and a shelter area;
In one embodiment, referring to fig. 7, the step S41 includes:
S411: acquiring an infant frontal image according to the face orientation of the infant corresponding to the continuous multi-frame images;
Specifically, using the consecutive frames of the video stream as input, face detection and localization are performed on each frame; face detection algorithms such as a Haar cascade, HOG+SVM or a deep learning model may be used, and the frames are screened according to the face orientation information. Obtaining the infant frontal face image provides accurate input for subsequent analysis and processing, excludes side-face images and images from other angles, and allows the analysis to concentrate on the infant's frontal facial features.
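For the Haar-cascade option mentioned above, a minimal frame-screening sketch could look like this; the cascade file ships with OpenCV, while the scaleFactor and minNeighbors values are assumptions.

```python
import cv2

frontal_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_frontal_frame(gray_frame):
    """Return True when at least one frontal face is found in the grayscale frame."""
    faces = frontal_cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```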
S412: inputting the infant frontal face image into a pre-trained infant face key point detection model, and outputting infant face key point positions;
Specifically, a pre-trained infant face key point detection model, for example, a face key point detection model based on deep learning, is used to infer an infant frontal face image input model, and the position of the face key point is obtained. The accurate position of the key points of the infant face, such as the position information of the characteristic points of eyes, nose, mouth and the like, is obtained, and the position information of the key points can be used for subsequent facial analysis, emotion recognition, sleep quality evaluation and the like.
S413: acquiring a plurality of preset infant age intervals and target area proportions corresponding to the infant age intervals, comparing the current infant age with the infant age intervals, and outputting real-time proportions corresponding to the current infant;
Specifically, a plurality of infant age intervals are predefined, for example 0-3 months, 3-6 months and 6-9 months, and a target area proportion is defined for each age interval, representing the relative extent of the target area with respect to the infant's face in the whole image for that age group. The current infant age is compared with each age interval, and the real-time proportion corresponding to the current infant is determined. In this way the system automatically adapts the target area proportion to infants in different age groups, improving its universality and adaptability; the output real-time proportion helps further locate the infant's body area, so that the features of infants in a specific age group can be analyzed accurately.
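A sketch of the age-interval lookup follows; the interval boundaries and the proportion values are purely illustrative, since the patent only requires that each preset age interval map to a target area proportion.

```python
# (age interval in months, target-area proportion) - all values are assumed examples
AGE_RATIO_TABLE = [
    ((0, 3), 2.8),
    ((3, 6), 3.2),
    ((6, 9), 3.6),
    ((9, 12), 4.0),
]

def realtime_ratio(age_months: float) -> float:
    """Return the target-area proportion for the interval containing the given age."""
    for (lo, hi), ratio in AGE_RATIO_TABLE:
        if lo <= age_months < hi:
            return ratio
    return AGE_RATIO_TABLE[-1][1]   # fall back to the oldest interval
```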
S414: and outputting the position information of the target area according to the position of the infant face key point and the real-time proportion.
Specifically, the position of the target area is calculated from the position information of the infant face key points combined with the real-time proportion. The target area can be represented by a rectangular frame surrounding the infant's body area and the area where any covering object on the body is located.
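One possible way to derive such a rectangle from the face key points and the real-time proportion is sketched below; the exact anchoring of the box below the face is an assumption for illustration.

```python
def target_area_box(keypoints_xy, ratio, image_shape):
    """keypoints_xy: dict of (x, y) facial key points (eyes, nose, mouth, ...).

    Returns (x1, y1, x2, y2) of a rectangle covering the body / covering-object
    region below the face, scaled by the age-dependent real-time ratio.
    """
    xs = [p[0] for p in keypoints_xy.values()]
    ys = [p[1] for p in keypoints_xy.values()]
    face_w, face_h = max(xs) - min(xs), max(ys) - min(ys)
    img_h, img_w = image_shape[:2]
    x1 = max(0, int(min(xs) - face_w))              # widen beyond the face on both sides
    x2 = min(img_w, int(max(xs) + face_w))
    y1 = int(max(ys))                               # start just below the lowest key point
    y2 = min(img_h, int(max(ys) + ratio * face_h))  # extend downward by the real-time ratio
    return x1, y1, x2, y2
```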
S42: according to the target area position information, performing motion detection on the target area, and outputting target area motion information;
Specifically, using the target area position information (usually the infant's upper body area or the area where a covering object on the upper body is located), motion detection is performed on the target area with a motion detection algorithm such as optical flow, background subtraction or inter-frame difference. The motion detection results are analyzed and the target area motion information is output, such as motion direction, speed and amplitude. From this motion information, the infant's degree of activity and posture changes can be further analyzed and judged, and the information can be used in subsequent applications such as behavior monitoring and sleep posture analysis.
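A minimal sketch of the inter-frame-difference option follows; the pixel threshold of 25 and the 2% changed-pixel ratio are assumptions chosen for illustration.

```python
import cv2

def target_area_motion(prev_gray, curr_gray, box, pixel_thr=25, motion_ratio=0.02):
    """Measure motion inside the target box between two consecutive grayscale frames."""
    x1, y1, x2, y2 = box
    diff = cv2.absdiff(prev_gray[y1:y2, x1:x2], curr_gray[y1:y2, x1:x2])
    _, mask = cv2.threshold(diff, pixel_thr, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask) / max(1, mask.size)
    return {"moving": changed > motion_ratio, "changed_ratio": changed}
```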
S43: and judging the face directions of the infants and the movement information of the target area, and outputting the detection result as the infant turning-over action when the face directions of the infants are changed and the target area moves.
Specifically, the infant face orientation information and the target area motion information obtained in the previous steps are combined: whether the infant face orientation has changed is judged by comparing the face orientation of the current frame with that of the previous frame, and whether the target area is moving is judged from the target area motion information. If the face orientation has changed and the target area is moving, the detection result is output as an infant turning-over action. By detecting and identifying the turning-over action in time, monitoring and warnings are provided promptly and the safety of the infant is ensured; outputting the turning-over detection result helps guardians or nursing staff take necessary measures in time, such as adjusting the infant's sleeping posture or providing support.
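The S43 decision rule itself is simple; the sketch below assumes the orientation labels and the motion dictionary used in the earlier sketches.

```python
def detect_turn_over(prev_orientation, curr_orientation, motion_info):
    """Report a turn-over only when the face orientation changes AND the target area moves."""
    orientation_changed = (prev_orientation is not None
                           and curr_orientation != prev_orientation)
    if orientation_changed and motion_info["moving"]:
        return "turn-over detected"
    return "no turn-over"
```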
Example 2
Referring to fig. 8, embodiment 2 of the present invention further provides a device for detecting a turning-over action of an infant, which is characterized in that the device includes:
The image acquisition module is used for acquiring a real-time video stream in an infant care scene and decomposing the real-time video stream into multi-frame images;
the face shielding judging module is used for carrying out infant face shielding judgment on each image and identifying infant face shielding conditions;
The face orientation acquisition module is used for carrying out face analysis on the infant face and outputting the infant face orientation when the infant face is not shielded;
The turn-over detection module is used for detecting the turn-over action of the infant when the face orientation of the infant is a frontal face and outputting a detection result.
Specifically, the device for detecting the turning-over action of an infant provided by the embodiment of the invention comprises: an image acquisition module for acquiring a real-time video stream in an infant care scene and decomposing it into multi-frame images; a face shielding judging module for performing infant face shielding judgment on each image and identifying the infant face shielding condition; a face orientation acquisition module for performing facial analysis on the infant face and outputting the infant face orientation when the face is not shielded; and a turn-over detection module for detecting the turning-over action of the infant when the face orientation is a frontal face and outputting a detection result. By acquiring and processing the real-time video stream, the device monitors the infant's sleeping condition and actions in real time, helping parents or guardians learn the infant's state promptly. By judging face shielding in each image, it can be determined whether an object is covering the infant's face, which is very important for monitoring respiration and changes in facial expression. If the infant's face is not shielded, further facial analysis is performed to determine the face orientation; when the orientation is frontal, the device detects the infant's turn-over motion, providing parents or guardians with information about the infant's sleep quality and safety. If the infant stays in the same posture or position for a long period, appropriate measures may be needed, such as changing the infant's sleeping position or posture, to reduce the potential risk of choking or discomfort.
Example 3
In addition, the infant turning-over action detection method of embodiment 1 of the present invention described in connection with fig. 1 may be implemented by an electronic device. Fig. 9 shows a schematic hardware structure of an electronic device according to embodiment 3 of the present invention.
The electronic device may include a processor and memory storing computer program instructions.
In particular, the processor may comprise a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present invention.
The memory may include mass storage for data or instructions. By way of example, and not limitation, the memory may comprise a hard disk drive (HDD), floppy disk drive, flash memory, optical disk, magneto-optical disk, magnetic tape, or Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is a non-volatile solid state memory. In a particular embodiment, the memory includes read-only memory (ROM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
The processor reads and executes the computer program instructions stored in the memory to realize any one of the infant turning-over action detection methods in the above embodiments.
In one example, the electronic device may also include a communication interface and a bus. The processor, the memory, and the communication interface are connected by a bus and complete communication with each other, as shown in fig. 9.
The communication interface is mainly used for realizing communication among the modules, the devices, the units and/or the equipment in the embodiment of the invention.
The bus includes hardware, software, or both that couple the components of the device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of the above. The bus may include one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
Example 4
In addition, in combination with the infant turning-over action detection method in the above embodiment 1, embodiment 4 of the present invention may also provide a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by the processor, implement any of the infant turn-over motion detection methods of the above embodiments.
In summary, the embodiment of the invention provides a method, a device, equipment and a storage medium for detecting the turning-over action of an infant.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present invention are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. The present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.

Claims (8)

1. A method for detecting the turning-over action of an infant, which is characterized by comprising the following steps:
s1: acquiring a real-time video stream in an infant care scene, and decomposing the real-time video stream into multi-frame images;
s2: carrying out infant face shielding judgment on each image, and identifying infant face shielding conditions;
s3: when the face of the infant is not shielded, carrying out face analysis on the face of the infant, and outputting the face orientation of the infant;
S4: when the face orientation of the infant is a front face, detecting the turning-over action of the infant, and outputting a detection result;
The step S4 comprises the following steps:
s41: according to the infant face orientation, when the infant face orientation is a front face, acquiring target area position information, wherein the target area comprises: an infant body area and a shelter area;
S42: according to the target area position information, performing motion detection on the target area, and outputting target area motion information;
S43: judging the face orientation of each infant and the movement information of the target area, and outputting the detection result as the infant turning action when the face orientation of the infant changes and the target area moves;
the S41 includes:
S411: acquiring an infant frontal image according to the face orientation of the infant corresponding to the continuous multi-frame images;
S412: inputting the infant frontal face image into a pre-trained infant face key point detection model, and outputting infant face key point positions;
S413: acquiring a plurality of preset infant age intervals and target area proportions corresponding to the infant age intervals, comparing the current infant age with the infant age intervals, and outputting real-time proportions corresponding to the current infant;
S414: and outputting the position information of the target area according to the position of the infant face key point and the real-time proportion.
2. The method for detecting a turning-over motion of an infant according to claim 1, wherein S2 comprises:
s21: detecting the head of the infant on each image to obtain an image of the head of the infant;
S22: detecting the infant head image, and when a target image of the infant face is detected, tracking the head of the infant in the target image, wherein the infant face comprises a front face and a side face;
s23: and detecting the infant face of the tracked infant head, and outputting the shielding condition of the infant face.
3. The method for detecting a turning-over motion of an infant according to claim 2, wherein S23 comprises:
s231: if the tracked infant heads all have infant faces, the infant faces are not shielded;
S232, if the tracked infant head does not have an infant face, acquiring a current frame image of the infant face, and detecting the infant face from a preset frame image after the current frame image;
S233: if the number of frames of the images of the infant face in the preset frame images is larger than a preset frame number threshold, the infant face is recognized as not being blocked;
S234: if the infant face does not appear in the preset frame images or the image frame number of the infant face is not larger than the preset frame number threshold value, the infant face is recognized to be blocked.
4. The method for detecting a turning-over motion of an infant according to claim 3, wherein S3 comprises:
s31, acquiring training image data containing infant faces, extracting features of the training image data, and outputting key feature information of face orientation;
s32: inputting the key characteristic information into a deep learning network for training, and outputting a face analysis model;
s33: when the fact that the infant face appears on the head of the tracked infant or the number of frames of images of the infant face appearing in the preset frame images is larger than a preset frame number threshold is recognized, obtaining an infant face image corresponding to the infant face which is not shielded;
s34, inputting the infant face image into the face analysis model, and outputting the infant face orientation.
5. The method for detecting a turning-over motion of an infant according to claim 1, wherein the step S4 is preceded by:
s401: if the infant face does not appear in the preset frame images or the image frame number of the infant face is not larger than a preset frame number threshold value, the infant face is recognized to be blocked;
S402: inputting a corresponding image of the blocked infant face into a pre-trained target detection model, and detecting preset face key points, wherein the face key points comprise mouth and nose key points;
s403: and outputting a safety prompt when the fact that the face key point does not appear is detected.
6. An infant turn-over motion detection device, the device comprising:
The image acquisition module is used for acquiring a real-time video stream in an infant care scene and decomposing the real-time video stream into multi-frame images;
the face shielding judging module is used for carrying out infant face shielding judgment on each image and identifying infant face shielding conditions;
The face orientation acquisition module is used for carrying out face analysis on the infant face and outputting the infant face orientation when the infant face is not shielded;
the turn-over detection module is used for detecting the turn-over action of the infant when the face orientation of the infant is a frontal face and outputting a detection result;
When the infant face orientation is a front face, the infant turning-over action is detected, and the output detection result comprises:
According to the infant face orientation, when the infant face orientation is a front face, acquiring target area position information, wherein the target area comprises: an infant body area and a shelter area;
According to the target area position information, performing motion detection on the target area, and outputting target area motion information;
judging the face orientation of each infant and the movement information of the target area, and outputting the detection result as the infant turning action when the face orientation of the infant changes and the target area moves;
According to the infant face orientation, when the infant face orientation is a front face, acquiring target area position information, wherein the target area comprises: the infant body area and the shelter area comprise:
Acquiring an infant frontal image according to the face orientation of the infant corresponding to the continuous multi-frame images;
Inputting the infant frontal face image into a pre-trained infant face key point detection model, and outputting infant face key point positions;
Acquiring a plurality of preset infant age intervals and target area proportions corresponding to the infant age intervals, comparing the current infant age with the infant age intervals, and outputting real-time proportions corresponding to the current infant;
and outputting the position information of the target area according to the position of the infant face key point and the real-time proportion.
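The sketch below ties the turn-over logic of claim 6 together: the target area is an age-scaled expansion of the face keypoint bounding box, motion inside it is estimated by simple frame differencing (OpenCV is used here only as one convenient realisation), and a turn-over is reported when the face orientation changes while the area moves. The age intervals, proportions, and motion threshold are illustrative assumptions, not values disclosed in the patent.

```python
# Hedged sketch of claim 6: age-scaled target area from face keypoints,
# frame-difference motion inside that area, and the turn-over decision.
# AGE_PROPORTIONS and the motion threshold are illustrative assumptions.

import cv2
import numpy as np

# Assumed mapping: age interval in months -> scale of the target area
# relative to the face keypoint bounding box.
AGE_PROPORTIONS = [((0, 6), 4.0), ((6, 12), 5.0), ((12, 36), 6.0)]

def real_time_proportion(age_months):
    """Pick the target-area proportion for the current infant age."""
    for (lo, hi), scale in AGE_PROPORTIONS:
        if lo <= age_months < hi:
            return scale
    return AGE_PROPORTIONS[-1][1]

def target_area(keypoints_xy, age_months, frame_shape):
    """Expand the face keypoint box by the age-dependent proportion."""
    pts = np.asarray(keypoints_xy, dtype=np.float32)
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    scale = real_time_proportion(age_months)
    half_w, half_h = (x1 - x0) / 2.0 * scale, (y1 - y0) / 2.0 * scale
    h, w = frame_shape[:2]
    return (int(max(0, cx - half_w)), int(max(0, cy - half_h)),
            int(min(w, cx + half_w)), int(min(h, cy + half_h)))

def area_moved(prev_gray, cur_gray, box, threshold=8.0):
    """Mean absolute frame difference inside the target area."""
    x0, y0, x1, y1 = box
    diff = cv2.absdiff(prev_gray[y0:y1, x0:x1], cur_gray[y0:y1, x0:x1])
    return float(diff.mean()) > threshold

def turn_over_detected(prev_orientation, cur_orientation, moved):
    """Claim-6 decision: face orientation changed AND the target area moved."""
    return prev_orientation != cur_orientation and moved
```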
7. An electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, which when executed by the processor, implement the method of any one of claims 1-5.
8. A storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1-5.
CN202311108531.8A 2023-08-30 2023-08-30 Infant turning-over action detection method, device, equipment and storage medium Active CN117173784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311108531.8A CN117173784B (en) 2023-08-30 2023-08-30 Infant turning-over action detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311108531.8A CN117173784B (en) 2023-08-30 2023-08-30 Infant turning-over action detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117173784A (en) 2023-12-05
CN117173784B (en) 2024-05-07

Family

ID=88944176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311108531.8A Active CN117173784B (en) 2023-08-30 2023-08-30 Infant turning-over action detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117173784B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117690159A (en) * 2023-12-07 2024-03-12 武汉星巡智能科技有限公司 Infant groveling and sleeping monitoring method, device and equipment based on multi-mode data fusion

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091408A (en) * 2014-05-09 2014-10-08 郑州轻工业学院 Infant sleeping posture intelligent identification method and device based on thermal infrared imaging
KR20160055576A (en) * 2014-11-10 2016-05-18 양성화 Infants monitering apparatus, control method thereof
CN107832744A (en) * 2017-11-30 2018-03-23 西安科锐盛创新科技有限公司 A kind of baby sleep monitoring system and its method
CN113034849A (en) * 2019-12-25 2021-06-25 海信集团有限公司 Infant nursing apparatus, nursing method and storage medium
CN111407563A (en) * 2020-04-13 2020-07-14 中国人民解放军陆军特色医学中心 Turning device in incubator for neonates
CN113158937A (en) * 2021-04-28 2021-07-23 合肥移瑞通信技术有限公司 Sleep monitoring method, device, equipment and readable storage medium
CN113516095A (en) * 2021-07-28 2021-10-19 宁波星巡智能科技有限公司 Infant sleep monitoring method, device, equipment and medium
CN115862115A (en) * 2022-12-23 2023-03-28 宁波星巡智能科技有限公司 Infant respiration detection area positioning method, device and equipment based on vision
CN116612532A (en) * 2023-05-25 2023-08-18 武汉星巡智能科技有限公司 Infant target nursing behavior recognition method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jin Hailan; Chen Feng; Liu Jing. Mobile-phone sleep monitoring system based on an IP camera. Beijing Biomedical Engineering, (02), full text. *

Also Published As

Publication number Publication date
CN117173784A (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN107103733B (en) One kind falling down alarm method, device and equipment
Zhao et al. Real-time detection of fall from bed using a single depth camera
US20160310067A1 (en) A baby monitoring device
CN117173784B (en) Infant turning-over action detection method, device, equipment and storage medium
CN110477925A (en) A kind of fall detection for home for the aged old man and method for early warning and system
CN107767874B (en) Infant crying recognition prompting method and system
US9408562B2 (en) Pet medical checkup device, pet medical checkup method, and non-transitory computer readable recording medium storing program
CN114926957B (en) Infant monitoring system and method based on intelligent home
WO2019003859A1 (en) Monitoring system, control method therefor, and program
WO2017098265A1 (en) Method and apparatus for monitoring
CN107958572A (en) A kind of baby monitoring systems
CN117690159A (en) Infant groveling and sleeping monitoring method, device and equipment based on multi-mode data fusion
CN113706824A (en) Old man nurses system at home based on thing networking control
CN115862115B (en) Infant respiration detection area positioning method, device and equipment based on vision
CN111626273A (en) Fall behavior recognition system and method based on atomic action time sequence characteristics
CN116110129A (en) Intelligent evaluation method, device, equipment and storage medium for dining quality of infants
US20220254241A1 (en) Ai-based video tagging for alarm management
Weber et al. Deep transfer learning for video-based detection of newborn presence in incubator
CN113408477A (en) Infant sleep monitoring system, method and equipment
CN116386671B (en) Infant crying type identification method, device, equipment and storage medium
CN116800976B (en) Audio and video compression and restoration method, device and equipment for infant with sleep
GB2581767A (en) Patient fall prevention
CN110874561A (en) Image detection method and image detection device using double analysis
KR20200109738A (en) Method and apparatus for providing care service of infant
CN113256648B (en) Self-adaptive multi-scale respiration monitoring method based on camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant