CN113257440A - ICU intelligent nursing system based on patient video identification - Google Patents
ICU intelligent nursing system based on patient video identification
- Publication number
- CN113257440A (application number CN202110682391.XA)
- Authority
- CN
- China
- Prior art keywords
- nursing
- patient
- human
- posture
- monitoring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G16H80/00 — ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
- G06V40/166 — Human faces: detection; localisation; normalisation using acquisition arrangements
- G06V40/168 — Human faces: feature extraction; face representation
- G06V40/174 — Facial expression recognition
- G06V40/20 — Movements or behaviour, e.g. gesture recognition
Abstract
An ICU intelligent nursing system based on patient video identification belongs to the technical field of intelligent nursing. The system comprises an intelligent monitoring device together with an image collector, a body index monitoring device and a nursing end, each connected to the intelligent monitoring device. The intelligent monitoring device comprises a face detection module, a facial expression recognition module, a head posture recognition module, a human body posture recognition module, an activity evaluation module and a monitoring decision module. The monitoring decision module decides whether to send a nursing instruction to the nursing end according to the facial expression information recognized by the facial expression recognition module or the head posture information recognized by the head posture recognition module; it also comprehensively evaluates the patient's state from the facial expression information, head posture information, activity data, physiological data and vital sign data, and sends a comprehensive nursing instruction to the nursing end accordingly. The invention monitors, analyzes and evaluates the patient in real time, improves the ability of medical personnel to intervene promptly, and reduces the nursing workload.
Description
Technical Field
The invention relates to the technical field of intelligent nursing, in particular to an ICU intelligent nursing system based on patient video identification.
Background
Currently, many intensive care indicators are not captured automatically but are evaluated repeatedly by nurses, which not only imposes a heavy, high-intensity workload on medical personnel but also makes it hard to guarantee the quality of care for severe patients, especially unconscious ones. The intensive care unit (ICU) is the largest component of the medical system in the United States. Approximately 55,000 patients are cared for in ICUs each day, at a daily cost of $3,000 to $10,000; the cumulative cost exceeds $80 billion per year. The ICU becomes ever more important as the baby-boom generation enters old age: today, over half of all patients in U.S. ICUs are over 65 years old. China shows the same trend, making this a worldwide problem. Training more intensive care professionals is one way to meet the growing demand for acute care; artificial intelligence is another.
Patients with severe respiratory disease may be placed on mechanical ventilators to assist breathing. When such a machine pushes air into a patient's lungs, its breathing rhythm may not match the patient's natural breathing rate, which itself varies with physiological condition; the patient can then end up struggling against the ventilator. An experienced caregiver usually adjusts the machine, but if the adjustment is not timely, or an emergency occurs, the consequences can be serious. An intelligent monitoring system can detect the patient's expression and recognize dyspnea: as soon as pain shows on the patient's face, the medical staff can be warned in time.
The invention patent application CN108234956A discloses a medical care monitoring method comprising: receiving a current video frame sent by a monitoring-video acquisition end; analyzing the frame to obtain the identity information of the target object; analyzing the frame to obtain the state information of the target object; when the state is judged abnormal from the state information, generating a state-abnormality notification from the identity and state information; and sending the notification to a processing end so that the processing end issues a corresponding early warning. Analyzing the current video frame for state information specifically includes: performing action recognition on the frame with a preset action analysis model to obtain the target object's action information; and/or performing expression recognition with a preset expression analysis model to obtain facial expression information; and/or performing semantic recognition on the audio with a preset semantic analysis model to obtain semantic information; and/or performing emotion analysis on the audio with a preset emotion analysis model to obtain emotion information. That invention uses the preset action, expression, semantic and emotion analysis models to monitor the target object from multiple angles, watching for falls, painful expressions, complaints of discomfort and painful emotion, and then notifies medical personnel to provide timely care. The scheme alleviates, to some extent, the problem of delayed nursing and can monitor the patient's state in real time. However, it monitors the patient's state only through video recognition, without combining monitoring data from medical devices for comprehensive evaluation; it provides only emergency care for abnormalities and does not form a long-term, specific nursing plan. Its action monitoring is limited to detecting large abnormal movements, so the patient's recovering mobility cannot be assessed. Expression and emotion analysis overlap, both being derived from the video's image information; emotion analysis involves a degree of subjective judgment, its monitoring reliability is not high, and medical staff may be falsely alerted repeatedly. Semantic analysis and expression analysis also overlap: although the former works on audio and the latter on images, both monitor pain and symptoms, and discrepancies between their results can cause repeated alerts, one of the two necessarily being wrong.
The invention patent application CN104506809B discloses a severe-patient monitoring system based on intelligent video, comprising two image acquisition modules, a medical staff identification module, a patient monitoring module, a human-machine interaction module, an alarm module and a data storage module. The patient monitoring module computes the patient's current state as follows: (1) it processes the acquired image once every interval T; (2) it performs gray-level equalization on the pixel matrix, compensating for the influence of light intensity on the pixel values, to obtain a gray-equalized image pixel-value matrix; (3) it computes the difference between the pixel-value matrices of consecutive frames; (4) it sums the squares of the pixel values in the differential gray-equalized image matrix to obtain a scalar function f(k); (5) it applies Gaussian low-pass filtering to f(k) to obtain the low-pass-filtered scalar function f_LP(k); (6) it compares f_LP(k) with preset empirical thresholds a and b, where a < b. If f_LP(k) < a, the patient shows only slight motion or a normal state. If a < f_LP(k) < b, a pattern-recognition method judges whether the patient is making dangerous movements; if not, detection continues; if so, the patient monitoring module starts timing, and if a < f_LP(k) < b persists for a time T1, alarm information is generated and sent to the alarm module. If f_LP(k) > b, the patient is moving violently, and alarm information is generated and sent to the alarm module immediately. Although this invention also discloses a system for intelligent video monitoring, it only detects dangerous limb movements; it considers neither face monitoring, patient activity monitoring nor body index detection, let alone a comprehensive evaluation of combined monitoring data to derive a suitable nursing plan.
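To make that prior-art pipeline concrete, the following is a minimal sketch (not code from CN104506809B itself) of the frame-difference monitor it describes; the threshold values, filter width and helper names are assumptions.

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d

A, B = 1.0e6, 8.0e6   # empirical thresholds a < b (assumed values)

def motion_energy(prev_gray: np.ndarray, cur_gray: np.ndarray) -> float:
    """f(k): sum of squared differences between gray-equalized frames
    (both inputs are 8-bit grayscale images)."""
    d = cv2.equalizeHist(cur_gray).astype(np.float32) \
        - cv2.equalizeHist(prev_gray).astype(np.float32)
    return float(np.sum(d * d))

def classify(f_series: list) -> str:
    """Gaussian low-pass filter the f(k) series, then apply the
    two-threshold rule to the latest value f_LP(k)."""
    f_lp = gaussian_filter1d(np.asarray(f_series, dtype=np.float64),
                             sigma=2.0)[-1]
    if f_lp < A:
        return "slight-or-normal"
    if f_lp > B:
        return "violent-action-alarm"
    # a < f_LP(k) < b: run pattern recognition for dangerous movement,
    # and alarm if the condition persists for the timing window T1
    return "check-dangerous-movement"
```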
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an ICU intelligent nursing system based on patient video identification. Using computer vision, it recognizes the patient's facial expression and detects limb movement from video to improve the autonomous detection rate of patient activity, and it fuses these with real-time vital sign data and physiological data for comprehensive automatic monitoring, improving the ability of medical personnel to intervene promptly and reducing the nursing workload.
The invention is realized by the following technical scheme:
an ICU intelligent nursing system based on patient video identification comprises an image collector, an intelligent monitoring device, a body index monitoring device and a nursing end; the image collector is used for collecting video data in the ICU; the body index monitoring device is used for monitoring physiological data and vital sign data of a patient; the intelligent monitoring device is connected to the image collector, the body index monitoring device and the nursing end respectively; the system is characterized in that the intelligent monitoring device comprises:
the face detection module is used for identifying a face position area from video data and obtaining a plurality of facial feature point position vectors in the face position area;
the facial expression recognition module is used for recognizing facial expression information based on the position vectors of the facial feature points and the facial expression comparison vectors;
the head posture identification module is used for converting the facial feature point position vectors into three-dimensional coordinates and identifying head posture information using a random sample consensus (RANSAC) algorithm;
the human body posture identification module is used for identifying a human body position area from the video data, obtaining a plurality of joint vectors in the human body position area, and identifying human body posture information based on the plurality of joint vectors and a human body posture comparison condition;
the activity evaluation module is used for evaluating the patient's activity from the human posture information obtained during the monitoring period, yielding activity data;
the monitoring decision module is used for judging whether to send a nursing instruction to a nursing end according to the facial expression information or the head posture information; and comprehensively evaluating the state of the patient according to the facial expression information, the head posture information, the activity data, the physiological data and the vital sign data, and sending a comprehensive nursing instruction to the nursing end according to the state of the patient.
The invention uses an image collector (such as a camera) to collect video data and monitor conditions in the intensive care unit in real time. From the video data it recognizes facial feature points to identify facial expression information and head posture information, monitoring the patient's facial condition; it likewise identifies the human posture and its changes from the video data, monitoring the patient's limbs. Facial monitoring and limb monitoring together give whole-body monitoring of the patient; combined with the physiological data and vital sign data, the patient's condition can be evaluated comprehensively and a comprehensive nursing plan given, covering everything from emergency rescue to daily monitoring as the patient gradually recovers from a severe condition to a mild or rehabilitative state.
This intelligent monitoring system can prompt medical personnel to intervene and treat promptly in emergencies, and it can also arrange medical personnel's daily nursing sensibly, improving care-management efficiency, reducing unnecessary nursing workload and providing each patient with appropriately matched care.
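Read as software, the disclosure amounts to the following monitoring loop. This is a minimal orchestration sketch under assumed module interfaces (the names `modules`, `body_monitor` and `nursing_end` are illustrative), not an implementation prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class PatientState:
    expressions: dict   # facial expression information per facial part
    head_pose: tuple    # e.g. (raise/lower, shake, horizontal deflection)
    posture: str        # e.g. "lying-on-bed", "sitting-on-bed", "standing"
    activity: dict      # statistical hold times and movement amount
    physiology: dict    # e.g. heart rate, blood pressure
    vitals: dict        # e.g. ECG-derived vital sign data

def monitoring_step(frame, modules, body_monitor, nursing_end):
    """One pass of the loop: video analysis, fusion with device data,
    then single-signal and comprehensive decisions."""
    landmarks = modules.face_detection(frame)   # facial feature points
    posture = modules.body_pose(frame)          # joint-vector posture
    state = PatientState(
        expressions=modules.expression(landmarks),
        head_pose=modules.head_pose(landmarks),
        posture=posture,
        activity=modules.activity.update(posture),
        physiology=body_monitor.physiological_data(),
        vitals=body_monitor.vital_signs(),
    )
    if modules.decision.urgent(state):          # pain / head-shake rules
        nursing_end.send("nursing instruction")
    nursing_end.send(modules.decision.comprehensive(state))
```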
Preferably, the face detection module includes:
the face recognition unit is used for recognizing a face position area from the video data by utilizing computer vision and deep learning technology;
the characteristic point identification unit is used for identifying a plurality of facial characteristic points of the face position area according to the standard face model and determining a plurality of facial characteristic point position vectors;
the standard face model comprises a plurality of standard face feature points set based on standard face contours.
Preferably, the facial expression recognition module includes:
the facial motion extraction unit is used for extracting facial motion parameter vectors of all parts of the face from the position vectors of the plurality of facial feature points; the facial motion parameter vector comprises an eyebrow motion parameter vector, an eye motion parameter vector, a cheek motion parameter vector, a mouth motion parameter vector, a nose motion parameter vector and a palate motion parameter vector;
and the facial expression recognition unit is used for comparing the facial action parameter vectors with facial expression judgment conditions to recognize the facial expression information of each part of the face.
Preferably, the head pose recognition module includes:
the three-dimensional modeling unit is used for converting the facial feature point position vectors into three-dimensional coordinates in a three-dimensional world coordinate system and three-dimensional coordinates in the image-collector coordinate system, and solving the relation between image coordinates and world coordinates,

$$ Z_c^{i}\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K\,[\,R \mid T\,]\begin{bmatrix} X_w^{i} \\ Y_w^{i} \\ Z_w^{i} \\ 1 \end{bmatrix}, $$

to obtain a rotation matrix R and a translation matrix T;

where $(X_w^{i}, Y_w^{i}, Z_w^{i})$ are the three-dimensional coordinates of facial feature point $P_i$ in the world coordinate system, $(X_c^{i}, Y_c^{i}, Z_c^{i})$ its three-dimensional coordinates in the image-collector coordinate system, $(x_i, y_i)$ its physical coordinates on the imaging plane, $(u_i, v_i)$ its pixel coordinates on the imaging plane, $i$ a natural number, and $K$ the camera intrinsic matrix; the rotation matrix R represents the face's angles in three directions, comprising the head raising/lowering angle, the head shaking angle and the horizontal deflection angle;
and the head posture identification unit is used for obtaining an optimal angle estimate by fitting with a random sample consensus (RANSAC) algorithm, thereby identifying the head posture information.
Preferably, the human body posture recognition module includes:
the human body joint identification unit is used for identifying a human body position area from the video data and identifying key feature points of joints and limbs in the human body position area by utilizing a deep convolutional neural network;
the human body posture identification unit is used for acquiring a plurality of joint vectors of a human body by utilizing the deep convolutional neural network, calculating the bending angle of the joint vectors, comparing the bending angle with the human body posture comparison condition and identifying human body posture information; the human body posture comparison conditions comprise a human body posture comparison condition lying on a bed, a human body posture comparison condition sitting on the bed, a human body posture comparison condition standing, and a human body posture comparison condition sitting on a chair.
Preferably, the activity assessment module comprises:
an activity evaluation unit for monitoring the statistical time for which the patient maintains a recognized body posture, and for monitoring the amount of movement when the patient changes from one body posture to another within a given period;
and the activity ability evaluation unit is used for judging the activity ability of the patient based on the statistical time, the movement amount and the heart rate data in the physiological data.
Preferably, the monitoring decision module includes:
the pain nursing unit is used for comparing the facial expression information of each facial part with a pain-expression judgment condition to identify whether the patient shows a pain expression; if not, no nursing instruction is sent to the nursing end; if so, it further checks whether the duration of the pain expression exceeds the pain threshold time, or whether the frequency of pain expressions within a given period exceeds the pain threshold frequency; if either condition is met, a nursing instruction is sent to the nursing end, otherwise no instruction is sent;
the head-shaking nursing unit is used for identifying whether the frequency of the head-shaking posture in the head posture information within a given period exceeds the head-shaking threshold frequency; if so, a nursing instruction is sent to the nursing end, otherwise no instruction is sent;
and the comprehensive decision unit is used for comprehensively evaluating the patient condition and sending a comprehensive nursing instruction to the nursing end according to the patient condition according to the facial expression information, the head posture information, the activity data, the physiological data and the vital sign data.
Preferably, the system further comprises a light source emitter disposed in the ICU for providing a light source to assist the image collector in operating in a dark environment.
Preferably, the system further comprises a database for storing facial expression information, head pose information, activity data, physiological data and vital sign data for the corresponding patient.
Preferably, the system further comprises a plurality of accelerometers worn at the patient's joints for detecting the motion of the patient's limbs. The monitoring decision module further comprises a dangerous nursing unit for identifying the joint movement angle, judging whether the patient falls down or moves violently when the joint movement angle is larger than the safe movement angle threshold value, and sending a nursing instruction to a nursing end.
The invention has the following beneficial effects:
the invention discloses an Artificial Intelligence system (AI) for automatically identifying the expression and the state of an illness of a patient by identifying the facial expression and the body posture of the patient through videos. The technique aims to provide some precautionary and meticulous care to provide more comprehensive data about the condition of a patient by accurately and precisely quantifying the patient's actions, as an expert is accompanied by the patient at all times, carefully calibrating the treatment. The invention can carry out comprehensive automatic monitoring on the patient by combining with the real-time vital sign data and the physiological data, the system can relieve the pressure of the overload of the personnel in the intensive care unit, help to manage the repeated patient evaluation in real time, and can realize more accurate prediction and detection of the emergency. More importantly, if the technology is applied to an Intensive Care Unit (ICU), the technology can provide better service for critically ill patients, reduce the pain of the patients and improve the life safety. The invention does not replace manpower, but can be a part of a medical system, so that doctors and nurses can exert their skills when they are needed most, and the utilization rate of medical resources is improved.
Drawings
Fig. 1 is a block diagram of an ICU intelligent nursing system based on patient video identification according to the present invention;
FIG. 2 is an example of a convolutional neural network employed by the face detection module to identify face location regions and facial feature points;
FIG. 3 is a schematic diagram of a standard face model with 68 standard facial feature points identified;
FIG. 4 is a schematic representation of facial expressions, illustrating a painful expression;
FIG. 5 is a diagram of a deep convolutional network structure for human pose recognition;
FIG. 6 is a schematic diagram of whole-body feature point identification with the human body standing and the hands raised beside the two ears; the points in the figure are the key points of the joints and limbs, and the connecting lines link them into the body skeleton in the current state.
Detailed Description
The following are specific embodiments of the present invention and are further described with reference to the drawings, but the present invention is not limited to these embodiments.
Referring to fig. 1, an ICU intelligent nursing system based on patient video identification includes an image collector, an intelligent monitoring device, a body index monitoring device and a nursing end. The image collector collects video data in the ICU; it is installed in the ICU facing the sickbed and may be, for example, a high-resolution, wide-field camera. The body index monitoring device monitors the patient's physiological data and vital sign data: the physiological data include blood pressure, red blood cell count, blood lipids, euglobulin and the like, and the vital sign data include electrocardiogram and arterial blood pressure data. The body index monitoring device is the medical instrument that performs such monitoring, such as a heart rate monitor. The intelligent monitoring device is connected to the image collector, the body index monitoring device and the nursing end respectively. The image collector feeds video data to the intelligent monitoring device, and the body index monitoring device feeds it the monitored physiological and vital sign data. The intelligent monitoring device monitors, analyzes and judges; for conditions that require care it sends a nursing instruction to the nursing end, and medical personnel perform nursing actions according to the instruction received.
The intelligent monitoring device comprises a face detection module, a facial expression recognition module, a head gesture recognition module, a human body gesture recognition module, an activity evaluation module and a monitoring decision module. The face detection module is used for identifying a face position area from video data and obtaining a plurality of facial feature point position vectors in the face position area. The facial expression recognition module is used for recognizing facial expression information based on the facial feature point position vectors and the facial expression contrast vectors. The head posture identification module is used for converting the position vectors of the facial feature points into three-dimensional coordinates and identifying head posture information by using a random sampling consistency algorithm. The human body posture identification module is used for identifying a human body position area from the video data, obtaining a plurality of joint vectors in the human body position area, and identifying human body posture information based on the plurality of joint vectors and a human body posture comparison condition. And the activity evaluation module is used for evaluating the activity condition of the patient on the human posture information obtained in the monitoring period to obtain activity data. The monitoring decision module is used for judging whether to send a nursing instruction to a nursing end according to the facial expression information or the head posture information; and comprehensively evaluating the state of the patient according to the facial expression information, the head posture information, the activity data, the physiological data and the vital sign data, and sending a comprehensive nursing instruction to the nursing end according to the state of the patient.
Specifically, the face detection module includes a face recognition unit and a feature point recognition unit. The face recognition unit identifies a face position area from the video data using computer vision and deep learning technology. The feature point recognition unit identifies a plurality of facial feature points in the face position area according to a standard face model and determines their position vectors. The standard face model comprises a plurality of standard facial feature points set on standard face contours. For example, referring to fig. 2, a deep-learning multi-task convolutional neural network performs both face detection and facial feature point extraction. The outputs of the first and second convolutional network stages are sorted by face size, and the face region is quickly located from the face's approximate pixel extent. The third stage outputs the facial feature points (generally 68; more can be output as needed). The 68 feature points are defined by the standard face model, see fig. 3: 17 points along the cheek contour, 5 × 2 points at the eyebrows, 6 × 2 points at the eyes, 4 points on the bridge of the nose, 5 points at the nose tip, and 20 points around the mouth. The facial feature point position vectors are obtained from these output points.
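As a concrete stand-in for this step (the patent's own multi-task CNN is not public), the 68 landmarks can be obtained with dlib's stock face detector and shape predictor; the model file name below is the one dlib distributes and is an assumption here.

```python
import dlib
import numpy as np
from typing import Optional

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_feature_points(gray: np.ndarray) -> Optional[np.ndarray]:
    """Return a (68, 2) array of landmark pixel coordinates, or None
    if no face is found in the 8-bit grayscale frame."""
    faces = detector(gray, 1)          # upsample once to catch small faces
    if not faces:
        return None
    shape = predictor(gray, faces[0])  # landmarks of the first face found
    return np.array([(p.x, p.y) for p in shape.parts()])
```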
Specifically, the facial expression recognition module comprises a facial action extraction unit and a facial expression recognition unit. The facial action extraction unit extracts, from the facial feature point position vectors, facial action parameter vectors for each part of the face: an eyebrow action parameter vector, an eye action parameter vector, a cheek action parameter vector, a mouth action parameter vector, a nose action parameter vector and a palate action parameter vector. For example, each eyebrow carries 5 feature points, from which action parameter vectors describing the eyebrow's overall shape are derived, namely the vectors between adjacent feature points: between the 18th and 19th points, the 19th and 20th, the 20th and 21st, and the 21st and 22nd. The mouth carries 20 feature points, 10 on the upper lip and 10 on the lower lip, yielding action parameter vectors for the overall shapes of the upper and lower arcs of each lip, again as vectors between adjacent points; taking the upper lip as an example: between the 49th and 50th points, the 50th and 51st, the 51st and 52nd, the 52nd and 53rd, the 53rd and 54th, the 61st and 62nd, the 62nd and 63rd, the 63rd and 64th, and the 64th and 65th. The nose carries 9 feature points, giving an action parameter vector for the overall shape of the nose bridge and one for the nose tip, again as vectors between adjacent points. The action parameter vectors for the overall shapes of the other parts are likewise determined from the contour formed by their feature points (referring to the numbering in fig. 3, connecting consecutively numbered feature points of the same part roughly traces that part's contour).
The facial expression recognition unit compares the facial action parameter vectors with facial expression judgment conditions to recognize the facial expression information of each facial part. The facial expression judgment conditions are likewise determined from the standard facial feature points of the standard face model. For example, the 5 feature points at an eyebrow have a specific standard action parameter vector when representing a raised eyebrow; another when representing a drooping eyebrow (see 111 in fig. 4); and others when representing a bent or creased eyebrow. The action vectors recognized from the patient in real time are compared with these standard action parameter vectors, and when they approximately match, the facial expression information is determined to contain raised, drooping, bent or creased eyebrows accordingly. Alternatively, the judgment conditions can be expressed as relations among several vectors: if the vectors sit low at both sides and high in the middle, the facial expression information contains a bent eyebrow; if high on the side near the nose bridge and low on the side near the ear, a drooping eyebrow; if low near the nose bridge and high near the ear, a raised eyebrow; and if the vectors are nearly level while the distance between the feature points at the two ends of the eyebrow shortens, or a high-low-high pattern appears, a creased eyebrow. Similarly, the 20 feature points at the lips have specific standard action parameter vectors representing a raised upper lip (see 114 in fig. 4), tightly closed lips, a downturned (skimmed) mouth (see 117 in fig. 4) and raised mouth corners; when the real-time action vectors approximately match one of them, the facial expression information is determined to contain the corresponding lip state.

Alternatively: if the vector positions are higher than in the standard open-and-closed lip state, the facial expression information contains a raised lip; if the distance between the vectors of the upper and lower arcs of the upper lip is shorter than in the open-and-closed state, tightly closed lips; if the vectors along the upper arc of the upper lip present a low-high-low-high-low pattern, a downturned (skimmed) mouth; and if the vectors at the two sides of the lips sit higher than the vector in the middle, raised mouth corners. Likewise, the 9 feature points at the nose have specific standard action parameter vectors for states such as a wrinkled nose or flared nostrils; when the real-time action vectors approximately match them, the facial expression information is determined to contain a wrinkled nose (see 116 in fig. 4) or flared nostrils. Alternatively, if the vector distance at the nose bridge is shorter than in the standard nose's normal state, the facial expression information contains a wrinkled nose; if the vector distance at the nose tip is larger than in the normal state, flared nostrils.

Beyond the expressions above, the eyes also admit eye closure (see 112 in fig. 4), tightened eyelids (see 115 in fig. 4), open eyes and so on; the cheeks admit cheek raising (see 113 in fig. 4), maxilla raising (see 116 in fig. 4), mandible retraction and so on. All can be recognized in the manner described: by comparing real-time action vectors with standard action parameter vectors, or by judging position, distance and length relations of the real-time action vectors.
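A minimal sketch of one of the vector-form judgment conditions above — the eyebrow rules — using only the relative heights of the five eyebrow landmarks; the tolerance value and the point ordering are assumptions.

```python
import numpy as np

TOL = 2.0  # pixel tolerance for "approximately level" (assumed value)

def eyebrow_state(pts: np.ndarray) -> str:
    """Classify one eyebrow from its 5 landmarks, ordered from the ear
    side to the nose-bridge side (an assumed ordering). Image y grows
    downward, so a smaller y means a higher point on the face."""
    y_ear, y_mid, y_bridge = pts[0, 1], pts[2, 1], pts[4, 1]
    if y_mid < min(y_ear, y_bridge) - TOL:
        return "bent"        # both sides low, middle high
    if y_bridge > y_ear + TOL:
        return "raised"      # low near the nose bridge, high near the ear
    if y_bridge < y_ear - TOL:
        return "drooping"    # high near the nose bridge, low near the ear
    # nearly level: a crease shows as shortened end-to-end point spacing
    return "level-or-creased"
```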
Specifically, the head posture recognition module comprises a three-dimensional modeling unit and a head posture recognition unit. The three-dimensional modeling unit converts the facial feature point position vectors into three-dimensional coordinates in a three-dimensional world coordinate system and in the image-collector coordinate system, constructing the three-dimensional relation of formula (1):

$$ \begin{bmatrix} X_c^{i} \\ Y_c^{i} \\ Z_c^{i} \end{bmatrix} = R \begin{bmatrix} X_w^{i} \\ Y_w^{i} \\ Z_w^{i} \end{bmatrix} + T \qquad \text{(1)} $$

and solves the relation between image coordinates and world coordinates, formula (2), to obtain the rotation matrix R and the translation matrix T:

$$ Z_c^{i} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K\,[\,R \mid T\,] \begin{bmatrix} X_w^{i} \\ Y_w^{i} \\ Z_w^{i} \\ 1 \end{bmatrix} \qquad \text{(2)} $$

where $(X_w^{i}, Y_w^{i}, Z_w^{i})$ are the three-dimensional coordinates of facial feature point $P_i$ in the world coordinate system, $(X_c^{i}, Y_c^{i}, Z_c^{i})$ its three-dimensional coordinates in the image-collector coordinate system, $(x_i, y_i)$ its physical coordinates on the imaging plane, $(u_i, v_i)$ its pixel coordinates on the imaging plane, $i$ a natural number, and $K$ the camera intrinsic matrix. The rotation matrix R represents the face's angles in three directions, comprising the head raising/lowering angle, the head shaking angle and the horizontal deflection angle.

The head posture recognition unit obtains the optimal angle estimate by fitting with the random sample consensus (RANSAC) algorithm and identifies the head posture information.
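In OpenCV terms this step corresponds to a perspective-n-point solve with RANSAC. The sketch below assumes a generic 6-point 3D face model and rough camera intrinsics (in practice these come from calibration), and the mapping of the extracted Euler angles onto the text's three head angles is likewise an assumption.

```python
import cv2
import numpy as np

MODEL_3D = np.array([          # rough 3D face model for 6 landmarks (mm)
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -63.6, -12.5),       # chin
    (-43.3, 32.7, -26.0),      # left eye outer corner
    (43.3, 32.7, -26.0),       # right eye outer corner
    (-28.9, -28.9, -24.1),     # left mouth corner
    (28.9, -28.9, -24.1),      # right mouth corner
], dtype=np.float64)

def head_pose(pts_2d: np.ndarray, frame_w: int, frame_h: int):
    """pts_2d: (6, 2) pixel coordinates of the six landmarks above.
    Returns (pitch, yaw, roll) in degrees, or None if the solve fails."""
    focal = float(frame_w)                     # crude intrinsics guess
    K = np.array([[focal, 0, frame_w / 2.0],
                  [0, focal, frame_h / 2.0],
                  [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        MODEL_3D, pts_2d.astype(np.float64), K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                 # rotation matrix R
    sy = float(np.hypot(R[0, 0], R[1, 0]))
    pitch = float(np.degrees(np.arctan2(R[2, 1], R[2, 2])))  # raise/lower
    yaw = float(np.degrees(np.arctan2(-R[2, 0], sy)))        # horizontal
    roll = float(np.degrees(np.arctan2(R[1, 0], R[0, 0])))   # in-plane tilt
    return pitch, yaw, roll
```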
Specifically, the human body posture recognition module comprises a human body joint recognition unit and a human body posture recognition unit. The human body joint recognition unit identifies a human body position area from the video data and uses a deep convolutional neural network to identify the key feature points of the joints and limbs within that area: the network first identifies the body position area, then recognizes the joint and limb key points from the body's 3D pose in real time. Fig. 5 shows the structure of a MobileNet posture recognition network; the points in fig. 6 are the recognized skeleton points, i.e., the key feature points of the joints and limbs, 18 in all. The multi-layer convolutional network processes the input video image and outputs the recognition result. The human body posture recognition unit uses the deep convolutional neural network to obtain a plurality of joint vectors of the body, calculates their bending angles, compares them with the human body posture collation conditions, and identifies the human body posture information; the collation conditions cover lying on the bed, sitting on the bed, standing, and sitting on a chair.
Human body posture recognition is described concretely with fig. 6. Draw a straight line along the lengthwise edge of the bed. With the waist point 121 as vertex, let θ₁ be the angle between the neck point 120 and the knee point 122; with the knee point 122 as vertex, let θ₂ be the angle between the waist point 121 and the foot point 123; and let θ₃ be the angle between the line through the neck point 120 and the foot point 123 and the bed line. Then lying on the bed requires θ₁ = 180° ± δ and θ₃ = 0° ± δ; standing requires θ₁ = 180° ± δ and θ₃ = 90° ± 2δ; sitting on the bed requires θ₁ = 90° ± δ and θ₂ = 180° ± δ; and sitting on a chair requires θ₁ = 90° ± δ and θ₂ = 90° ± δ, where δ is the allowed deviation angle and may be 15°. These angle-range conditions are the human body posture collation conditions for lying on the bed, sitting on the bed, standing, and sitting on a chair. By computing the corresponding θ₁, θ₂ and θ₃ in real time and comparing each against the angle-range conditions, the human body posture can be judged.
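The angle rules translate directly into code. The sketch below assumes 2D keypoints and a bed direction vector supplied from the drawn bed line; everything else follows the conditions just stated.

```python
import numpy as np

DELTA = 15.0  # allowed deviation angle from the text (degrees)

def angle(origin, a, b):
    """Angle (degrees) at `origin` between vectors origin->a and origin->b."""
    u = np.asarray(a, float) - np.asarray(origin, float)
    v = np.asarray(b, float) - np.asarray(origin, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def line_angle(p, q, bed_dir):
    """Angle (degrees) between line p->q and the bed's lengthwise direction."""
    w = np.asarray(q, float) - np.asarray(p, float)
    d = np.asarray(bed_dir, float)
    cos = abs(np.dot(w, d)) / (np.linalg.norm(w) * np.linalg.norm(d))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

def posture(neck, waist, knee, foot, bed_dir=(1.0, 0.0)):
    t1 = angle(waist, neck, knee)        # theta1 at the waist (point 121)
    t2 = angle(knee, waist, foot)        # theta2 at the knee (point 122)
    t3 = line_angle(neck, foot, bed_dir) # theta3 of the neck-foot line
    if abs(t1 - 180) <= DELTA and t3 <= DELTA:
        return "lying-on-bed"
    if abs(t1 - 180) <= DELTA and abs(t3 - 90) <= 2 * DELTA:
        return "standing"
    if abs(t1 - 90) <= DELTA and abs(t2 - 180) <= DELTA:
        return "sitting-on-bed"
    if abs(t1 - 90) <= DELTA and abs(t2 - 90) <= DELTA:
        return "sitting-on-chair"
    return "unknown"
```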
Specifically, the activity evaluation module comprises an activity amount evaluation unit and an activity capability evaluation unit. The activity amount evaluation unit monitors the statistical time for which the patient holds each recognized posture and the amount of movement when the patient changes from one posture to another within a given period. From the posture information produced by the human body posture recognition module, the patient's posture in one period and in the next can be confirmed. The movement amount is determined from the posture change: lying on the bed to sitting on the bed counts as roughly 0.4 m of movement; sitting on the bed to standing, roughly 0.8 m; standing to sitting on a chair, roughly 1 m. A patient's movement in the ICU generally follows the sequence lying, sitting on the bed, standing, sitting on a chair; when several movements are completed, their amounts are summed to give the total movement. The activity capability evaluation unit judges the patient's mobility from the statistical time, the movement amount and the heart rate data among the physiological data. For example, the movement amounts within the statistical time are summed into a total, and the mean heart rate over that time shows whether the patient maintained a normal heart rate for that amount of movement. If so, the patient's mobility is good and no extra alarm is needed; if not, an alarm prompt is sent to the nursing end for care, or a prompt device in the system (sound or display) warns the patient to pause and rest.
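A bookkeeping sketch of the activity amount evaluation: hold time per posture and movement credited per transition. The 0.4 m, 0.8 m and 1 m figures are the text's; the class layout and the crediting of reverse transitions are assumptions.

```python
import time

MOVE_AMOUNT = {                       # metres credited per posture change
    ("lying-on-bed", "sitting-on-bed"): 0.4,
    ("sitting-on-bed", "standing"): 0.8,
    ("standing", "sitting-on-chair"): 1.0,
}
# assume reverse transitions credit the same amount of movement
MOVE_AMOUNT.update({(b, a): d for (a, b), d in list(MOVE_AMOUNT.items())})

class ActivityTracker:
    """Tracks hold time per posture and total movement in the period."""
    def __init__(self):
        self.posture = None
        self.since = time.time()
        self.hold_time = {}           # seconds spent in each posture
        self.total_movement = 0.0     # metres accumulated this period

    def update(self, posture: str) -> dict:
        now = time.time()
        if self.posture is not None:
            self.hold_time[self.posture] = (
                self.hold_time.get(self.posture, 0.0) + now - self.since)
            self.total_movement += MOVE_AMOUNT.get(
                (self.posture, posture), 0.0)
        self.posture, self.since = posture, now
        return {"hold_time": dict(self.hold_time),
                "movement_m": self.total_movement}
```

The activity capability judgment then pairs `total_movement` with the mean heart rate over the same window, as described above.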
Specifically, the monitoring decision module comprises a pain nursing unit, a head-shaking nursing unit and a comprehensive decision unit. The pain nursing unit compares the facial expression information of each facial part with a pain-expression judgment condition to identify whether the patient shows a pain expression; if not, no nursing instruction is sent to the nursing end; if so, it further checks whether the duration of the pain expression exceeds the pain threshold time, or whether the frequency of pain expressions within a given period exceeds the pain threshold frequency, and sends a nursing instruction to the nursing end only when either condition is met. Fig. 4 shows a schematic of a pain expression. In pain, the face typically presents drooping eyebrows, closed eyes, raised cheeks, a raised upper lip, tightened eyelids, a wrinkled nose or a downturned mouth. The pain-expression judgment condition may, for example, require four states simultaneously — drooping eyebrows, closed eyes, tightened eyelids and a downturned mouth — before a pain expression is recognized. Alternatively, it may require only one of the states in fig. 4, but such single-state recognition risks false identification. The judgment condition can be set as needed, and the recognized states need not be limited to one. The head-shaking nursing unit identifies whether the frequency of the head-shaking posture in the head posture information within a given period exceeds the head-shaking threshold frequency; if so, it sends a nursing instruction to the nursing end, otherwise none. The head-shaking posture is determined from the shaking angle in the head posture information; the occurrences within the period are counted and the actual shaking frequency computed. With a one-minute counting window and a shaking threshold of 2 Hz, exceeding that frequency calls for urgent care by medical staff. The comprehensive decision unit comprehensively evaluates the patient's state from the facial expression information, head posture information, activity data, physiological data and vital sign data, and sends a comprehensive nursing instruction to the nursing end accordingly. It can be realized with a two-layer convolutional neural network taking those five kinds of data as input parameters: the network contains 3 × 2 and 3 × 4 convolutional layers and a 1 × 4 fully connected layer, and a final Softmax layer outputs the probability of each patient-condition grade, the grade with the highest probability giving the condition. The condition may be coded 0 for healthy, 1 for mild, 2 for moderate and 3 for critical.
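The single-signal rules reduce to a pair of threshold checks, sketched below. The 2 Hz head-shake figure, the one-minute window and the four-state pain condition come from the text; the remaining threshold values and state names are assumptions.

```python
PAIN_STATES = {"eyebrow-drooping", "eye-closed",
               "eyelid-tight", "mouth-downturned"}
PAIN_TIME_S = 10.0        # pain threshold time (assumed value)
PAIN_FREQ_HZ = 0.05       # pain threshold frequency (assumed value)
SHAKE_FREQ_HZ = 2.0       # head-shake threshold frequency (from the text)
WINDOW_S = 60.0           # one-minute counting window (from the text)

def pain_instruction(expressions: set, pain_duration_s: float,
                     pain_events: int) -> bool:
    """Nursing instruction if the four-state pain expression is present
    and either lasts or recurs beyond its thresholds."""
    if not PAIN_STATES <= expressions:
        return False
    return (pain_duration_s > PAIN_TIME_S
            or pain_events / WINDOW_S > PAIN_FREQ_HZ)

def shake_instruction(shake_events: int) -> bool:
    """Nursing instruction if head shaking exceeds 2 Hz in the window."""
    return shake_events / WINDOW_S > SHAKE_FREQ_HZ
```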
For healthy patients, the comprehensive nursing instruction is routine daily nursing; for mild patients, mild daily nursing; for moderate patients, moderate daily nursing; for critical patients, intensive nursing, such as rounds every 30 minutes to 1 hour. These follow the nursing principles of a hospital ICU.
The system also comprises a light source emitter arranged in the ICU to provide a light source that helps the image collector work in dark environments. The emitter can use infrared light invisible to the human eye, so that when the lights are dimmed or turned off at night the image collector can still collect video data in the ICU normally.
The system of the invention further comprises a database storing the facial expression information, head posture information, activity data, physiological data and vital sign data of the corresponding patient. The patient's personal information can be determined from the ICU room number and medical record number, and the data monitored during the ICU stay are linked to the patient for later medical-record queries. Alternatively, the face detection module can perform face recognition and memorization, recording the facial form and comparing it against patient photographs stored in the database to identify the patient; the data monitored during the ICU stay are then stored against that patient for medical-record queries in subsequent diagnosis and treatment.
The system also comprises a plurality of accelerometers worn at the patient's joints to detect limb movements. For example, four accelerometers may be used, two at the wrists and two at the ankles, with other joints equipped as needed. The monitoring decision module further comprises a danger nursing unit that identifies the joint movement angle; when the angle exceeds the safe-movement-angle threshold, it judges that the patient has fallen or is moving violently and sends a nursing instruction to the nursing end, so that medical personnel can rush to the ICU immediately for emergency treatment.
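A minimal sketch of the danger-care check; the safe-movement-angle value and the per-joint dictionary layout are assumptions.

```python
SAFE_ANGLE_DEG = 150.0    # safe movement-angle threshold (assumed value)

def danger_instruction(joint_angles_deg: dict) -> bool:
    """True if any worn joint's movement angle exceeds the safe threshold,
    e.g. {'left-wrist': 40.0, 'right-ankle': 165.0} -> True."""
    return any(a > SAFE_ANGLE_DEG for a in joint_angles_deg.values())
```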
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the present invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.
Claims (10)
1. An ICU intelligent nursing system based on patient video identification comprises an image collector, an intelligent monitoring device, a body index monitoring device and a nursing end; the image collector is used for collecting video data in the ICU; the body index monitoring device is used for monitoring physiological data and vital sign data of a patient; the intelligent monitoring device is connected to the image collector, the body index monitoring device and the nursing end respectively; the system is characterized in that the intelligent monitoring device comprises:
the face detection module is used for identifying a face position area from video data and obtaining a plurality of facial feature point position vectors in the face position area;
the facial expression recognition module is used for recognizing facial expression information based on the position vectors of the facial feature points and the facial expression comparison vectors;
the head posture identification module is used for converting the facial feature point position vectors into three-dimensional coordinates and identifying head posture information using a random sample consensus (RANSAC) algorithm;
the human body posture identification module is used for identifying a human body position area from the video data, obtaining a plurality of joint vectors in the human body position area, and identifying human body posture information based on the plurality of joint vectors and a human body posture comparison condition;
the activity evaluation module is used for evaluating the activity condition of the patient based on the human body posture information obtained during the monitoring period, so as to obtain activity data;
the monitoring decision module is used for judging, according to the facial expression information or the head posture information, whether to send a nursing instruction to the nursing end; and for comprehensively evaluating the state of the patient according to the facial expression information, the head posture information, the activity data, the physiological data and the vital sign data, and sending a comprehensive nursing instruction to the nursing end according to the state of the patient.
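By way of illustration only, the cooperation of the modules recited in claim 1 might be wired as below; every class and method name is hypothetical and forms no part of the claim.

```python
class IntelligentMonitoringDevice:
    """Hypothetical wiring of the modules recited in claim 1."""

    def __init__(self, face_detector, expression_recognizer, head_pose_recognizer,
                 body_pose_recognizer, activity_assessor, decision_maker, nursing_end):
        self.face_detector = face_detector
        self.expression_recognizer = expression_recognizer
        self.head_pose_recognizer = head_pose_recognizer
        self.body_pose_recognizer = body_pose_recognizer
        self.activity_assessor = activity_assessor
        self.decision_maker = decision_maker
        self.nursing_end = nursing_end

    def process_frame(self, frame, physiological_data, vital_signs):
        # Face feature points feed both expression and head pose recognition.
        landmarks = self.face_detector.detect(frame)
        expression = self.expression_recognizer.recognize(landmarks)
        head_pose = self.head_pose_recognizer.recognize(landmarks)
        # Body posture feeds the activity evaluation over the monitoring period.
        body_pose = self.body_pose_recognizer.recognize(frame)
        activity = self.activity_assessor.update(body_pose, physiological_data)
        # The decision module fuses all signals and may emit a nursing instruction.
        instruction = self.decision_maker.decide(
            expression, head_pose, activity, physiological_data, vital_signs)
        if instruction is not None:
            self.nursing_end.send(instruction)
```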
2. The ICU intelligent nursing system based on patient video recognition of claim 1, wherein the face detection module comprises:
the face recognition unit is used for recognizing a face position area from the video data by utilizing computer vision and deep learning technology;
the characteristic point identification unit is used for identifying a plurality of facial characteristic points of the face position area according to the standard face model and determining a plurality of facial characteristic point position vectors;
the standard face model comprises a plurality of standard face feature points set based on standard face contours.
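The claim does not tie the feature-point extraction to any particular library. As one possible realization, a minimal sketch using dlib's pretrained 68-point shape predictor (the standard shape_predictor_68_face_landmarks.dat model, assumed to be available locally) could look like this:

```python
import dlib

# dlib's pretrained face detector and its standard 68-point landmark model;
# used here purely as one possible realization of claim 2.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_feature_points(gray_frame):
    """Return a list of (x, y) landmark position vectors for the first
    detected face position area, or None when no face is found."""
    faces = detector(gray_frame)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```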
3. The ICU intelligent nursing system based on patient video recognition of claim 1, wherein the facial expression recognition module comprises:
the facial motion extraction unit is used for extracting facial motion parameter vectors of all parts of the face from the position vectors of the plurality of facial feature points; the facial motion parameter vector comprises an eyebrow motion parameter vector, an eye motion parameter vector, a cheek motion parameter vector, a mouth motion parameter vector, a nose motion parameter vector and a palate motion parameter vector;
and the facial expression recognition unit is used for contrasting the facial action parameter vector with the facial expression judgment condition and recognizing the facial expression information of each part of the face.
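A minimal sketch of how facial action parameters might be derived and checked against judgment conditions follows, assuming dlib's 68-point landmark indexing and purely illustrative threshold values (neither is specified by the claim):

```python
import numpy as np

def action_parameters(pts):
    """Derive a few facial action parameters from 68-point landmarks
    (dlib indexing assumed: 17-26 eyebrows, 36-47 eyes, 60-67 inner lips)."""
    pts = np.asarray(pts, dtype=float)
    face_h = np.linalg.norm(pts[8] - pts[27])                # chin to nose bridge, for scale
    brow_eye = np.linalg.norm(pts[19] - pts[37]) / face_h    # eyebrow-to-eyelid distance
    eye_open = np.linalg.norm(pts[37] - pts[41]) / face_h    # eyelid aperture
    mouth_open = np.linalg.norm(pts[62] - pts[66]) / face_h  # inner-lip gap
    return {"brow_eye": brow_eye, "eye_open": eye_open, "mouth_open": mouth_open}

def matches_expression(params, judgment_condition):
    """Compare action parameters with a judgment condition, e.g.
    {'brow_eye': ('<', 0.18), 'eye_open': ('<', 0.04)} (illustrative values)."""
    ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b}
    return all(ops[op](params[name], ref)
               for name, (op, ref) in judgment_condition.items())
```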
4. The system of claim 1, wherein the head posture identification module comprises:
the three-dimensional modeling unit is used for converting the position vectors of the facial feature points into three-dimensional coordinates in a three-dimensional world coordinate system and three-dimensional coordinates in an image collector coordinate system, and solving a relational formula of the image coordinates and the world coordinates:
$$ Z_i^c \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_i^w \\ Y_i^w \\ Z_i^w \\ 1 \end{bmatrix} $$

to obtain a rotation matrix R and a translation matrix T;

wherein $(X_i^w, Y_i^w, Z_i^w)$ are the three-dimensional coordinates of a facial feature point $P_i$ in the three-dimensional world coordinate system and $(X_i^c, Y_i^c, Z_i^c)$ are the three-dimensional coordinates of $P_i$ in the image collector coordinate system; $(x_i, y_i)$ are the physical coordinates of $P_i$ on the imaging plane, $(u_i, v_i)$ are the pixel coordinates of $P_i$ on the imaging plane, and $i$ is a natural number; $K$ is the intrinsic-parameter matrix of the image collector, which maps the physical coordinates $(x_i, y_i)$ to the pixel coordinates $(u_i, v_i)$; the rotation matrix R represents the angles of the face in three directions, including a head raising and lowering angle, a head shaking angle and a horizontal deflection angle;
and the head posture identification unit is used for obtaining the optimal angle estimation value by utilizing the fitting of a random sampling consistency algorithm and identifying the head posture information.
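One way to realize the random sampling consistency (RANSAC) fitting of claim 4 is OpenCV's solvePnPRansac. The sketch below assumes a calibrated image collector (camera_matrix, dist_coeffs) and a standard face model supplying the 3D feature points; the Euler-angle convention is one of several valid choices, not one fixed by the claim.

```python
import cv2
import numpy as np

def head_pose_angles(model_points, image_points, camera_matrix, dist_coeffs):
    """Fit R and T with RANSAC and return (pitch, yaw, roll) in degrees.

    model_points: Nx3 float array of standard-face-model feature points;
    image_points: corresponding Nx2 float array of pixel coordinates.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_points, image_points, camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # the rotation matrix R of the claim
    # Euler angles extracted from R (x-y-z convention).
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))                    # raising/lowering
    yaw = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))  # horizontal deflection
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))                     # head shaking/tilt
    return pitch, yaw, roll
```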
5. The ICU intelligent nursing system based on patient video recognition of claim 1, wherein the human body posture identification module comprises:
the human body joint identification unit is used for identifying a human body position area from the video data and identifying key feature points of joints and limbs in the human body position area by utilizing a deep convolutional neural network;
the human body posture identification unit is used for obtaining a plurality of joint vectors of the human body by using the deep convolutional neural network, calculating the bending angles of the joint vectors, and comparing the bending angles with the human body posture comparison conditions to identify human body posture information; the human body posture comparison conditions comprise comparison conditions for lying on a bed, sitting up in bed, standing, and sitting on a chair.
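As an illustration of the bending-angle comparison, the sketch below computes the angle at a joint from three keypoints and matches it against posture comparison conditions; the specific angle thresholds and labels are assumptions, not values from the disclosure.

```python
import numpy as np

def bending_angle(a, b, c):
    """Angle at joint b (degrees) formed by the joint vectors b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

def classify_posture(shoulder, hip, knee, ankle):
    """Match hip/knee bending angles against illustrative comparison conditions."""
    hip_angle = bending_angle(shoulder, hip, knee)
    knee_angle = bending_angle(hip, knee, ankle)
    if hip_angle > 150 and knee_angle > 150:
        return "lying or standing"   # disambiguate with trunk orientation in practice
    if hip_angle < 120 and knee_angle > 150:
        return "sitting up in bed"   # trunk bent, legs extended
    if hip_angle < 120 and knee_angle < 120:
        return "sitting on chair"
    return "transitional"
```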
6. The system of claim 1, wherein the activity evaluation module comprises:
an activity evaluation unit for counting the statistical time during which the patient maintains each recognized human body posture, and for monitoring the amount of movement as the patient changes from one human body posture to another within a certain period;
and the activity ability evaluation unit is used for judging the activity ability of the patient based on the statistical time, the movement amount and the heart rate data in the physiological data.
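A minimal sketch of the statistics described in claim 6, assuming one recognized posture label per sampling period:

```python
from collections import Counter

def summarize_activity(posture_sequence, sample_period_s=1.0):
    """Accumulate time per posture and count posture changes over a
    monitoring period; posture_sequence holds one label per sample."""
    time_in_posture = Counter()
    transitions = 0
    previous = None
    for posture in posture_sequence:
        time_in_posture[posture] += sample_period_s
        if previous is not None and posture != previous:
            transitions += 1
        previous = posture
    return dict(time_in_posture), transitions
```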
7. The ICU intelligent nursing system based on patient video identification as claimed in claim 1, wherein the monitoring decision module comprises:
the pain nursing unit is used for comparing the facial expression information of each part of the face with a pain expression judgment condition and identifying whether the patient shows a pain expression; if not, no nursing instruction is sent to the nursing end; if so, identifying whether the duration of the pain expression is greater than the pain threshold time, or whether the frequency of pain expressions within a certain period is greater than the pain threshold frequency; if either condition is met, a nursing instruction is sent to the nursing end, and otherwise no instruction is sent;
the head shaking nursing unit is used for identifying whether the frequency of the head shaking gesture in the head posture information within a certain period is greater than the head shaking threshold frequency; if so, a nursing instruction is sent to the nursing end, and otherwise no instruction is sent;
and the comprehensive decision unit is used for comprehensively evaluating the patient's condition according to the facial expression information, the head posture information, the activity data, the physiological data and the vital sign data, and sending a comprehensive nursing instruction to the nursing end according to the patient's condition.
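The duration/frequency decision of the pain nursing unit could be sketched as follows; the threshold values and the one-hour window are illustrative placeholders, since the claim leaves them open.

```python
def pain_nursing_decision(pain_events, now_s, duration_s,
                          threshold_time_s=30.0, threshold_freq=5,
                          window_s=3600.0):
    """Decide whether to send a nursing instruction, per claim 7.

    pain_events: timestamps (s) of previously identified pain expressions;
    duration_s: how long the current pain expression has persisted.
    """
    # Condition 1: the pain expression has lasted longer than the threshold time.
    if duration_s > threshold_time_s:
        return True
    # Condition 2: pain expressions recur too often within the recent window.
    recent = [t for t in pain_events if now_s - t <= window_s]
    return len(recent) > threshold_freq
```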
8. The system of claim 1, further comprising a light source emitter disposed in the ICU for providing a light source to assist the image collector in operating in a dark environment.
9. The system of claim 1, further comprising a database for storing facial expression information, head pose information, activity data, physiological data, and vital sign data corresponding to the patient.
10. The ICU intelligent nursing system based on patient video identification as claimed in claim 7, further comprising a plurality of accelerometers worn on the joints of the patient for detecting the limb movements of the patient, wherein the monitoring decision module further comprises a dangerous nursing unit for identifying the joint movement angle and, when the joint movement angle is greater than the safe movement angle threshold, judging that the patient has fallen or is moving violently and sending a nursing instruction to the nursing end.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110682391.XA CN113257440A (en) | 2021-06-21 | 2021-06-21 | ICU intelligent nursing system based on patient video identification |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110682391.XA CN113257440A (en) | 2021-06-21 | 2021-06-21 | ICU intelligent nursing system based on patient video identification |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113257440A true CN113257440A (en) | 2021-08-13 |
Family
ID=77188672
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110682391.XA Pending CN113257440A (en) | 2021-06-21 | 2021-06-21 | ICU intelligent nursing system based on patient video identification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113257440A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106355147A (en) * | 2016-08-26 | 2017-01-25 | 张艳 | Acquiring method and detecting method of live face head pose detection regression apparatus |
CN109806113A (en) * | 2019-03-14 | 2019-05-28 | 郑州大学 | A kind of ward ICU horizontal lower limb rehabilitation intelligent interaction robot group system based on ad hoc network navigation |
US20210121121A1 (en) * | 2019-10-28 | 2021-04-29 | International Business Machines Corporation | Parkinson's disease treatment adjustment and rehabilitation therapy based on analysis of adaptive games |
Non-Patent Citations (6)
Title |
---|
ZHANG, XIN et al.: "Analysis of the Influence of Patient Facial Deflection Angle on Facial Paralysis Assessment Based on Computer Vision", Shenzhen Journal of Integrated Traditional Chinese and Western Medicine *
XU, JIANPING et al.: "Disease Observation and Nursing Skills Series: Disease Observation and Nursing Skills in Rheumatology and Immunology", 30 April 2019, Beijing: China Medical Science and Technology Press *
LIN, QIANG et al.: "Behavior Recognition and Intelligent Computing", 30 November 2016, Xi'an: Xidian University Press *
WANG, CHONGREN et al.: "Records of Medical Learning: Collected Academic Experience of Wang Shixiang of Tianjin", 31 January 2012, Beijing: China Press of Traditional Chinese Medicine *
DENG, KAIFA: "Artificial Intelligence and Art Design", 30 September 2019, Shanghai: East China University of Science and Technology Press *
GUO, JIERONG et al.: "Experimental Course on Optoelectronic Information Technology", 31 December 2015, Xi'an: Xidian University Press *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Sathyanarayana et al. | Vision-based patient monitoring: a comprehensive review of algorithms and technologies | |
Zhang et al. | Activity monitoring using a smart phone's accelerometer with hierarchical classification | |
US10213152B2 (en) | System and method for real-time measurement of sleep quality | |
Wang et al. | Unconstrained video monitoring of breathing behavior and application to diagnosis of sleep apnea | |
US7420472B2 (en) | Patient monitoring apparatus | |
US7502498B2 (en) | Patient monitoring apparatus | |
CN109558865A (en) | A kind of abnormal state detection method to the special caregiver of need based on human body key point | |
CN109394247B (en) | Multi-feature fusion diagnosis user emotion monitoring method | |
Zamzmi et al. | An approach for automated multimodal analysis of infants' pain | |
WO2017193497A1 (en) | Fusion model-based intellectualized health management server and system, and control method therefor | |
WO2001088836A1 (en) | Method and apparatus for remote medical monitoring incorporating video processing and system of motor tasks | |
CN110477925A (en) | A kind of fall detection for home for the aged old man and method for early warning and system | |
CN105792732A (en) | Apnea safety control | |
CN109863561A (en) | Equipment, system and method for patient-monitoring to predict and prevent bed from falling | |
CN108784669A (en) | A kind of contactless heartbeat and disordered breathing monitor system and method | |
CN112022096A (en) | Sleep state monitoring method and device | |
CN106205001A (en) | A kind of hospital security warning system | |
Werth et al. | Deep learning approach for ECG-based automatic sleep state classification in preterm infants | |
Pogorelc et al. | Home-based health monitoring of the elderly through gait recognition | |
CN113257440A (en) | ICU intelligent nursing system based on patient video identification | |
CN113143223A (en) | Edge artificial intelligence infant monitoring method | |
Yi et al. | Home interactive elderly care two-way video healthcare system design | |
Ahmed et al. | Internet of health things driven deep learning-based system for non-invasive patient discomfort detection using time frame rules and pairwise keypoints distance feature | |
Lyu et al. | Skeleton-Based Sleep Posture Recognition with BP Neural Network | |
CN113762085B (en) | Artificial intelligence-based infant incubator system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||