CN114601454A - Method for monitoring the bedridden posture of a patient
- Publication number: CN114601454A
- Application number: CN202210243496.XA
- Authority: CN (China)
- Prior art keywords: patient, posture, skeleton, joint, human
- Legal status: Pending
Classifications
- A61B5/1116 — Determining posture transitions
- A61B5/7235 — Details of waveform analysis
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267 — Classification of physiological signals or data involving training the classification device
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2431 — Classification techniques relating to multiple classes
- A61B2505/03 — Intensive care
- A61B2505/05 — Surgical care
Abstract
The invention discloses a method for monitoring the bedridden posture of a patient, comprising the following steps: S1) acquiring image information of human posture behavior through a depth camera; S2) extracting human skeleton information from the collected image information to obtain human action skeleton map information; S3) identifying the bedridden posture of the patient according to the human action skeleton map. Step S3 classifies the posture in the relevant images with a classification feature algorithm and compares the classified posture type against the national standard bedridden posture. The monitoring method can accurately identify whether the patient's posture is correct, so that the behavior of postoperative or critically ill patients during bed rest can be automatically and consistently corrected.
Description
Technical Field
The invention relates to a method for recognizing human body postures, in particular to a method for monitoring the lying posture of a patient.
Background
The number of patients who are bedridden for long periods and cannot take care of themselves due to various factors is steadily increasing. According to statistics, a large number of patients in China need rehabilitation and nursing equipment. Paralyzed patients, who spend most of their time in bed, need to be turned over frequently to prevent bedsores (pressure ulcers). At present, nursing beds that can turn a patient are either too simple in function, require the assistance of medical staff and struggle to meet the turning requirement, or are too expensive to popularize. In addition, patients with impaired mobility who are bedridden for long periods inevitably develop psychological problems, manifested as sleep disorders, reduced mental activity, excessive rumination, depression, pessimism, self-blame, loss of appetite, physical decline, delusions and the like, which strongly affect the course and recovery of the disease.
The field of human posture recognition and analysis mainly analyzes and understands individual posture behavior from videos or images in order to judge whether a posture is correct and effective. Human behavior recognition technology is widely applied in fields such as security monitoring and human-computer interaction, but it is not yet common in the medical field. This does not mean that posture and behavior recognition lacks applicability to medical care; on the contrary, with the rapid development of information digitization, human behavior recognition technology has broad prospects in the medical field.
At present, improvements to nursing methods in the medical field are mainly reflected in improved medical equipment and voice control; few methods recognize, monitor and analyze the behavior and posture of a patient during care. Existing improvements mainly enhance the patient's experience and avoid secondary injury during use, but patient behavior data cannot be effectively used afterwards, so a certain amount of data is wasted, improvements at the current stage cannot be accurately evaluated, and it is difficult to provide an effective theoretical basis for improving existing systems and functions. Moreover, the patient's bed-position and behavior data are easily affected by unrelated factors such as clothing, lighting and gauze dressings; without noise-reduction preprocessing of the large volume of image data, it is difficult to accurately identify whether the patient's posture is correct and to standardize behaviors such as sitting, lying and leaning.
Patients and family members in hospital wards constantly hear doctors and nurses emphasize that patients with disturbance of consciousness or paralysis must be turned over and repositioned regularly. However, because family members do not appreciate its importance, a considerable number of patients still develop complications such as pressure sores or hypostatic pneumonia. These sequelae not only cause patients to lose the ability to take care of themselves but also increase the burden on families and society.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for monitoring the bedridden posture of a patient that can accurately identify whether the patient's posture is correct, so that the behavior of postoperative or critically ill patients during bed rest can be automatically and consistently corrected.
The technical scheme adopted by the invention for solving the technical problems is to provide a method for monitoring the lying posture of a patient, which comprises the following steps: s1) acquiring image information of human posture behaviors through a depth camera; s2) extracting human skeleton information from the collected image information to obtain human action skeleton map information; s3) the patient lying in bed is identified according to the human body action skeleton diagram.
Compared with the prior art, the invention has the following beneficial effects: the method for monitoring the bedridden posture of a patient can accurately identify whether the patient's posture is correct, so that the behavior of postoperative or critically ill patients during bed rest can be automatically and consistently corrected.
Drawings
FIG. 1 is a schematic view of the monitoring process of the present invention for the patient's bed-ridden posture;
FIG. 2 is a diagram illustrating the format of the vectorized standard labels file processed according to the present invention.
Detailed Description
The invention is further described below with reference to the figures and examples.
The invention provides a method for monitoring the bedridden posture of a patient. The method extracts image information of human posture behavior and depth information of the key points of the human skeleton map; the extracted images are compared and fitted against an original database by a posture recognition algorithm, and the extracted data are fitted to a human skeleton map in the cloud; features are extracted from the skeleton map image data to construct a 3D coordinate unit structure map; a classification feature algorithm then classifies the posture in the relevant images; and the classified posture type is compared with the posture relative-position coordinates of the original national standard unit by calculating the matching rate, the three-dimensional angle of skeleton joint deviation, and accurate joint space vector values.
Referring to fig. 1, the method for monitoring the lying posture of a patient provided by the invention comprises the following specific processing steps:
1. data monitoring
Depth cameras are chosen for data acquisition, capturing video and image data from multiple angles. Because the patient lies with the head at the head of the bed, and because obstructions in the ward and indoor lighting can interfere, depth cameras and surveillance cameras are installed in four directions (above the ward ceiling, in front of the bed, to its left and to its right), and depth information is acquired from each of the four directions. The cameras capture multi-directional image information of the patient, and there is no obstacle blocking the line of sight within their range.
2. Data processing
In the process of recognizing and analyzing the patient's bedridden posture, the method focuses on the human skeleton information map. The AlphaPose optimization model can clearly highlight internal human information such as the patient's skeleton during bed rest and is not affected by the indoor environment, such as the patient's clothing, ward decoration, medical equipment and lighting, which reduces recognition interference and improves recognition accuracy.
The video information of human body behaviors and the depth information of key points of the human body skeleton map can be collected through the depth camera. And extracting human skeleton information from the collected video data. Human skeleton information can be extracted through an alpha phase, the alpha phase detects human skeleton joint points in a bottom-up mode, firstly, confidence degree of human body part detection is predicted through a network, positions of human key skeleton points are detected, after the positions of the key skeleton points are obtained, affinity between the parts which are associated with the parts is predicted through the network, finally, the confidence degree and the affinity are analyzed through a greedy algorithm, the key points are connected, and therefore human action skeleton diagram information can be obtained.
The acquired images are converted into a bedridden skeleton map of the patient using the AlphaPose posture recognition algorithm. The skeleton map consists of 25 joint points and 24 skeleton edges and is a natural structural representation of human joints and bones. The raw skeleton data in each frame is always provided as a sequence of vectors, each vector representing the two- or three-dimensional coordinates of the corresponding human joint.
Taking the center of the sickbed as the origin O, the three-dimensional vector offset and offset angle of the human skeleton are calculated to measure how closely the posture matches the standard. Assuming the standard posture offset of the patient is (x1, x2, x3) and the actual patient skeleton offset is (y1, y2, y3), the actual error offset vector is (x1 - y1, x2 - y2, x3 - y3).
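A minimal sketch of this offset calculation, assuming the offsets are already expressed in the bed-centered coordinate system; interpreting the "offset angle" as the angle between the standard and actual offset vectors is an assumption:

```python
import numpy as np

def error_offset(standard_offset, actual_offset):
    """Offset error relative to the bed-center origin O: standard posture offset
    (x1, x2, x3) minus actual skeleton offset (y1, y2, y3), plus the angle
    between the two offset vectors in degrees."""
    s = np.asarray(standard_offset, dtype=float)
    a = np.asarray(actual_offset, dtype=float)
    error_vector = s - a                                   # (x1-y1, x2-y2, x3-y3)
    cos_angle = np.dot(s, a) / (np.linalg.norm(s) * np.linalg.norm(a) + 1e-9)
    offset_angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return error_vector, offset_angle

# e.g. error_offset((0.10, 0.05, 0.30), (0.12, 0.02, 0.25))
```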
3. Posture recognition
Human body target detection in AlphaPose adopts the YOLOv5 algorithm and is organized around a network training module comprising a data acquisition submodule, a data standardization submodule and a model training submodule. The main flow obtains standard example image files through a data interface, and the data can be updated and shared in real time. The network training module covers the preparation and preprocessing of the data sets, training on the processed data labels, and so on.
1) Data acquisition submodule
This submodule mainly supplies the image and video data used to train the posture recognition model. The data come mainly from the following sources: first, the national standard example image files stored in the standard library, whose example posture image data can be classified automatically; second, data stored in real time for testing and comparison when the hospital detects patient behavior in real time.
2) Data standardization submodule
The data standardization submodule vectorizes the national standard example image files stored in the standard library and trains the network with standard txt label files, converting the original image data set files used for training into txt form. The processed vectorized standard labels file is shown in fig. 2: the labels file contains one line per object, and each line is in class, x_center, y_center, width, height format. The box coordinates must be in normalized xywh format (values from 0 to 1). Since the invention only considers a single object type, the patient's bedridden posture, the labels file for each image has only one row and the class is set to 0. The coordinates of the upper-left and lower-right corners of the patient image are extracted from the picture name, and a labels file in txt format is output after a series of operations.
The normalization is calculated as x_center = (x_min + x_max) / (2 * width), y_center = (y_min + y_max) / (2 * height), w = (x_max - x_min) / width and h = (y_max - y_min) / height, where (x_center, y_center) are the coordinates of the center point of the normalized rectangular frame of the detected patient image, width and height are the width and height of the detected image, (x_min, y_min) and (x_max, y_max) are the upper-left and lower-right corner coordinates, and w and h are the width and height of the normalized rectangular box of the detected patient image.
3) Model training submodule
Pictures are randomly imported from a national standard patient posture data set to serve as the data set of the invention, with 70% used as the training set and 30% as the validation set. For larger training sets whose data resemble the pre-training data, the pre-trained weight parameters can be used to initialize the network before training begins. To obtain a better model, the invention trains for 300 epochs and saves the optimal model. An Adam optimizer is selected, the learning rate is decayed as the number of training iterations grows, and the network parameters are continuously adjusted; the main hyper-parameters are weight decay, learning rate, momentum and the like. The learning rate is an indispensable hyper-parameter and is adjusted according to training performance: if it is too low the function converges slowly, and if it is too high it is difficult to reach the global optimum of the objective function. To prevent overfitting, weight decay is applied and a regularization term is included in the cost function. Learning-rate decay prevents the objective function from settling at a local optimum while maintaining the convergence rate.
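A rough sketch of this training loop is given below; the 70/30 split, 300 epochs and Adam optimizer follow the text, while the dataset and model objects, batch size, learning-rate schedule and output file name are placeholder assumptions:

```python
import torch
from torch.utils.data import random_split, DataLoader

def train_posture_model(dataset, model, epochs=300, lr=1e-3, weight_decay=5e-4):
    n_train = int(0.7 * len(dataset))                        # 70% training set
    train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=16)

    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)  # learning-rate decay
    criterion = torch.nn.CrossEntropyLoss()
    best_acc = 0.0

    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        scheduler.step()

        # validate and keep the best-performing model
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in val_loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        acc = correct / max(total, 1)
        if acc > best_acc:
            best_acc = acc
            torch.save(model.state_dict(), "best_posture_model.pt")
```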
4) Posture recognition submodule
Features are extracted from the extracted skeleton map with a graph convolutional neural network. Given a sequence of body-joint coordinates in 2D or 3D form, a naturally connected space-time graph is constructed with the joints as nodes and the human body structure and time as edges; the input to the model is therefore the joint coordinate vectors on the graph nodes. This can be regarded as the graph-based analogue of an image CNN, whose input is formed by pixel vectors located on a 2D image grid. Multiple layers of space-time graph convolution are applied to the input data to generate higher-level feature maps on the graph, which are then classified into the corresponding action class by a standard SoftMax classifier.
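The space-time graph construction can be sketched as follows; the partial edge list, the row-normalized adjacency and the single aggregation-plus-temporal-difference step are simplifying assumptions rather than the exact graph convolution used by the invention:

```python
import numpy as np

NUM_JOINTS = 25
# A few of the 24 bone edges as (parent, child) index pairs; illustrative only.
BONE_EDGES = [(0, 1), (1, 2), (2, 3), (1, 5), (5, 6)]

def build_spatial_adjacency(num_joints=NUM_JOINTS, edges=BONE_EDGES):
    """Row-normalized adjacency of the spatial graph (joints as nodes, bones as
    edges, plus self-loops); temporal edges connect the same joint across frames."""
    A = np.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    deg = A.sum(axis=1)
    return A / deg[:, None]

def st_graph_features(joint_seq, A):
    """joint_seq: (T, V, C) array of joint coordinates over T frames.
    One spatial aggregation step followed by a temporal difference, as a crude
    stand-in for a single layer of space-time graph convolution."""
    joint_seq = np.asarray(joint_seq, dtype=float)
    spatial = np.einsum("vw,twc->tvc", A, joint_seq)           # aggregate neighbors per frame
    temporal = np.diff(spatial, axis=0, prepend=spatial[:1])   # frame-to-frame change
    return np.concatenate([spatial, temporal], axis=-1)
```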
The features of the human skeleton include spatial relative features and motion features; the spatial features are extracted from the joints and the bones. Joints are represented as 3D coordinates and bones as the difference between two joint coordinates. Motion features are generally represented as motion information, including displacement, joint direction, motion velocity, acceleration and the like.
For the spatial relative features, which include calculating the relative vector magnitudes and the error of the calculated joint angles, assume the space vector of a specification file (a training file, i.e., a file in the national standard library) is L_M = (x_m, y_m, z_m), where M is the defined name of each joint and the label is m ∈ {1, 2, 3, ..., M}, and the space vector of the joint detected in real time is L_N = (x_n, y_n, z_n), where N is the defined name of each joint detected in real time and the label is n ∈ {1, 2, 3, ..., N}. The error p between the joint space vector of the patient detected in real time and the standard vector is then calculated. The larger the value of p, the lower the image model matching rate and the farther the patient's posture deviates from the standard posture, so corrective posture feedback must be given in time; otherwise the matching degree is high and no correction is needed. If p exceeds the set threshold of 0.5, the patient's posture deviates too much and alarm processing is required. Otherwise, the data can be automatically stored in the data storage submodule and called as training data for model training, thereby forming a closed-loop detection and storage system.
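A minimal sketch of this comparison is given below; using the Euclidean distance for p is an assumption, since the exact formula is not reproduced here, and the joint dictionaries are illustrative:

```python
import numpy as np

P_THRESHOLD = 0.5   # threshold mentioned in the text

def joint_error(standard_joint, detected_joint):
    """Error p between a standard-library joint vector L_M = (x_m, y_m, z_m)
    and the joint vector detected in real time L_N = (x_n, y_n, z_n).
    Euclidean distance is an assumption; the disclosure's exact formula for p
    is not reproduced here."""
    return float(np.linalg.norm(np.asarray(standard_joint, dtype=float)
                                - np.asarray(detected_joint, dtype=float)))

def check_posture(standard_skeleton, detected_skeleton):
    """Per-joint comparison: alarm if any joint error exceeds the threshold,
    otherwise the sample can be stored as additional training data."""
    alarms = {}
    for name, lm in standard_skeleton.items():
        p = joint_error(lm, detected_skeleton[name])
        if p > P_THRESHOLD:
            alarms[name] = p   # posture deviates too far: feedback and alarm
    return alarms              # empty dict: store sample in the data-storage submodule
```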
The motion features include the motion information of the joints, computed as coordinate differences along the time dimension; similarly, the deformation of a bone is represented as the difference between the vectors of the same bone in successive frames. Formally, the motion of joint v at time t is calculated as the difference between its coordinates in frame t+1 and frame t, and the deformation of the patient's bones is defined analogously. The acceleration of a patient joint is calculated from the instantaneous velocities as (v_t2 - v_t1) / (t2 - t1), where v_t1 is the instantaneous velocity of the patient captured by the camera at time t1 and v_t2 is the instantaneous velocity at time t2. The average movement trend direction of the patient's joint is the direction of the vector AB, where A = (x1, y1, z1) is the coordinate of the starting point of the patient's motion and B = (x2, y2, z2) is the final joint coordinate position.
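The motion features can be sketched as follows; the finite-difference forms for motion, deformation and acceleration, and the normalization of the trend direction, are assumptions consistent with the description above:

```python
import numpy as np

def motion_features(joint_seq, bone_pairs, t1=0, t2=1):
    """joint_seq: (T, V, 3) joint coordinates over T frames.
    bone_pairs: list of (parent, child) joint indices.
    Returns per-frame joint motion, bone deformation, an acceleration estimate
    and the average movement-trend direction AB for each joint."""
    seq = np.asarray(joint_seq, dtype=float)
    motion = np.diff(seq, axis=0)                              # v_{t+1} - v_t per joint
    bones = seq[:, [c for _, c in bone_pairs]] - seq[:, [p for p, _ in bone_pairs]]
    deformation = np.diff(bones, axis=0)                       # same bone, successive frames
    accel = (motion[t2] - motion[t1]) / max(t2 - t1, 1)        # change in instantaneous velocity
    ab = seq[-1] - seq[0]                                      # A = start position, B = final position
    trend = ab / (np.linalg.norm(ab, axis=-1, keepdims=True) + 1e-9)
    return motion, deformation, accel, trend
```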
Finally, the human skeleton map information and skeleton information obtained in the preceding stages are fed to the posture recognition module for recognition and analysis, which mainly recognizes, analyzes and raises alarms on the standard recovery resting postures of postoperative or critically ill patients during bed rest, such as the supine (lying on the back) position and the lateral (lying on the side) position.
The invention realizes closed-loop management of the system and secondary use of the data. Compared with the traditional OpenPose model, the adopted AlphaPose optimization model first detects the human body and then obtains the key points and skeleton, so its accuracy and AP value (AP measures the quality of the learned model on each category) are significantly higher than those of OpenPose.
Regarding the target detection algorithm of the invention: YOLOv5 is implemented in PyTorch and benefits from the mature PyTorch ecosystem, so it is simpler to support and easier to deploy. Compared with YOLOv4, YOLOv5 has the following advantages:
(1) Faster speed. In a YOLOv5 Colab notebook running on a Tesla P100, the inference time per image is only 0.007 seconds, i.e., about 140 frames per second (FPS), more than twice the speed of YOLOv4.
(2) Higher precision. In the Roboflow test on the blood cell count and detection (BCCD) dataset, training for only 100 epochs achieved a mean average precision (mAP) of approximately 0.895. EfficientDet and YOLOv4 perform comparably, but such an across-the-board performance boost without any loss in accuracy is very rare.
(3) Smaller size. The weight file for YOLOv5 is 27 megabytes, while the weight file for YOLOv4 (in the Darknet architecture) is 244 megabytes; YOLOv5 is therefore approximately 90% smaller than YOLOv4. This means YOLOv5 can be deployed onto embedded devices much more easily.
Although the present invention has been described with respect to one or more embodiments thereof, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.
Claims (8)
1. A method for monitoring the bedridden posture of a patient, characterized by comprising the following steps:
s1) acquiring image information of human posture behaviors through a depth camera;
s2) extracting human skeleton information from the collected image information to obtain human action skeleton map information;
s3) the patient lying in bed is identified according to the human body action skeleton diagram.
2. The method for monitoring the bedridden posture of a patient as claimed in claim 1, wherein the depth camera of step S1 is connected to a plurality of cameras disposed in four directions, namely above the ward ceiling, in front of the bed, to its left and to its right, to acquire depth information in the four directions respectively.
3. The method for monitoring the bedridden posture of a patient as claimed in claim 1, wherein the step S2 of detecting the human skeletal joint points by the AlphaPose model in a bottom-up manner comprises:
firstly, predicting the confidence coefficient of human body part detection through a network, and detecting the position of a key skeleton point of a human body; after the positions of the key skeletal points are obtained, the affinity between the related parts is predicted through a network; and finally, analyzing the confidence and the affinity by a greedy algorithm, and connecting the key points to obtain a human body action skeleton diagram of the patient.
4. The method for monitoring the bedridden posture of a patient as claimed in claim 3, wherein the human body action skeleton map is composed of 25 joint points and 24 skeleton edges, the original skeleton data in each frame is always provided as a vector sequence, and each vector represents the two-dimensional or three-dimensional coordinates of the corresponding human body joint; the step S2 further comprises calculating the three-dimensional vector offset and the offset angle of the human body skeleton with the center position of the patient bed as the origin O.
5. The method for monitoring the lying posture of the patient as claimed in claim 1, wherein the step S3 of classifying the relevant image posture by using a classification feature algorithm and comparing the classified posture type result with the national standard lying posture specifically comprises:
s31) acquiring the national standard example image files stored in the standard library and carrying out vectorization processing to generate a patient posture standard data set;
s32) randomly importing pictures from the patient posture standard data set as a target data set of the neural network model, taking 70% of the target data set as a training set and taking 30% of the target data set as a verification set;
s33) carrying out feature extraction on the obtained human body action skeleton diagram by using a diagram convolution neural network, giving a 2D or 3D form coordinate sequence of a body joint, and constructing a naturally communicated space-time diagram which takes the joint as a node and the human body structure and time as sides; and generating a higher-level feature map on input data by adopting multilayer space-time diagram convolution operation, and then classifying the feature map into a corresponding action category through a standard SoftMax classifier.
6. The method for monitoring the bedridden posture of a patient as set forth in claim 5, wherein the vectorized standard labels file processed in step S31 contains one row per object, each row in class, x_center, y_center, width, height format, and wherein (x_center, y_center) are the coordinates of the center point of the normalized rectangular frame of the patient image.
7. The method for monitoring the posture of a patient in bed as set forth in claim 5, wherein the features of the human skeleton include spatially relative features extracted from joints and bones, and the step S33 includes:
calculating the relative vector size and the error of the joint angle, and assuming the space vector of the specification file is L_M = (x_m, y_m, z_m), wherein M is the definition name of each joint and the label is m ∈ {1, 2, 3, ..., M};
detecting in real time the joint space vector L_N = (x_n, y_n, z_n), wherein N is the definition name of each joint detected in real time and the label is n ∈ {1, 2, 3, ..., N};
detecting in real time the space vector error p of the patient's joint, wherein a larger value of p indicates a lower image model matching rate and a larger deviation of the patient's posture from the standard posture; if the value of p is larger than the set threshold value, judging that the patient's posture deviates too much, feeding back the posture to be corrected in real time and carrying out alarm processing; otherwise, automatically storing the data in the data storage submodule to be called as training data of the neural network model, thereby forming a closed-loop detection and storage system.
8. The method for monitoring the posture of a patient lying in bed as claimed in claim 5, wherein the characteristics of the human skeleton include motion characteristics, and the step S33 includes:
calculating the motion information of the joints as coordinate differences along the time dimension, and setting the deformation of a bone as the vector difference of the same bone in successive frames; the average movement trend direction of the patient's joints is the direction of the vector AB, wherein the coordinate of the starting point of the patient's motion is A = (x1, y1, z1) and the final joint coordinate position is B = (x2, y2, z2).
Application and publication data
- Application number: CN202210243496.XA
- Priority / filing date: 2022-03-11
- Publication number: CN114601454A
- Publication date: 2022-06-10
- Title: Method for monitoring bedridden posture of patient
- Status: Pending
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination