CN116013548B - Intelligent ward monitoring method and device based on computer vision - Google Patents
- Publication number
- CN116013548B (application CN202211570413.4A)
- Authority
- CN
- China
- Prior art keywords
- patient
- action
- information
- monitored
- vital sign
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application provides an intelligent ward monitoring method and device based on computer vision. The method comprises the following steps: determining first action posture information of a patient to be monitored; acquiring second action posture information and first vital sign information; acquiring third action posture information and second vital sign information of the patient to be monitored; determining, based on a preset mapping relation, the first hidden danger action features corresponding to the age feature, sex feature and illness-state feature of the patient to be monitored; screening out second hidden danger action features according to the first vital sign information and the second vital sign information; acquiring predicted action features corresponding to the patient to be monitored; and determining and executing a control strategy for the current patient according to the predicted action features and the second hidden danger action features. The device can intelligently care for and protect a patient even when the patient's family members are not present, and can notify the patient's supervisor in time, effectively ensuring the patient's safety.
Description
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an intelligent ward monitoring method and device based on computer vision.
Background
In hospital wards, severely ill patients whose operations were successful may still suffer secondary injury, or even life-threatening harm, from abnormal postures caused by limited mobility. For example, an elderly patient over eighty may fall from the bed after surgery if left unattended, putting the patient at risk. Likewise, patients in poor physical condition can harm themselves through momentary, simple actions: knocking over boiled water on a table, pressing on a transfusion tube, or kicking medical equipment such as needles. It is therefore very important to build an intelligent ward that can detect the body posture of patients within it.
In the related art, a patient's posture is often monitored only by a surveillance camera or by a bracelet with a posture sensor. Because the human body and its movement are highly complex, information is inevitably lost in the camera's imaging process, and key body parts are frequently occluded, either by other objects or by the body itself, making accurate posture estimation difficult and creating serious safety hazards for the patient. Accurately predicting the patient's posture and actions, preventing danger in time, and guaranteeing the patient's safety in the ward is therefore an urgent problem to be solved.
Disclosure of Invention
The application provides an intelligent ward monitoring method and device based on computer vision.
According to a first aspect of the present application, there is provided a computer vision-based intelligent ward monitoring method, comprising:
processing a multi-frame image containing a patient to be monitored, which is shot by an intelligent control panel, so as to determine first action posture information of the patient to be monitored;
acquiring second action posture information and first vital sign information of the patient to be monitored based on first detection equipment worn on a plurality of marked pathological feature parts of the patient to be monitored;
acquiring third action posture information and second vital sign information of the patient to be monitored based on second detection equipment on a patient bed where the patient to be monitored is located, wherein the first detection equipment and the second detection equipment are associated;
determining each first hidden danger action characteristic corresponding to the age characteristic, sex characteristic and illness state characteristic of the patient to be monitored based on a preset mapping relation;
screening out second hidden danger action features from the first hidden danger action features according to the first vital sign information and the second vital sign information;
acquiring predicted action characteristics corresponding to the patient to be monitored according to the first action posture information, the second action posture information and the third action posture information;
and determining a control strategy corresponding to the current patient to be monitored according to the predicted action characteristic and each second hidden danger action characteristic, and executing the control strategy.
According to a second aspect of the present application, there is provided a computer vision based intelligent ward monitoring apparatus comprising:
the first determining module is used for processing the multi-frame image containing the patient to be monitored, which is shot by the intelligent control panel, so as to determine the first action posture information of the patient to be monitored;
the first acquisition module is used for acquiring second action posture information and first vital sign information of the patient to be monitored based on first detection devices worn on a plurality of marked pathological feature parts of the patient to be monitored;
the second acquisition module is used for acquiring third action posture information and second vital sign information of the patient to be monitored based on second detection equipment on a patient bed where the patient to be monitored is located, wherein the first detection equipment and the second detection equipment are associated;
The second determining module is used for determining each first hidden danger action characteristic corresponding to the age characteristic, the sex characteristic and the illness state characteristic of the patient to be monitored based on a preset mapping relation;
the screening module is used for screening out second hidden danger action features from the first hidden danger action features according to the first vital sign information and the second vital sign information;
the third acquisition module is used for acquiring the predicted action characteristics corresponding to the patient to be monitored according to the first action posture information, the second action posture information and the third action posture information;
and the third determining module is used for determining a control strategy corresponding to the current patient to be monitored according to the predicted action characteristic and each second hidden danger action characteristic and executing the control strategy.
According to a third aspect of the present application, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the computer vision based intelligent ward monitoring method provided in the foregoing first aspect of the present application.
In the embodiment of the disclosure, the device first processes multiple frames of images, shot by an intelligent control panel, that contain a patient to be monitored, so as to determine first action posture information of the patient; then acquires second action posture information and first vital sign information of the patient based on first detection devices worn on a plurality of marked pathological feature parts of the patient; then acquires third action posture information and second vital sign information of the patient based on a second detection device on the bed where the patient lies, the first and second detection devices being associated; then determines, based on a preset mapping relation, the first hidden danger action features corresponding to the patient's age feature, sex feature and illness-state feature; then screens out second hidden danger action features from the first hidden danger action features according to the first and second vital sign information; then acquires predicted action features for the patient according to the first, second and third action posture information; and finally determines and executes a control strategy for the current patient according to the predicted action features and the second hidden danger action features.
Therefore, when predicting the current patient's actions, the first, second and third action posture information are all taken into account, so the predicted action features can accurately and reliably forecast the future movement of each body part, including the pathological feature parts. The first hidden danger action features are obtained from the patient's age, sex and illness-state features and then screened using vital sign information, so the second hidden danger action features more accurately and reliably reflect the actions likely to harm the patient. By comparing the predicted action features against these hidden danger action features, accidents can be prevented in time, the patient can be intelligently cared for and protected even when no family member is present, and the patient's supervisor can be notified promptly, effectively ensuring the patient's safety.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flowchart of a method for monitoring an intelligent ward based on computer vision according to an embodiment of the present application;
FIG. 2 is a block diagram of an intelligent ward monitoring apparatus based on computer vision according to an embodiment of the present application;
fig. 3 is a diagram illustrating an example of an architecture of an electronic device of the computer vision-based smart ward monitoring method according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The invention relates to human motion and gesture recognition: a computer directly analyzes and processes the visual information of a video, detects the moving human body and its posture, and describes the information and intent that the posture conveys. With the development of computer and information technology, society demands more capable video analysis, and it is hoped that human posture information can be obtained directly from video by computer analysis; human posture estimation is a key technology to be realized in the field of computer vision.
It should be noted that the computer vision-based smart ward monitoring method of this embodiment may be executed by a computer vision-based smart ward monitoring device. The device includes, but is not limited to, an independent server, a distributed server, a server cluster, or a cloud server; taking a server as the execution body is not limiting here. The server may be electrically connected to the intelligent control panel.
Computer vision-based intelligent ward monitoring methods, systems, and storage media according to embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an intelligent ward monitoring method based on computer vision according to an embodiment of the present application. As shown in fig. 1, the computer vision-based intelligent ward monitoring method may include the following steps.
And step 101, processing a multi-frame image containing a patient to be monitored, which is shot by an intelligent control panel, so as to determine first action posture information of the patient to be monitored.
In the embodiment of the disclosure, the intelligent control panel is electrically connected with a central processing unit and can process and compute data. The intelligent control panel can serve as a remote-control panel and a device panel that provides services for users, such as ordering meals, switching lights on and off, calling doctors and nurses, checking medical records and viewing treatment plans; users can access these services through the touch keys on the panel.
In the embodiment of the disclosure, the intelligent control panel is equipped with an image-capture device, such as a binocular camera, so that each patient in the current ward can be filmed, making it convenient for the patient's family members and doctors to see the patient's current state.
The patient to be monitored is the patient currently under observation. It should be noted that different patients have different physiological constraints: a patient with a head injury must avoid vigorous head movement; a patient with a leg injury cannot bear much pressure on the legs; a patient who has just undergone surgery must not get out of bed, fall, or roll over; and for a patient with injured eyes, the room lights should not be set to a bright mode.
It should be noted that, in the present disclosure, the patient's condition information and medical records may be entered into the device in advance, either by a doctor or nurse into the database corresponding to the patient, or by the patient's family members, so that the patient's information is fully recorded. When the recorded condition information is insufficient, the device may issue reminders at a predetermined period.
The first action posture information may be the action posture information of the patient to be monitored recorded at each moment within a specified time period; it includes the spatial position information of the pre-marked key points of each body part of the patient.
Wherein, each position key point on the patient can be respectively positioned on the right shoulder, the left shoulder, the right elbow, the right wrist, the left elbow, the left wrist, the right hip, the right knee, the right ankle, the left hip, the left knee, the left ankle, the nose, the right ear, the left eye, the right eye, etc.
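The keypoint list above matches the COCO-style layout used by OpenPose-family models. A minimal sketch of how per-frame action posture information might be stored follows; the names and structure are illustrative assumptions, not taken from the patent:

```python
# Hypothetical per-frame pose record for the "first action posture information".
# Keypoint names follow the body parts listed above (COCO-style, 18 keypoints).
KEYPOINTS = [
    "nose", "neck",
    "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist",
    "right_hip", "right_knee", "right_ankle",
    "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]

def make_pose_frame(timestamp, detections):
    """Store one frame's pose: keypoint name -> (x, y, confidence).

    `detections` holds only the keypoints actually detected; occluded
    parts are simply absent from the frame.
    """
    unknown = set(detections) - set(KEYPOINTS)
    if unknown:
        raise ValueError(f"unknown keypoints: {unknown}")
    return {"t": timestamp, "kp": dict(detections)}

frame = make_pose_frame(0.0, {"nose": (320, 110, 0.93), "neck": (318, 160, 0.88)})
```

A sequence of such frames over the specified time period would then constitute the first action posture information.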
Specifically, one frame of image data can be read from the camera and passed to the inference function of the TfPoseEstimator class to obtain the return value humans; the return value humans and the image data are then passed to the draw_humans function of the TfPoseEstimator class, and the result is displayed through OpenCV, yielding a human body posture detection diagram containing the first action posture information.
It should be noted that a convolutional neural network builds on a multi-layer neural network by adding partially connected convolutional layers and pooling layers in front of the fully connected layers, which adds feature learning and enables deep learning. The basic composition of a convolutional neural network includes an input layer, convolutional layers, pooling layers, activation layers and an output layer. The convolutional layer is the core of the CNN: it consists of multiple convolution units whose aim is to extract image features. The activation layer provides nonlinear mapping capability. The pooling layer, also called the downsampling layer, keeps the most important information while reducing the number of parameters after the convolutional layer has extracted features; it generally comes in two kinds, max pooling and average pooling.
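The max- and average-pooling operations mentioned above can be illustrated with a small, dependency-free sketch (a 2x2 window with stride 2; real CNN frameworks provide optimized versions of this):

```python
def pool2x2(image, mode="max"):
    """Downsample a 2D list over non-overlapping 2x2 windows.

    mode="max" keeps the strongest response in each window;
    mode="avg" keeps the window mean. Either way, each spatial
    dimension is halved while salient information is retained.
    """
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - 1, 2):
        row = []
        for j in range(0, w - 1, 2):
            window = [image[i][j], image[i][j + 1],
                      image[i + 1][j], image[i + 1][j + 1]]
            row.append(max(window) if mode == "max" else sum(window) / 4.0)
        out.append(row)
    return out

feature_map = [[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 1, 5, 6],
               [2, 2, 7, 8]]
# max pooling halves the 4x4 map to 2x2: [[4, 2], [2, 8]]
```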
As one possible implementation, the human body posture neural network model may be OpenPose, a bottom-up network model based on convolutional neural networks and supervised learning.
It should be noted that, through the OpenPose human body posture neural network model, multiple key points can be detected in the image as basic features; joint vectors, the angles between those vectors, and their length ratios are then constructed, and the human posture is described through this information. This makes the first action posture information more reliable and accurate, and facilitates the timely prediction of hidden-danger postures such as falling or rolling over, which is not limited herein.
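The joint vectors, inter-vector angles, and length ratios described above can be computed directly from keypoint coordinates. The following is a self-contained sketch; the patent does not specify exact formulas, so these are the standard geometric definitions:

```python
import math

def joint_vector(p_from, p_to):
    """Vector from one keypoint to another, e.g. elbow -> shoulder."""
    return (p_to[0] - p_from[0], p_to[1] - p_from[1])

def angle_between(u, v):
    """Angle in degrees between two joint vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos = dot / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def length_ratio(u, v):
    """Ratio of limb-segment lengths, a scale-invariant pose feature."""
    return math.hypot(*u) / math.hypot(*v)

# Example: elbow angle from shoulder, elbow and wrist keypoints.
shoulder, elbow, wrist = (100, 100), (100, 150), (150, 150)
upper_arm = joint_vector(elbow, shoulder)   # points straight up
forearm = joint_vector(elbow, wrist)        # points to the right
# angle_between(upper_arm, forearm) gives a 90-degree elbow here
```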
Step 102, acquiring second action posture information and first vital sign information of the patient to be monitored based on first detection devices worn on a plurality of marked pathological feature parts of the patient to be monitored.
The pathological feature part may be a part determined according to the condition of the patient to be monitored, such as an injured toe or finger, or an injured leg joint. To better observe and identify the patient's posture at these special parts and thereby protect the patient, first detection devices can be worn in advance near the pathological feature parts, so that the action posture of those parts can be recognized in finer detail.
The second action posture information may be the action posture information of the patient's pathological feature parts detected by the first detection devices, for example whether a finger is bent or deformed, pressed in or protruding, or the movement of an injured knee, which is not limited herein.
The first detection device may be a detection device pre-mounted on the patient at a pre-marked pathological feature part. It may comprise various types of sensors, such as pressure sensors, posture sensors and temperature sensors, together with measuring means for the patient's first vital sign information, such as body-surface temperature, heartbeat, heart rate, blood pressure, blood oxygen concentration and respiration, as well as the area, color and degree of swelling of the pathological feature part, without limitation.
It should be noted that the first detection device may further include head-worn equipment used to monitor the patient's facial information, such as the pupils of the eyes and the color of the lips (for example, whether the lips are pale or purple), along with multiple facial key points, so that abnormal situations can be detected in real time from the patient's facial state.
For example, if a patient's mouth keeps twitching, it may indicate a seizure. The corresponding second action posture information can then be recorded.
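A persistent mouth twitch as described above could be flagged, for instance, by counting reversals in the motion of a mouth keypoint over a sliding window. This is a sketch of one plausible detector; the window size and threshold are illustrative assumptions:

```python
def twitch_detected(mouth_xs, window=8, min_direction_changes=4):
    """Flag repetitive twitching: many reversals in the horizontal
    motion of a mouth keypoint within the last `window` samples."""
    xs = mouth_xs[-window:]
    # Non-zero frame-to-frame displacements of the keypoint.
    deltas = [b - a for a, b in zip(xs, xs[1:]) if b != a]
    # A sign flip between consecutive displacements is one reversal.
    changes = sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)
    return changes >= min_direction_changes

steady = [100, 100, 101, 101, 100, 100, 101, 101]   # small sensor jitter
twitchy = [100, 104, 99, 105, 98, 104, 99, 105]     # rapid back-and-forth
```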
The first detection device may be a wearable device, and may include a fixing tool for fixing a specific part on the body, so as to monitor the six-degree-of-freedom gesture of the specific part.
The first detection device may further include a wireless communication module, such as a WiFi module or a ZigBee module, so that the acquired second motion gesture information and first vital sign information of the patient to be monitored may be uploaded to the server electrically connected to the intelligent control panel in real time.
Optionally, the server may acquire the second motion gesture information and the first vital sign information of the patient to be monitored based on the first detection device according to the target sampling frequency.
The target sampling frequency may correspond to periodic sampling with a preset period; for example, the second action posture information and the first vital sign information are collected once every 5 s or 2.5 s. The value may be set empirically and is not limited herein.
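Periodic acquisition at the target sampling frequency can be sketched as follows. The 2.5 s period comes from the example above; the device-read callable is a hypothetical placeholder, and the injected clock keeps the demonstration deterministic:

```python
import time

SAMPLE_PERIOD_S = 2.5  # example target sampling period from the text

def sample_loop(read_device, store, n_samples, period=SAMPLE_PERIOD_S,
                clock=time.monotonic, sleep=time.sleep):
    """Poll a detection device every `period` seconds, compensating for
    the time the read itself takes so samples stay on a fixed time grid."""
    next_due = clock()
    for _ in range(n_samples):
        store.append((next_due, read_device()))
        next_due += period
        delay = next_due - clock()
        if delay > 0:
            sleep(delay)

# Simulated run (fake clock and sleep, so no real waiting happens here).
_t = [0.0]
samples = []
sample_loop(lambda: "reading", samples, 3,
            clock=lambda: _t[0],
            sleep=lambda d: _t.__setitem__(0, _t[0] + d))
# sample timestamps land on the 2.5 s grid: 0.0, 2.5, 5.0
```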
Step 103, based on a second detection device on a patient bed where the patient to be monitored is located, acquiring third action posture information and second vital sign information of the patient to be monitored, wherein the first detection device and the second detection device are associated.
Optionally, the server may also acquire the third action posture information and the second vital sign information of the patient to be monitored from the second detection device according to the target sampling frequency; the third and second action posture information are acquired by the second and first detection devices respectively.
The second detection device may be a monitoring device installed on the patient's bed, such as a smart mattress, pillow, quilt or armrest, that can monitor the pressures generated by skin contact (such as foot pressure and body pressure), so as to measure the third action posture information through various types of sensors, such as large-area sensors, high-density sensing matrices and multi-touch sensing sheets.
The sampling frequencies of the second detection device and the first detection device are the same, so that the information acquired at each moment can be aligned later. The second detection device and the first detection device can communicate with each other, so that time alignment is guaranteed, the usability and reliability of the data are ensured, and the monitored action posture information is more reliable.
For example, if the second detection device is an intelligent mattress, it may include a base pad, a ventilation pad, temperature sensors, pressure sensors, etc., with a buzzer, operation buttons, a display screen and a microcomputer module installed at the edge of the base pad; signals from the temperature sensors, the pressure sensors and the operation buttons may be transmitted to the server through the wireless network.
The third action posture information may be the corresponding on-bed posture information of the patient, such as a supine posture, a prone posture or a sitting posture, as well as other bed-related postures, such as gripping the bed with a hand, kicking the bed with a foot or pressing the head against the bed, which are not limited herein. The off-bed time and on-bed time of the patient may be determined by analyzing the third action posture information at a plurality of continuous moments, so as to assist in analyzing and predicting the possibility of the patient leaving the bed.
The second vital sign information may be vital sign information detected by each second detection device arranged on the hospital bed, such as sleep state, body movement state, heart rate, shaking frequency, respiratory rate, number of roll-overs, and the like.
Step 104, determining each first hidden danger action feature corresponding to the age feature, sex feature and illness state feature of the patient to be monitored based on a preset mapping relation.
A first hidden danger action feature may be an action that, when completed by the patient to be monitored, carries a certain probability of harming the patient's safety.
Specifically, the first hidden danger action features that may injure the patient can be determined according to the illness state features of the patient. For example, an 18-year-old male patient may have suffered sudden syncope due to heart disease, or may have inhaled a foreign body so that the corresponding lung is affected; accordingly, certain actions of the upper body may be treated as hidden danger actions, thereby avoiding an exacerbation of the condition.
For example, some patients present with lumbago but may actually have acute pancreatitis, so eating and drinking water have to be treated as hidden danger actions; otherwise serious injury may be caused to the patient.
For another example, some patients have aortic aneurysms, so even slight movements can cause vascular rupture and endanger the patient's life; therefore, the corresponding actions that can injure the patient need to be acquired in advance from a large database.
For example, some patients present with diarrhea but may actually have epidemic cerebrospinal meningitis; thus the patient cannot be shaken, must be prevented from falling out of bed, and the head should be kept turned to one side during vomiting to prevent aspiration, while corresponding vital sign changes, such as a pale complexion and dry lips, are observed.
It should be noted that a mapping relationship table may be stored in advance. The mapping relationship table may be obtained by collecting and training on big data, such as patient condition information disclosed by hospitals in various countries together with the corresponding possible hidden danger actions, for example, data obtained from a disease analysis statistics DataBase (DDB), so that the server may directly infer and enumerate the possible hidden danger actions of the patient to be monitored according to the age feature, sex feature and illness state feature of the patient to be monitored.
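The mapping-relationship lookup described above can be sketched as a simple keyed table. This is a minimal illustration only; the table entries, feature names and age groupings below are hypothetical and are not taken from any real disease database.

```python
# Hypothetical mapping table: (age group, sex, condition) -> hidden danger actions.
# All entries are illustrative placeholders, not real clinical data.
HIDDEN_DANGER_TABLE = {
    ("elderly", "female", "osteoporosis"): ["getting out of bed unassisted", "twisting the torso"],
    ("adult", "male", "aortic aneurysm"): ["any abrupt movement", "sitting up quickly"],
    ("adult", "female", "acute pancreatitis"): ["eating", "drinking water"],
}

def lookup_first_hidden_danger_actions(age_group, sex, condition):
    """Return the first hidden danger action features for a patient profile,
    or an empty list when the profile is not in the table."""
    return HIDDEN_DANGER_TABLE.get((age_group, sex, condition), [])

print(lookup_first_hidden_danger_actions("adult", "female", "acute pancreatitis"))
# ['eating', 'drinking water']
```

In practice the table would be derived from the big-data training described above rather than hard-coded.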
The age feature may be infant, child, teenager, middle-aged or elderly, or a specific age, such as 4 years, 7 months or 78 years. The hidden dangers faced at different ages during illness differ; for example, for the same dangerous action, a 79-year-old woman faces much greater unsafe factors than a 28-year-old woman.
Alternatively, since the conditions of male and female patients differ greatly in many cases, such as male-specific diseases and gynecological diseases, the possible hidden danger action features can also be determined according to the sex feature.
Step 105, screening out each second hidden danger action feature from each first hidden danger action feature according to the first vital sign information and the second vital sign information.
It should be noted that, since the first hidden danger action features generally cover all hidden danger actions related to the illness state, sex and age of the patient, the data volume is very large and complex, and in many cases some actions have little influence on the patient's safety. Therefore, in the embodiment of the disclosure, the first hidden danger action features can be screened according to the first vital sign information and the second vital sign information corresponding to the patient at the current moment, so as to obtain some important hidden danger action features corresponding to the real-time state of the current patient as the second hidden danger action features.
Optionally, the server may first combine the first vital sign information and the second vital sign information and perform data cleaning and data desensitization processing to determine third vital sign information, then obtain hidden danger attribute information corresponding to the third vital sign information based on a predetermined mapping relation table, then determine the attribute information corresponding to each first hidden danger action feature, and screen each second hidden danger action feature from the first hidden danger action features according to the hidden danger attribute information.
The hidden danger attribute information may be hidden danger features corresponding to vital sign information. For example, if the third vital sign information is a pale complexion and pale lips, the hidden danger attribute information may be anemia, hypoglycemia or diabetes; if the third vital sign information is a low respiratory rate and low blood oxygen concentration, the corresponding hidden danger attribute information may be bronchitis, pneumonia, respiratory diseases and the like, which is not limited herein.
Specifically, the server may determine, according to the attribute information corresponding to each first hidden danger action feature, a second hidden danger action feature corresponding to the hidden danger attribute information.
For example, if the first hidden danger action features are X1, X2, X3, X4 and X5, the corresponding hidden danger attribute information is A, B, C, D and E, respectively, and the currently determined hidden danger attribute information is D and E, then X4 and X5 may be used as the second hidden danger action features, which is not limited herein.
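The screening step in this X1–X5 example can be sketched as a filter over the attribute mapping. The function name and data layout are assumptions made for illustration; the disclosure does not prescribe a concrete representation.

```python
def screen_second_features(first_features, feature_attrs, current_attrs):
    """Keep the first hidden danger action features whose attribute info
    matches any currently determined hidden danger attribute."""
    current = set(current_attrs)
    return [f for f in first_features if feature_attrs[f] in current]

first = ["X1", "X2", "X3", "X4", "X5"]
attrs = {"X1": "A", "X2": "B", "X3": "C", "X4": "D", "X5": "E"}
print(screen_second_features(first, attrs, ["D", "E"]))  # ['X4', 'X5']
```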
It should be noted that, by combining the first vital sign information and the second vital sign information and performing data cleaning and data desensitization processing, duplicate vital sign information can be removed, making the vital sign information more complete; incorrect, incomplete, irrelevant, inaccurate or otherwise problematic portions of the data can also be identified, ensuring that the third vital sign information is free of errors and providing support for the use and analysis of the data.
For example, if the first vital sign information includes A and B, and the second vital sign information includes B, C and D, then after the first vital sign information and the second vital sign information are combined and the data is cleaned, duplicate information such as B may be removed, so that the third vital sign information A, B, C, D may be obtained, which is not limited herein. By desensitizing the data, leakage of privacy-sensitive information of some patients, such as pregnancy or ectopic pregnancy, can be avoided, which is also not limited herein.
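The merging and de-duplication in this example can be sketched as an order-preserving union. This is a simplified stand-in for the data-cleaning step, with a hypothetical function name; real cleaning would also handle incorrect or incomplete records.

```python
def merge_vital_signs(first_info, second_info):
    """Merge two vital sign lists, dropping duplicates while keeping
    the original order of first appearance."""
    seen = set()
    merged = []
    for item in first_info + second_info:
        if item not in seen:
            seen.add(item)
            merged.append(item)
    return merged

print(merge_vital_signs(["A", "B"], ["B", "C", "D"]))  # ['A', 'B', 'C', 'D']
```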
And step 106, obtaining the predicted action characteristics corresponding to the patient to be monitored according to the first action posture information, the second action posture information and the third action posture information.
A predicted action feature may be an action that the patient is currently predicted to perform next, inferred from the action postures of the patient to be monitored.
Optionally, the server may determine the coordinates of each key point in the current world coordinate system based on the positions of the key points in the first action posture information within the image captured by the intelligent control panel. The world coordinate system may be established with a certain position of the sickbed as the origin, for example, the midpoint of the bedside, which is not limited herein. The image captured by the intelligent control panel may be a depth image, so that the server can determine the world coordinates corresponding to each key point by processing the depth image, and can then determine, in combination with the second action posture information and the third action posture information, the coordinates (that is, the position information) of the other key points, such as the key points on the pathological feature parts, in the world coordinate system. By analyzing the first action posture information, the second action posture information and the third action posture information of the patient to be monitored at a plurality of moments, the change trend and movement direction of each key point on the patient at each moment can be determined, so that the patient's subsequent actions can be predicted.
As a possible implementation manner, the first action posture information, the second action posture information and the third action posture information may be input into an action complement neural network model generated by training in advance, so as to output position information including each key point of a human body of the patient to be monitored in each frame of an action posture frame sequence, then determine a displacement amount and a displacement direction corresponding to each key point according to the position information of each key point of the human body in each frame, and then determine a predicted action feature corresponding to the patient to be monitored based on the displacement amount and the displacement direction corresponding to each key point in the human body.
When the first action posture information, the second action posture information and the third action posture information are input into the pre-trained action complement neural network model, the information corresponding to a plurality of moments can be input at the same time, where the action complement neural network model can complete the full posture and action of the current patient according to the first action posture information, the second action posture information and the third action posture information corresponding to each moment.
After the position information corresponding to each key point in the human body is determined, the server can predict the motion of the patient to be monitored at the future moment through the moving distance, the moving angle and the moving direction of each key point in each frame of image, so that the predicted motion characteristics can be obtained.
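A minimal two-dimensional sketch of the per-key-point displacement-and-direction computation follows, with a naive linear extrapolation standing in for the prediction. The real method works on 3-D world coordinates and uses the action complement neural network model, so the helpers below are illustrative assumptions only.

```python
import math

def keypoint_motion(prev_pos, curr_pos):
    """Displacement amount and direction (angle in the x-y plane) of one
    key point between two consecutive frames."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    displacement = math.hypot(dx, dy)
    direction = math.atan2(dy, dx)  # radians
    return displacement, direction

def extrapolate(curr_pos, displacement, direction):
    """Naive linear prediction of the key point's next position, assuming
    it keeps moving with the same displacement per frame."""
    return (curr_pos[0] + displacement * math.cos(direction),
            curr_pos[1] + displacement * math.sin(direction))

d, a = keypoint_motion((0.0, 0.0), (3.0, 4.0))
print(d)                               # 5.0
print(extrapolate((3.0, 4.0), d, a))   # approximately (6.0, 8.0)
```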
And step 107, determining a control strategy corresponding to the current patient to be monitored according to the predicted action characteristics and the second hidden danger action characteristics, and executing the control strategy.
Optionally, the server may calculate each matching degree between the predicted action feature and each second hidden danger action feature, and send an early warning control command to the hospital bed when any matching degree is greater than a preset threshold, so as to control each type of protection equipment preset on the hospital bed to enter a precaution working mode. Reminding information may then be sent, based on the intelligent control panel, to each terminal device held by a supervisor corresponding to the patient to be monitored, where the supervisors include the family members, attending doctor and nurses of the patient to be monitored, and the reminding information includes the current image of the patient to be monitored.
For example, if the predicted motion feature is rolling over the bed, and the second hidden danger motion feature is also rolling over the bed, the corresponding matching degree is 0.9 and is greater than the matching degree threshold, the corresponding control strategy may be executed.
Specifically, the cosine similarity between the predicted action feature and each second hidden danger action feature may be used as the corresponding matching degree, and the matching degree threshold may be, for example, 0.75.
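The matching-degree check can be sketched as follows, assuming the action features are already encoded as numeric vectors (an assumption of this illustration; the disclosure does not specify the encoding):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two action feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def should_trigger_warning(predicted, hidden_danger_features, threshold=0.75):
    """Return True if any matching degree exceeds the threshold, i.e. the
    early warning control command should be sent to the hospital bed."""
    return any(cosine_similarity(predicted, f) > threshold
               for f in hidden_danger_features)

predicted = [0.9, 0.1, 0.3]
features = [[0.8, 0.2, 0.4], [0.0, 1.0, 0.0]]
print(should_trigger_warning(predicted, features))  # True
```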
The protection equipment may be elastic means mounted on the bed that can be slowly deployed from the bed to protect the patient from falling off the bed, or to prevent the patient from performing certain actions, such as rolling over or drinking water. It should be noted that there may be a plurality of types of protection equipment, so that the patient can be protected from various angles.
The protection equipment is internally provided with a communication module so as to communicate with the server. By sending instructions to the protection equipment, the server can control it to work according to specific working parameters, thereby achieving any working mode, such as the precaution working mode.
The terminal equipment held by the supervisor can be a mobile phone or a computer, a watch and the like, and can establish communication with the server.
In the embodiment of the disclosure, the device first processes a multi-frame image containing the patient to be monitored captured by the intelligent control panel to determine first action posture information of the patient to be monitored, then acquires second action posture information and first vital sign information of the patient to be monitored based on the first detection devices worn on the plurality of marked pathological feature parts of the patient to be monitored, then acquires third action posture information and second vital sign information of the patient to be monitored based on the second detection device on the patient bed where the patient to be monitored is located, the first detection devices and the second detection device being associated, then determines each first hidden danger action feature corresponding to the age feature, sex feature and illness state feature of the patient to be monitored based on a preset mapping relation, then screens out each second hidden danger action feature from the first hidden danger action features according to the first vital sign information and the second vital sign information, then obtains the predicted action feature corresponding to the patient to be monitored according to the first action posture information, the second action posture information and the third action posture information, and finally determines a control strategy corresponding to the current patient to be monitored according to the predicted action feature and the second hidden danger action features, and executes the control strategy.
Therefore, when predicting the actions of the current patient, the first action posture information, the second action posture information and the third action posture information of the patient are all considered, so that the predicted action feature can accurately and reliably forecast the future actions of each body part of the current patient, including the pathological feature parts. Moreover, the first hidden danger action features are acquired according to the age feature, sex feature and illness state feature of the patient and are screened according to the vital sign information, so that the second hidden danger action features more accurately and reliably reflect the actions that may cause harm to the patient. By comparing the predicted action feature with the hidden danger action features, accidents can be prevented in time, the patient can be protected intelligently and attentively even when the patient's family is not present, and the patient's supervisors can be notified in time, so that the safety of the patient is effectively ensured.
Fig. 2 is a schematic structural diagram of an intelligent ward monitoring device based on computer vision according to an embodiment of the disclosure.
As shown in fig. 2, the computer vision-based intelligent ward monitoring apparatus 200 comprises:
a first determining module 210, configured to process a multi-frame image including a patient to be monitored, which is captured by the intelligent control panel, so as to determine first motion gesture information of the patient to be monitored;
a first obtaining module 220, configured to obtain second motion gesture information and first vital sign information of the patient to be monitored based on respective first detection devices worn by the pathological feature parts of the plurality of marks on the patient to be monitored;
a second obtaining module 230, configured to obtain third action posture information and second vital sign information of the patient to be monitored based on a second detecting device on a patient bed where the patient to be monitored is located, where the first detecting device and the second detecting device are associated;
the second determining module 240 is configured to determine, based on a preset mapping relationship, each first hidden danger action feature corresponding to the age feature, the sex feature and the disease feature of the patient to be monitored;
the screening module 250 is configured to screen each second hidden danger action feature from the first hidden danger action features according to the first vital sign information and the second vital sign information;
A third obtaining module 260, configured to obtain predicted motion characteristics corresponding to the patient to be monitored according to the first motion gesture information, the second motion gesture information, and the third motion gesture information;
and a third determining module 270, configured to determine a control policy corresponding to the patient to be monitored currently according to the predicted motion feature and the second hidden danger motion features, and execute the control policy.
Optionally, the third determining module is specifically configured to:
calculating each matching degree between the predicted action feature and each second hidden danger action feature, and sending an early warning control command to the sickbed under the condition that any matching degree is larger than a preset threshold value so as to control each type of protection equipment preset on the sickbed to enter a precaution working mode;
based on the intelligent control panel, reminding information is sent to each terminal device which is held by a supervisor corresponding to the patient to be monitored, the supervisor comprises family members, main doctors and nurses of the patient to be monitored, and the reminding information comprises the current image of the patient to be monitored.
Optionally, the third obtaining module is specifically configured to:
Inputting the first action posture information, the second action posture information and the third action posture information into an action complement neural network model generated by pre-training so as to output position information containing each key point of a human body of the patient to be monitored in each frame of an action posture frame sequence;
determining the displacement amount and the displacement direction corresponding to each key point according to the position information of each key point of the human body in each frame;
and determining the predicted action characteristics corresponding to the patient to be monitored based on the displacement amount and the displacement direction corresponding to each key point in the human body.
Optionally, the screening module is specifically configured to:
adding the first vital sign information and the second vital sign information, and performing data cleaning and data desensitization processing to determine third vital sign information;
acquiring hidden danger attribute information corresponding to each third vital sign information based on a predetermined mapping relation table;
and determining attribute information corresponding to each first hidden danger action feature, and screening each second hidden danger action feature from each first hidden danger action feature according to the hidden danger attribute information.
Optionally, the first obtaining module is specifically configured to:
acquiring second action posture information and first vital sign information of the patient to be monitored based on the first detection equipment according to a target sampling frequency;
the second obtaining module is specifically configured to:
acquiring third action posture information and second vital sign information of the patient to be monitored based on the second detection device according to the target sampling frequency,
wherein the third motion gesture information and the second motion gesture information are synchronously acquired by the second detection device and the first detection device, respectively.
In the embodiment of the disclosure, the device first processes a multi-frame image containing the patient to be monitored captured by the intelligent control panel to determine first action posture information of the patient to be monitored, then acquires second action posture information and first vital sign information of the patient to be monitored based on the first detection devices worn on the plurality of marked pathological feature parts of the patient to be monitored, then acquires third action posture information and second vital sign information of the patient to be monitored based on the second detection device on the patient bed where the patient to be monitored is located, the first detection devices and the second detection device being associated, then determines each first hidden danger action feature corresponding to the age feature, sex feature and illness state feature of the patient to be monitored based on a preset mapping relation, then screens out each second hidden danger action feature from the first hidden danger action features according to the first vital sign information and the second vital sign information, then obtains the predicted action feature corresponding to the patient to be monitored according to the first action posture information, the second action posture information and the third action posture information, and finally determines a control strategy corresponding to the current patient to be monitored according to the predicted action feature and the second hidden danger action features, and executes the control strategy.
Therefore, when predicting the actions of the current patient, the first action posture information, the second action posture information and the third action posture information of the patient are all considered, so that the predicted action feature can accurately and reliably forecast the future actions of each body part of the current patient, including the pathological feature parts. Moreover, the first hidden danger action features are acquired according to the age feature, sex feature and illness state feature of the patient and are screened according to the vital sign information, so that the second hidden danger action features more accurately and reliably reflect the actions that may cause harm to the patient. By comparing the predicted action feature with the hidden danger action features, accidents can be prevented in time, the patient can be protected intelligently and attentively even when the patient's family is not present, and the patient's supervisors can be notified in time, so that the safety of the patient is effectively ensured.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
FIG. 3 illustrates a schematic block diagram of an example electronic device 300 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 3, the apparatus 300 includes a computing unit 301 that may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 302 or a computer program loaded from a storage unit 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the device 300 may also be stored. The computing unit 301, the ROM 302, and the RAM 303 are connected to each other by a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Various components in device 300 are connected to I/O interface 305, including: an input unit 306 such as a keyboard, a mouse, etc.; an output unit 307 such as various types of displays, speakers, and the like; a storage unit 308 such as a magnetic disk, an optical disk, or the like; and a communication unit 309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the device 300 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 301 performs the various methods and processes described above, such as the computer vision-based intelligent ward monitoring method. For example, in some embodiments, the computer vision based smart ward monitoring method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into RAM 303 and executed by computing unit 301, one or more of the steps of the computer vision based intelligent ward monitoring method described above may be performed. Alternatively, in other embodiments, the computing unit 301 may be configured to perform the computer vision based intelligent ward monitoring method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (10)
1. An intelligent ward monitoring method based on computer vision, which is characterized by comprising the following steps:
processing a multi-frame image containing a patient to be monitored, which is shot by an intelligent control panel, so as to determine first action posture information of the patient to be monitored;
acquiring second action posture information and first vital sign information of the patient to be monitored based on first detection devices worn on a plurality of marked pathological feature parts of the patient to be monitored;
acquiring third action posture information and second vital sign information of the patient to be monitored based on second detection equipment on a patient bed where the patient to be monitored is located, wherein the first detection equipment and the second detection equipment are associated; the third action posture information and the second action posture information are synchronously acquired by the second detection equipment and the first detection equipment respectively; and the sampling frequency of the second detection equipment is the same as that of the first detection equipment;
determining each first hidden danger action characteristic corresponding to the age characteristic, sex characteristic and illness state characteristic of the patient to be monitored based on a preset mapping relation;
screening out second hidden danger action features from the first hidden danger action features according to the first vital sign information and the second vital sign information;
acquiring predicted action characteristics corresponding to the patient to be monitored according to the first action posture information, the second action posture information and the third action posture information;
and determining a control strategy corresponding to the current patient to be monitored according to the predicted action characteristic and each second hidden danger action characteristic, and executing the control strategy.
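The profile-to-feature lookup in the claim above ("a preset mapping relation" from age, sex, and illness-state characteristics to first hidden danger action features) can be sketched as a simple table. All keys and feature names below are invented for illustration; the patent does not specify the contents of the mapping.

```python
# Hypothetical mapping from (age group, sex, condition) characteristics to
# first hidden-danger action features; every entry is illustrative only.
HAZARD_MAP = {
    ("elderly", "male", "post-surgery"): ["bed_exit", "sudden_sit_up"],
    ("elderly", "female", "hip_fracture"): ["roll_over", "bed_exit"],
}

def first_hazard_features(age_group, sex, condition):
    """Look up the first hidden-danger action features for a patient profile;
    an unknown profile yields no features."""
    return HAZARD_MAP.get((age_group, sex, condition), [])
```

In a real system the table would presumably be maintained clinically rather than hard-coded.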
2. The method of claim 1, wherein determining a control strategy corresponding to the current patient to be monitored based on the predicted action feature and each second hidden danger action feature, and executing the control strategy, comprises:
calculating each matching degree between the predicted action feature and each second hidden danger action feature, and sending an early warning control command to the patient bed when any matching degree is greater than a preset threshold value, so as to control each type of protection equipment preset on the patient bed to enter a precaution working mode;
and sending, based on the intelligent control panel, reminder information to each terminal device held by a supervisor corresponding to the patient to be monitored, wherein the supervisors include family members, the attending doctor, and nurses of the patient to be monitored, and the reminder information includes a current image of the patient to be monitored.
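One plausible reading of the matching-degree step in claim 2 is a pairwise similarity score checked against a threshold. The cosine similarity and the 0.8 threshold below are assumptions for the sketch, since the claim fixes neither the metric nor the value.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.hypot(*a) * math.hypot(*b)  # product of vector norms
    return num / den if den else 0.0

def should_alert(predicted, hazards, threshold=0.8):
    """True if the predicted action feature matches any second hidden-danger
    action feature above the threshold, i.e. the bed's protection equipment
    should be commanded into its precaution working mode."""
    return any(cosine(predicted, h) > threshold for h in hazards)
```

A near-parallel hazard vector would trigger the alert; an orthogonal one would not.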
3. The method according to claim 1, wherein the obtaining the predicted action feature corresponding to the patient to be monitored according to the first action posture information, the second action posture information, and the third action posture information includes:
inputting the first action posture information, the second action posture information, and the third action posture information into an action-completion neural network model generated by pre-training, so as to output position information of each key point of the human body of the patient to be monitored in each frame of an action posture frame sequence;
determining the displacement amount and the displacement direction corresponding to each key point according to the position information of each key point of the human body in each frame;
and determining the predicted action characteristics corresponding to the patient to be monitored based on the displacement amount and the displacement direction corresponding to each key point in the human body.
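The displacement step in claim 3 can be computed directly from per-frame key-point positions. The frame format (a dict of 2-D coordinates per key point) and the key-point names are assumptions made for this sketch.

```python
import math

def keypoint_motion(frames):
    """frames: list of {keypoint: (x, y)} dicts, one per frame of the
    action-posture frame sequence. Returns {keypoint: (magnitude, (dx, dy))},
    the net displacement amount and direction from first to last frame."""
    first, last = frames[0], frames[-1]
    motion = {}
    for k in first:
        dx = last[k][0] - first[k][0]
        dy = last[k][1] - first[k][1]
        motion[k] = (math.hypot(dx, dy), (dx, dy))
    return motion
```

The resulting per-key-point amounts and directions are what the claim then aggregates into a predicted action feature.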
4. The method of claim 1, wherein the screening each second hidden danger action feature from each first hidden danger action feature based on the first vital sign information and the second vital sign information comprises:
combining the first vital sign information and the second vital sign information, and performing data cleaning and data desensitization processing to determine third vital sign information;
acquiring hidden danger attribute information corresponding to each third vital sign information based on a predetermined mapping relation table;
and determining attribute information corresponding to each first hidden danger action feature, and screening each second hidden danger action feature from each first hidden danger action feature according to the hidden danger attribute information.
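Claim 4's merge-clean-desensitize-screen sequence can be sketched as below. The record fields, the contents of the mapping-relation table, and the attribute names are all invented for illustration; the patent does not define them.

```python
def merge_vitals(first, second):
    """Combine two vital-sign record lists, drop incomplete records
    (data cleaning), and strip direct identifiers (data desensitization)."""
    merged = first + second
    cleaned = [r for r in merged if all(v is not None for v in r.values())]
    return [{k: v for k, v in r.items() if k != "patient_name"} for r in cleaned]

# Invented mapping-relation table: vital-sign condition -> hazard attribute.
ATTR_TABLE = {"tachycardia": "cardiac", "hypotension": "circulatory"}

def screen_hazards(first_hazards, conditions):
    """Keep only first hidden-danger features whose attribute matches a
    hidden-danger attribute derived from the third vital sign information."""
    attrs = {ATTR_TABLE[c] for c in conditions if c in ATTR_TABLE}
    return [h for h in first_hazards if h["attr"] in attrs]
```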
5. The method of claim 1, wherein the acquiring the second action posture information and the first vital sign information of the patient to be monitored comprises:
acquiring second action posture information and first vital sign information of the patient to be monitored based on the first detection equipment according to a target sampling frequency;
the acquiring the third action posture information and the second vital sign information of the patient to be monitored comprises:
acquiring third action posture information and second vital sign information of the patient to be monitored based on the second detection device according to the target sampling frequency,
wherein the third action posture information and the second action posture information are synchronously acquired by the second detection device and the first detection device, respectively.
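The shared target sampling frequency in claim 5 is what keeps the worn and bed-mounted devices frame-aligned; a minimal sketch of the resulting common sampling schedule, with the frequency value chosen arbitrarily:

```python
def sample_times(start_s, duration_s, freq_hz):
    """Timestamps (in seconds) at which a detection device samples. Driving
    both the worn and bed-mounted devices from the same target frequency
    keeps their posture streams aligned frame-for-frame."""
    n = int(duration_s * freq_hz)
    return [start_s + i / freq_hz for i in range(n)]
```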
6. An intelligent ward monitoring device based on computer vision, which is characterized by comprising:
the first determining module is used for processing the multi-frame image containing the patient to be monitored, which is shot by the intelligent control panel, so as to determine the first action posture information of the patient to be monitored;
the first acquisition module is used for acquiring second action posture information and first vital sign information of the patient to be monitored based on first detection devices worn on a plurality of marked pathological feature parts of the patient to be monitored;
the second acquisition module is used for acquiring third action posture information and second vital sign information of the patient to be monitored based on second detection equipment on a patient bed where the patient to be monitored is located, wherein the first detection equipment and the second detection equipment are associated; the third action posture information and the second action posture information are synchronously acquired by the second detection equipment and the first detection equipment respectively; and the sampling frequency of the second detection equipment is the same as that of the first detection equipment;
the second determining module is used for determining each first hidden danger action characteristic corresponding to the age characteristic, the sex characteristic and the illness state characteristic of the patient to be monitored based on a preset mapping relation;
the screening module is used for screening out second hidden danger action features from the first hidden danger action features according to the first vital sign information and the second vital sign information;
the third acquisition module is used for acquiring the predicted action characteristics corresponding to the patient to be monitored according to the first action posture information, the second action posture information and the third action posture information;
and the third determining module is used for determining a control strategy corresponding to the current patient to be monitored according to the predicted action feature and each second hidden danger action feature, and executing the control strategy.
7. The apparatus of claim 6, wherein the third determining module is specifically configured to:
calculating each matching degree between the predicted action feature and each second hidden danger action feature, and sending an early warning control command to the patient bed when any matching degree is greater than a preset threshold value, so as to control each type of protection equipment preset on the patient bed to enter a precaution working mode;
and sending, based on the intelligent control panel, reminder information to each terminal device held by a supervisor corresponding to the patient to be monitored, wherein the supervisors include family members, the attending doctor, and nurses of the patient to be monitored, and the reminder information includes a current image of the patient to be monitored.
8. The apparatus of claim 6, wherein the third acquisition module is specifically configured to:
inputting the first action posture information, the second action posture information, and the third action posture information into an action-completion neural network model generated by pre-training, so as to output position information of each key point of the human body of the patient to be monitored in each frame of an action posture frame sequence;
determining the displacement amount and the displacement direction corresponding to each key point according to the position information of each key point of the human body in each frame;
and determining the predicted action characteristics corresponding to the patient to be monitored based on the displacement amount and the displacement direction corresponding to each key point in the human body.
9. The apparatus of claim 6, wherein the screening module is specifically configured to:
combining the first vital sign information and the second vital sign information, and performing data cleaning and data desensitization processing to determine third vital sign information;
acquiring hidden danger attribute information corresponding to each third vital sign information based on a predetermined mapping relation table;
and determining attribute information corresponding to each first hidden danger action feature, and screening each second hidden danger action feature from each first hidden danger action feature according to the hidden danger attribute information.
10. The apparatus of claim 6, wherein the first acquisition module is specifically configured to:
acquiring second action posture information and first vital sign information of the patient to be monitored based on the first detection equipment according to a target sampling frequency;
the second acquisition module is specifically configured to:
acquiring third action posture information and second vital sign information of the patient to be monitored based on the second detection device according to the target sampling frequency,
wherein the third action posture information and the second action posture information are synchronously acquired by the second detection device and the first detection device, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211570413.4A CN116013548B (en) | 2022-12-08 | 2022-12-08 | Intelligent ward monitoring method and device based on computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211570413.4A CN116013548B (en) | 2022-12-08 | 2022-12-08 | Intelligent ward monitoring method and device based on computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116013548A (en) | 2023-04-25
CN116013548B (en) | 2024-04-09
Family
ID=86027371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211570413.4A Active CN116013548B (en) | 2022-12-08 | 2022-12-08 | Intelligent ward monitoring method and device based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116013548B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117100517B (en) * | 2023-10-23 | 2023-12-19 | 河北普康医疗设备有限公司 | Electric medical bed remote control system based on 5G communication |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111184512A (en) * | 2019-12-30 | 2020-05-22 | 电子科技大学 | Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient |
CN111326230A (en) * | 2020-01-20 | 2020-06-23 | 深圳市丞辉威世智能科技有限公司 | Auxiliary training method, device, control terminal and medium |
CN111493882A (en) * | 2020-06-02 | 2020-08-07 | 上海健康医学院 | Old people falling prediction and exercise rehabilitation intervention guidance system and method |
CN112669566A (en) * | 2020-12-16 | 2021-04-16 | 问境科技(上海)有限公司 | Nursing early warning method and system based on human body posture analysis |
CN113257440A (en) * | 2021-06-21 | 2021-08-13 | 杭州金线连科技有限公司 | ICU intelligent nursing system based on patient video identification |
CN113303997A (en) * | 2021-06-30 | 2021-08-27 | 上海交通大学医学院附属第九人民医院 | Intelligent sickbed and intelligent sickbed monitoring system |
CN113688740A (en) * | 2021-08-26 | 2021-11-23 | 燕山大学 | Indoor posture detection method based on multi-sensor fusion vision |
CN113889223A (en) * | 2021-10-25 | 2022-01-04 | 合肥工业大学 | Gesture recognition rehabilitation system based on computer vision |
CN114022956A (en) * | 2021-11-01 | 2022-02-08 | 上海林港人工智能科技有限公司 | Method for multi-dimensional intelligent study and judgment of body-building action and movement effect |
WO2022036777A1 (en) * | 2020-08-21 | 2022-02-24 | 暨南大学 | Method and device for intelligent estimation of human body movement posture based on convolutional neural network |
CN114495280A (en) * | 2022-01-29 | 2022-05-13 | 吉林大学第一医院 | Whole-day non-accompanying ward patient falling detection method based on video monitoring |
KR102397248B1 (en) * | 2021-11-01 | 2022-05-13 | 주식회사 스위트케이 | Image analysis-based patient motion monitoring system and method for providing the same |
CN114900806A (en) * | 2022-06-08 | 2022-08-12 | 中国人民解放军空军军医大学 | Internet of things intensive care system |
CN115148336A (en) * | 2022-06-15 | 2022-10-04 | 赵韧 | System for evaluating the treatment effect of patients with psychological disorders with AI recognition assistance |
CN115345906A (en) * | 2022-08-22 | 2022-11-15 | 南京邮电大学 | Human body posture tracking method based on millimeter wave radar |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11638538B2 (en) * | 2020-03-02 | 2023-05-02 | Charter Communications Operating, Llc | Methods and apparatus for fall prevention |
- 2022-12-08: CN application CN202211570413.4A filed; patent CN116013548B active
Non-Patent Citations (1)
Title |
---|
Advances in the application of skeletal key point detection technology in rehabilitation assessment; Wang Rui; Zhu Ye'an; Lu Wei; Chinese Journal of Rehabilitation Medicine (Issue 07); 122-126 *
Also Published As
Publication number | Publication date |
---|---|
CN116013548A (en) | 2023-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10600204B1 (en) | Medical environment bedsore detection and prevention system | |
US10755817B2 (en) | Systems, apparatuses and methods for predicting medical events and conditions reflected in gait | |
US9710920B2 (en) | Motion information processing device | |
US7502498B2 (en) | Patient monitoring apparatus | |
US20150320343A1 (en) | Motion information processing apparatus and method | |
US20200237225A1 (en) | Wearable patient monitoring systems and associated devices, systems, and methods | |
JP2020500572A (en) | System and method for patient fall detection | |
RU2679864C2 (en) | Patient monitoring system and method | |
US10229491B1 (en) | Medical environment monitoring system | |
CN109863561A (en) | Device, system and method for patient monitoring to predict and prevent bed falls | |
CN109891516A (en) | Device, system and method for patient monitoring to predict and prevent bed falls | |
EP3504649B1 (en) | Device, system and method for patient monitoring to predict and prevent bed falls | |
JP6822328B2 (en) | Watching support system and its control method | |
WO2018036953A1 (en) | Device, system and method for patient monitoring to predict and prevent bed falls | |
Alnaggar et al. | Video-based real-time monitoring for heart rate and respiration rate | |
CN116013548B (en) | Intelligent ward monitoring method and device based on computer vision | |
US10489661B1 (en) | Medical environment monitoring system | |
US10475206B1 (en) | Medical environment event parsing system | |
US20240138775A1 (en) | Systems and methods for detecting attempted bed exit | |
Rescio et al. | Ambient and wearable system for workers’ stress evaluation | |
Inoue et al. | Bed-exit prediction applying neural network combining bed position detection and patient posture estimation | |
US10229489B1 (en) | Medical environment monitoring system | |
CN117136028A (en) | patient monitoring system | |
CN112739257B (en) | Apparatus, system, and method for providing a skeletal model | |
US20220415513A1 (en) | System and method for predicting diabetic retinopathy progression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||