CN116664925A - Method, device, equipment and storage medium for identifying target in intensive care unit - Google Patents

Method, device, equipment and storage medium for identifying a target in an intensive care unit

Info

Publication number
CN116664925A
CN116664925A (application CN202310573889.1A)
Authority
CN
China
Prior art keywords
medical
medical equipment
personnel
roi
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310573889.1A
Other languages
Chinese (zh)
Other versions
CN116664925B (en)
Inventor
袁玉峰
席子昂
刘荔
王莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Zhongnan Hospital of Wuhan University
Original Assignee
Tsinghua University
Zhongnan Hospital of Wuhan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Zhongnan Hospital of Wuhan University
Priority to CN202310573889.1A
Publication of CN116664925A
Application granted
Publication of CN116664925B
Active legal status: Current
Anticipated expiration: legal status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The application discloses a method, a device, equipment and a storage medium for identifying targets in an intensive care unit. The method comprises the following steps: acquiring personnel data and medical equipment data from an intensive care unit scene; labeling the personnel data and the medical equipment data, and training a target detection model on the labeled personnel and medical equipment to obtain person region-of-interest (ROI) coordinates and medical equipment ROI coordinates; and locating and identifying medical staff, patients and medical equipment based on the person ROI coordinates and the medical equipment ROI coordinates, and automatically identifying high-risk patients according to the interaction frequency between medical staff and the medical equipment beside each patient. The application realizes real-time online observation of patient condition, real-time supervision of medical diagnosis and treatment behavior, and real-time supervision of medical equipment use, thereby improving the treatment capacity for critically ill patients, reducing mortality, improving medical efficiency and saving medical costs.

Description

Method, device, equipment and storage medium for identifying target in intensive care unit
Technical Field
The present application relates to the field of medical technology, and in particular, to a method, an apparatus, a device, and a storage medium for identifying targets in an intensive care unit.
Background
The intensive care unit (ICU) is a special medical setting that combines the characteristics of a typical complex medical scene: critically ill patients, frequent diagnosis and treatment by medical staff, and numerous, heavily used medical devices. Because patients are critically ill, round-the-clock (24-hour) emergency responsiveness must be maintained in this setting, and medical equipment resources must be allocated efficiently.
In current ICU scenarios, medical staff observe patient condition through bedside ward rounds and remote video monitoring. Both approaches have shortcomings. Bedside observation carries large individual bias, depends entirely on the skill of the medical staff, and cannot achieve quality control over the whole diagnosis and treatment workflow. Remote video monitoring is performed manually to a large extent, so cases can be missed and true real-time online observation is not achieved. Within these limitations, medical staff can only check or recall the usage of medical equipment manually, and, especially in a large ICU, it is difficult to improve medical efficiency. The prior art therefore cannot provide automatic identification of high-risk patients or efficient allocation of medical equipment, which reduces medical efficiency and affects patient safety.
Disclosure of Invention
The main purpose of the application is to provide a method, a device, equipment and a storage medium for identifying targets in an intensive care unit, which can realize real-time online observation of patient condition, real-time supervision of medical diagnosis and treatment behavior, and real-time supervision of medical equipment use, thereby improving the treatment capacity of the intensive care unit, reducing patient mortality, improving medical efficiency and saving medical costs.
In a first aspect, the present application provides a method for identifying an object in an intensive care unit, the method comprising the steps of:
acquiring personnel data and medical equipment data of an intensive care unit scene;
labeling the personnel data and the medical equipment data, and training a target detection model on the labeled personnel and medical equipment to obtain person region-of-interest (ROI) coordinates and medical equipment ROI coordinates;
and locating and identifying medical staff, patients and medical equipment based on the person ROI coordinates and the medical equipment ROI coordinates, and automatically identifying high-risk patients according to the interaction frequency between medical staff and the medical equipment beside each patient.
With reference to the first aspect, as an optional implementation, a personnel tag and a medical device tag are set respectively to label the personnel data and the medical device data, where the personnel data include medical staff and patients, and the medical devices include: sickbed, computer, bed-end table, treatment tray, infusion stand, electrocardiograph monitor, ventilator, hemodialysis machine, extracorporeal membrane oxygenation (ECMO) equipment, hand sanitizer and mobile phone;
and the labeled personnel and medical equipment are trained with the YOLOv5 target detection model to obtain person ROI coordinates and the ROI coordinates of each medical device.
With reference to the first aspect, as an optional implementation, the person ROI coordinates are compared with the sickbed ROI coordinates, and when the overlapping area between them is smaller than a set area, the person ROI is identified as medical staff;
when the overlapping area between the person ROI coordinates and the sickbed ROI coordinates is greater than or equal to the set area, the aspect ratio of the person ROI is calculated:
when the vertical extent of the person ROI is less than or equal to the horizontal extent, the person ROI is identified as a patient;
when the vertical extent is greater than the horizontal extent, the person ROI is identified as medical staff;
and the labeled medical equipment is input into an instance segmentation model for training to obtain medical equipment Mask regions, and the Mask regions are compared with the medical equipment ROIs to locate and identify the medical equipment.
With reference to the first aspect, as an optional implementation, the Mask region of a medical device output by the instance segmentation model is compared with the ROI output for that object by the target detection model;
Mask areas outside the ROI are eliminated, and the Mask area inside the ROI is retained as the identification region of the medical device.
With reference to the first aspect, as an optional implementation, the current state and number of devices are determined from the located and identified medical equipment, so that the medical equipment can be allocated efficiently.
With reference to the first aspect, as an optional implementation, when the interaction frequency between medical staff and the medical equipment beside a patient is greater than a set number of times, the patient is determined to be a high-risk patient;
and when the interaction frequency between medical staff and the medical equipment beside the patient is less than the set number of times, the patient is determined to be a low-risk patient.
With reference to the first aspect, as an optional implementation, the personnel data and medical equipment data of the intensive care unit scene are collected by symmetric cameras installed in the scene, where the cameras point from the bed end toward the bed head area and from the bed head toward the bed end area, respectively.
In a second aspect, the present application provides an in-intensive care unit target recognition apparatus, comprising:
the acquisition unit is used for acquiring personnel data and medical equipment data of an intensive care unit scene;
the processing unit is used for labeling the personnel data and the medical equipment data, and training the target detection model on the labeled personnel and medical equipment to obtain person region-of-interest (ROI) coordinates and medical equipment ROI coordinates;
and the identification unit is used for locating and identifying medical staff, patients and medical equipment based on the person ROI coordinates and the medical equipment ROI coordinates, and automatically identifying high-risk patients according to the interaction frequency between medical staff and the medical equipment beside each patient.
In a third aspect, the present application also provides an electronic device, including: a processor; a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method of any of the first aspects.
In a fourth aspect, the present application also provides a computer readable storage medium storing computer program instructions which, when executed by a computer, cause the computer to perform the method of any one of the first aspects.
The application provides a method, a device, equipment and a storage medium for identifying targets in an intensive care unit, wherein the method comprises the following steps: acquiring personnel data and medical equipment data from an intensive care unit scene; labeling the personnel data and the medical equipment data, and training a target detection model on the labeled personnel and medical equipment to obtain person region-of-interest (ROI) coordinates and medical equipment ROI coordinates; and locating and identifying medical staff, patients and medical equipment based on the person ROI coordinates and the medical equipment ROI coordinates, and automatically identifying high-risk patients according to the interaction frequency between medical staff and the medical equipment beside each patient. The application realizes real-time online observation of patient condition, real-time supervision of medical diagnosis and treatment behavior, and real-time supervision of medical equipment use, thereby improving the treatment capacity for critically ill patients, reducing mortality, improving medical efficiency and saving medical costs.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart of a method for identifying an object in an intensive care unit according to an embodiment of the present application;
fig. 2 is a schematic diagram of an object recognition device in an intensive care unit according to an embodiment of the present application;
FIG. 3 is a schematic diagram of personnel identification provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of medical device identification provided in an embodiment of the present application;
fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic diagram of a computer readable program medium according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
Under the prior art, automatic identification of high-risk patients (for example, when medical staff gather around a patient's bedside medical equipment for a long time, or frequently operate the medical equipment beside a patient) and efficient allocation of medical equipment (knowing whether each device is in use) cannot be achieved, which reduces medical efficiency and affects patient safety.
The embodiments of the application provide a method, a device, equipment and a storage medium for identifying targets in an intensive care unit, which can realize real-time online observation of patient condition, real-time supervision of medical diagnosis and treatment behavior, and real-time supervision of medical equipment use, thereby improving the treatment capacity of the intensive care unit, reducing patient mortality, improving medical efficiency and saving medical costs.
To achieve these technical effects, the general idea of the application is as follows:
a method for target identification in an intensive care unit, the method comprising the steps of:
s101: personnel data and medical equipment data of an intensive care unit scene are collected.
S102: labeling the personnel data and the medical equipment data, and training a target detection model on the labeled personnel and medical equipment to obtain person region-of-interest (ROI) coordinates and medical equipment ROI coordinates.
S103: locating and identifying medical staff, patients and medical equipment based on the person ROI coordinates and the medical equipment ROI coordinates, and automatically identifying high-risk patients according to the interaction frequency between medical staff and the medical equipment beside each patient.
Embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a method for identifying an object in an intensive care unit according to the present application, as shown in fig. 1, the method includes the steps of:
step S101, acquiring personnel data and medical equipment data of an intensive care unit scene.
Specifically, the personnel data and medical equipment data of the intensive care unit scene are collected by the symmetric cameras installed in the scene, where the cameras point from the bed end toward the bed head area and from the bed head toward the bed end area, respectively.
For ease of understanding and illustration: the building layout of the ICU is surveyed, and the placement positions and number of cameras are determined according to the environment of the on-site bed units (generally two depth cameras are used; the positions and number of cameras can be set as required).
In one embodiment, 2 cameras are erected in the unit space of a sickbed in the ICU, pointing from the bed end toward the bed head area and from the bed head toward the bed end area, respectively; 2 videos are recorded, and the video stream of each camera is processed independently. Two cameras facing in opposite directions avoid recognition failures caused by occlusion: in any situation, the medical staff are exposed to the field of view of at least one camera.
Step S102: labeling the personnel data and the medical equipment data, and training a target detection model on the labeled personnel and medical equipment to obtain person region-of-interest (ROI) coordinates and medical equipment ROI coordinates.
Specifically, a personnel tag and a medical device tag are set respectively, and the personnel data and medical device data are labeled, where the personnel data comprise medical staff and patients, and the medical devices comprise: sickbed, computer, bed-end table, treatment tray, infusion stand, electrocardiograph monitor, ventilator, hemodialysis machine, extracorporeal membrane oxygenation (ECMO) equipment, hand sanitizer and mobile phone. The labeled personnel and medical equipment are then trained with the YOLOv5 target detection model to obtain person ROI coordinates and the ROI coordinates of each medical device.
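As a purely illustrative sketch (the file name, dataset paths, and English class names are assumptions; the patent does not disclose its label files), a YOLOv5 data configuration covering the person and device classes above might look like:

```yaml
# icu.yaml — hypothetical YOLOv5 dataset config; paths and class order are assumptions
path: datasets/icu        # dataset root (assumed layout)
train: images/train
val: images/val

names:
  0: medical_staff
  1: patient
  2: sickbed
  3: computer
  4: bed_end_table
  5: treatment_tray
  6: infusion_stand
  7: ecg_monitor
  8: ventilator
  9: hemodialysis_machine
  10: ecmo
  11: hand_sanitizer
  12: mobile_phone
```

Training would then follow the standard YOLOv5 repository workflow, e.g. `python train.py --data icu.yaml --weights yolov5s.pt --img 640`.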
In one embodiment, medical staff, patients and sickbeds are labeled separately from the data acquired by the cameras, forming three classes of labels, and the person ROI coordinates obtained by training the YOLOv5 target detection model are used to identify and distinguish medical staff from patients.
In one embodiment, the medical equipment is labeled from the medical equipment data acquired by the cameras and trained in the YOLOv5 target detection model, and the resulting medical equipment ROI coordinates are used for medical equipment identification.
ROI stands for region of interest. In machine vision and image processing, the region to be processed is outlined on the image with a box, circle, ellipse, irregular polygon, etc., and is called the region of interest.
Step S103: locating and identifying medical staff, patients and medical equipment based on the person ROI coordinates and the medical equipment ROI coordinates, and automatically identifying high-risk patients according to the interaction frequency between medical staff and the medical equipment beside each patient.
Specifically, the person ROI coordinates are compared with the sickbed ROI coordinates. When the overlapping area between them is smaller than a set area, the person ROI is identified as medical staff. When the overlapping area is greater than or equal to the set area, the aspect ratio of the person ROI is calculated: when the vertical extent of the person ROI is less than or equal to the horizontal extent, the person ROI is identified as a patient; when the vertical extent is greater than the horizontal extent, the person ROI is identified as medical staff. In addition, the labeled medical equipment is input into an instance segmentation model for training to obtain medical equipment Mask regions, and the Mask regions are compared with the medical equipment ROIs to locate and identify the medical equipment.
For ease of understanding: the YOLOv5 target detection model cannot distinguish medical staff from patients with 100% accuracy, so auxiliary correction is performed on the model's detection results.
The rectangular ROIs of personnel (including medical staff and patients) and sickbeds output by the target detection model each have 4-point coordinates. The person coordinate region is compared with the sickbed coordinate region: if 80% or more of the person region coincides with the sickbed region, the next optimization step is entered; if the coincident area is less than 80%, the person ROI is labeled as medical staff.
The next optimization step is as follows. Patients in an ICU ward have almost completely lost autonomous mobility and lie supine or prone within the sickbed region during treatment, so the aspect ratio of a patient's rectangular ROI differs significantly from that of standing medical staff. The aspect ratio of the person coordinates above is therefore calculated: if the vertical (height) extent is less than or equal to the horizontal (width) extent, the person is confirmed as a patient; if the vertical extent is greater than the horizontal extent, the person's label is corrected to medical staff.
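The two-stage rule described above (bed-overlap check, then aspect-ratio check) can be sketched as follows; the box format, the 0.8 threshold constant, and the function names are illustrative assumptions rather than the patent's implementation:

```python
def overlap_fraction(person_box, bed_box):
    """Fraction of the person ROI area that lies inside the sickbed ROI.

    Boxes are (x1, y1, x2, y2) axis-aligned rectangles (assumed format).
    """
    px1, py1, px2, py2 = person_box
    bx1, by1, bx2, by2 = bed_box
    ix = max(0, min(px2, bx2) - max(px1, bx1))   # intersection width
    iy = max(0, min(py2, by2) - max(py1, by1))   # intersection height
    person_area = (px2 - px1) * (py2 - py1)
    return (ix * iy) / person_area if person_area else 0.0

def classify_person(person_box, bed_box, overlap_thresh=0.8):
    """Label a detected person as 'medical_staff' or 'patient'.

    Step 1: less than `overlap_thresh` of the person ROI inside the bed
    ROI -> medical staff. Step 2: otherwise use the ROI aspect ratio:
    height <= width (lying posture) -> patient, else medical staff.
    """
    if overlap_fraction(person_box, bed_box) < overlap_thresh:
        return "medical_staff"
    x1, y1, x2, y2 = person_box
    height, width = y2 - y1, x2 - x1
    return "patient" if height <= width else "medical_staff"
```

For example, a wide bounding box lying fully inside the bed ROI is classified as a patient, matching the posture reasoning above, while a tall box in the same position is corrected to medical staff.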
Optionally, after training and optimization, the numbers of medical staff and patients in the field of view of each camera are counted.
The medical equipment is trained in two steps and the results are combined. Specifically: training in the YOLOv5 target detection model outputs a 4-point rectangular ROI for each class of (medical equipment) object, and training in a Mask R-CNN instance segmentation model outputs a Mask region for each class of object. Existing instance segmentation models misjudge the Mask regions of various objects in the ICU scene, leading to high confusion between classes, so the higher-precision target detection results are used synchronously for correction to improve detection precision.
Purpose of medical device identification: treatment of patients in the ICU is accomplished largely by means of medical equipment, so high-risk patients can be identified by recognizing the medical equipment and the frequency with which medical staff operate it. At the same time, medical staff can be assisted in knowing in real time whether the different types of equipment are currently in use, which equipment is currently idle and available for a patient, where each item is located, and where equipment can be obtained most quickly if an emergency patient needs rescue.
Benefits after identification: first, real-time identification of patients at different risk levels in the ICU (for example, a patient whose surrounding equipment receives only low-frequency contact from medical staff is probably stable under the existing treatment and is a low-risk patient, whereas a patient whose bedside equipment is frequently contacted by medical staff probably has unstable vital signs and is at high risk of death). Second, improved management efficiency of medical equipment in the ICU. ICU devices are numerous and complex; manually taking inventory in daily work is inefficient, and it is otherwise impossible to know whether a device is in use and for which patient. With this technology, the usage and location of all kinds of devices can be monitored in real time, statistics can be compiled on that basis, and the number and positions of the remaining usable devices are made available to clinical staff for management.
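As a hedged illustration of the equipment-statistics idea (the detection record schema and field names are assumptions; the patent does not specify a data format), per-class usage counts and idle-device positions could be tallied like this:

```python
from collections import Counter

def summarize_devices(detections):
    """Count idle vs. in-use devices per class.

    `detections` is a list of dicts such as
    {"cls": "ventilator", "in_use": True, "pos": (x, y)} — a purely
    illustrative schema, not the patent's data format.
    Returns {class_name: {"in_use": n, "idle": m}}.
    """
    summary = {}
    for det in detections:
        entry = summary.setdefault(det["cls"], Counter())
        entry["in_use" if det["in_use"] else "idle"] += 1
    return {cls: dict(counts) for cls, counts in summary.items()}

def idle_positions(detections, cls):
    """Positions of idle devices of one class, e.g. for locating the
    nearest free ventilator during a rescue."""
    return [d["pos"] for d in detections
            if d["cls"] == cls and not d["in_use"]]
```

Such a summary would support the management use case above: clinical staff can see at a glance how many devices of each type remain usable and where they are.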
Optionally, target detection and instance segmentation are performed on the video data simultaneously, and the independent target detection model is used to correct the regions output by the instance segmentation model, improving the coordinate precision of the objects output by instance segmentation.
It should be noted that the target detection model can accurately locate an object in the image and output its bounding box, generally a 4-point rectangular ROI area, whereas the instance segmentation model classifies specific objects at the pixel level within the bounding box and must distinguish different individuals of the same class. The instance segmentation model outputs the region occupied by a specific item (i.e., an instance), called the Mask region, which is typically irregular.
In one embodiment, the Mask region of a medical device output by the instance segmentation model is compared with the ROI output for that object class by the target detection model; Mask areas outside the ROI are eliminated, and the Mask area inside the ROI is retained as the identification region of the medical device.
Taking the Mask region of each class of object as the primary result, the correction traverses the detections in turn; there are two cases:
case 1: and (3) overlapping a Mask region of a certain object with the target detection ROI, comparing the Mask region of various equipment objects output by the example segmentation model with the ROI output by the object in the target detection model, eliminating the Mask region outside the ROI, and only reserving the Mask region inside the ROI as the identification region of the object.
Case 2: a Mask region is output for an object, but the target detection model outputs no ROI coordinates for it. The Mask region is discarded to prevent misjudgment and further improve recognition accuracy.
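The two correction cases can be sketched with a plain boolean pixel grid (the grid representation and function name are assumptions; a real pipeline would operate on Mask R-CNN tensor output):

```python
def correct_mask(mask, roi_boxes):
    """Correct an instance-segmentation mask with detector ROIs.

    `mask` is a 2-D boolean grid (list of rows); `roi_boxes` are
    (x1, y1, x2, y2) detector rectangles for the same object class.
    Case 1: an ROI overlaps the mask -> keep only mask pixels inside it.
    Case 2: no ROI overlaps the mask -> discard the mask entirely (None).
    """
    h, w = len(mask), len(mask[0])

    def inside(x, y, box):
        x1, y1, x2, y2 = box
        return x1 <= x < x2 and y1 <= y < y2

    for box in roi_boxes:
        if any(mask[y][x] and inside(x, y, box)
               for y in range(h) for x in range(w)):
            # Case 1: clip the mask to the matching ROI.
            return [[mask[y][x] and inside(x, y, box) for x in range(w)]
                    for y in range(h)]
    # Case 2: no detector ROI supports this mask — reject it.
    return None
```

Rejecting unsupported masks (Case 2) is what lets the higher-precision detector veto spurious segmentation output, as described above.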
In one embodiment, the current state and number of devices are determined from the identified medical equipment so that the medical equipment can be allocated efficiently.
In one embodiment, when the interaction frequency between medical staff and the medical equipment beside a patient is greater than a set number of times, the patient is determined to be a high-risk patient; when the interaction frequency is less than the set number of times, the patient is determined to be a low-risk patient. The threshold number of interactions between medical staff and equipment can be customized as required.
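A minimal sketch of the interaction-frequency rule, assuming an illustrative event log (the event schema and the default threshold of 10 are assumptions; the patent leaves the threshold user-configurable):

```python
def count_interactions(events, patient_id):
    """Count staff-device interaction events recorded beside one patient.

    `events` is an illustrative list of dicts such as
    {"patient": id, "staff": id, "device": name}.
    """
    return sum(1 for e in events if e["patient"] == patient_id)

def classify_patient_risk(events, patient_id, threshold=10):
    """'high_risk' if bedside interactions exceed the configurable
    threshold, else 'low_risk' (the default value 10 is assumed)."""
    count = count_interactions(events, patient_id)
    return "high_risk" if count > threshold else "low_risk"
```

In practice the event log would be derived from the vision pipeline, e.g. by counting intervals in which a staff ROI overlaps a device ROI beside a given bed.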
In summary: first, this technology can monitor in real time the positions of medical staff and patients in a hospital ICU ward, distinguishing between the different types of personnel and compiling real-time statistics. Second, although ICU medical equipment is complex and numerous, the technology can accurately locate and identify all kinds of medical equipment and perform real-time statistical analysis.
The technology can realize real-time identification, distinguishing, positioning and statistics of personnel and medical equipment in an ICU ward, further realize on-line observation of patient conditions, implementation supervision of medical diagnosis and treatment behaviors and implementation supervision of medical equipment use (interaction frequency of medical care and equipment) by means of the technology, thereby improving the treatment capacity of severe patients, reducing the death rate, improving the medical efficiency and saving the medical cost.
Referring to fig. 2, fig. 2 is a schematic diagram of a target recognition device in an intensive care unit according to the present application. As shown in fig. 2, the device includes:
an acquisition unit 201, configured to acquire personnel data and medical equipment data of an intensive care unit scene;
a processing unit 202, configured to label the personnel data and the medical equipment data, and to train a target detection model on the labeled personnel and medical equipment to obtain personnel region-of-interest (ROI) coordinates and medical equipment ROI coordinates;
an identification unit 203, configured to locate and identify medical staff, patients and medical equipment based on the personnel ROI coordinates and the medical equipment ROI coordinates, and to automatically identify high-risk patients according to the frequency of interaction between medical staff and the medical devices beside each patient.
Further, in a possible implementation, the processing unit is further configured to set personnel labels and medical equipment labels, respectively, to annotate the personnel data and the medical equipment data, where the personnel include medical staff and patients, and the medical equipment includes: sickbed, computer, bed-end table, treatment tray, infusion stand, electrocardiograph monitor, ventilator, hemodialysis machine, extracorporeal membrane oxygenation (ECMO) equipment, hand sanitizer and mobile phone;
and to train the YoloV5 target detection model on the labeled personnel and medical equipment to obtain the personnel ROI coordinates and the ROI coordinates of each medical device.
Further, in a possible implementation, the identification unit is further configured to compare the personnel region-of-interest ROI coordinates with the sickbed ROI coordinates, and to identify the personnel ROI as medical staff when the overlapping area of the two is smaller than a set area;
when the overlapping area of the personnel ROI coordinates and the sickbed ROI coordinates is greater than or equal to the set area, the aspect ratio of the personnel ROI is calculated:
when the longitudinal extent of the personnel ROI is less than or equal to the transverse extent, the personnel ROI is identified as a patient;
when the longitudinal extent is greater than the transverse extent, the personnel ROI is identified as medical staff;
and the labeled medical equipment is input into an instance segmentation model for training to obtain the medical equipment Mask regions, which are compared with the medical equipment ROIs to locate and identify the medical equipment.
Further, in a possible implementation, the identification unit is further configured to compare the Mask regions of the medical devices output by the instance segmentation model with the ROIs output by the target detection model for the corresponding items;
Mask regions outside the ROI are eliminated, and the Mask regions inside the ROI are retained as the identification regions of the medical devices.
Further, in a possible implementation, the device also includes a determining unit, configured to determine the current state and number of the medical devices from the located and identified devices, so that the devices can be allocated efficiently.
Further, in a possible implementation, the identification unit is further configured to determine that a patient is a high-risk patient when the frequency of interaction between medical staff and the medical devices beside the patient exceeds a set number of times;
and a low-risk patient when that frequency falls below the set number.
Further, in a possible implementation, the acquisition unit is further configured to acquire the personnel data and medical equipment data of the intensive care unit scene from a pair of symmetrically mounted cameras installed in the scene, one facing from the foot of the bed toward the head area and the other from the head of the bed toward the foot area.
Referring to fig. 3, fig. 3 is a schematic diagram of personnel identification provided by the present application, as shown in fig. 3:
Medical staff, patients and sickbeds are labeled from the acquired data (video-stream-1 data from cameras 1 and 2) as three classes, and the YoloV5 target detection model is trained to obtain the ROIs of medical staff, patients and sickbeds.
Training result optimization 1: the rectangular person (staff or patient) and sickbed ROIs output by the target detection model (a rectangle drawn on the processed image around the region to be processed, called a region of interest, abbreviated ROI) each carry four corner coordinates. The person coordinate region is compared with the sickbed coordinate region: if 80% or more of the person region coincides with the sickbed region, the next optimization step is applied; if the overlap is below 80%, the person ROI is labeled as medical staff.
Training result optimization 2: patients in an ICU ward have almost entirely lost autonomous mobility and lie supine or prone within the sickbed region during treatment, so the aspect ratio of their rectangular ROI differs markedly from that of standing medical staff. An aspect-ratio check is therefore applied to the person coordinates passed on from optimization 1: if the longitudinal (height) extent is less than or equal to the transverse (width) extent, the person is confirmed as a patient; if the longitudinal extent is greater than the transverse extent, the person's label is corrected to medical staff.
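Taken together, optimizations 1 and 2 form a two-stage rule: an 80% overlap test against the sickbed ROI, then an aspect-ratio test. A sketch with illustrative (x1, y1, x2, y2) box tuples (the function names are not from the patent):

```python
def overlap_ratio(person, bed):
    """Fraction of the person ROI's area that lies inside the bed ROI."""
    x1, y1 = max(person[0], bed[0]), max(person[1], bed[1])
    x2, y2 = min(person[2], bed[2]), min(person[3], bed[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (person[2] - person[0]) * (person[3] - person[1])
    return inter / area if area else 0.0

def classify_person(person_roi, bed_roi):
    # Optimization 1: less than 80% overlap with the bed -> medical staff.
    if overlap_ratio(person_roi, bed_roi) < 0.8:
        return "staff"
    # Optimization 2: a lying patient's box is wider than it is tall.
    width = person_roi[2] - person_roi[0]
    height = person_roi[3] - person_roi[1]
    return "patient" if height <= width else "staff"
```

A tall, narrow box that merely brushes the bed region falls through to "staff" at the first test, while a wide box lying within the bed region is classified as "patient".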
After training and optimization, the number of medical staff and patients within each camera's field of view is counted.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating identification of a medical device according to the present application, as shown in fig. 4:
Using the video-stream-2 data from cameras 1 and 2, the designated items in the ICU, such as sickbeds, computers, bed-end tables, treatment trays, infusion stands, electrocardiograph monitors, ventilators, hemodialysis machines, extracorporeal membrane oxygenation (ECMO) equipment, hand sanitizer and mobile phones, are labeled, and two models are trained and used jointly in two steps: the YoloV5 target detection model, which outputs a four-point rectangular ROI for each item, and the MaskRCNN instance segmentation model, which outputs a Mask region for each item.
Because an off-the-shelf instance segmentation model tends to misjudge the Mask regions of the various items in an ICU scene, where items are easily confused with one another, the higher-precision results of the target detection model are used simultaneously to correct the segmentation output and improve detection accuracy.
The Mask regions of the equipment items are compared with the ROIs output by the target detection model for those items. When a Mask region of an item coincides with a target-detection ROI, the Mask regions of the item output by the instance segmentation model are compared with the detector's ROIs, Mask regions outside the ROI are eliminated, and only the Mask region inside the ROI is retained as the identification region of the item. When a Mask region is output for an item but the target detection model outputs no ROI coordinates for it, the Mask region is discarded. After this correction, the number of devices and items of each type within each camera's field of view is counted.
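The per-camera tally after correction is then a plain count of retained regions per label; a sketch using the standard library, with the input layout assumed for illustration:

```python
from collections import Counter

def count_items(corrected_regions):
    """Tally retained Mask regions per camera and per item label.

    corrected_regions: iterable of (camera_id, label) pairs, one per
    Mask region retained after the ROI correction step (assumed layout).
    """
    per_camera = {}
    for camera_id, label in corrected_regions:
        per_camera.setdefault(camera_id, Counter())[label] += 1
    return per_camera
```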
An electronic device 500 according to such an embodiment of the application is described below with reference to fig. 5. The electronic device 500 shown in fig. 5 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 5, the electronic device 500 is embodied in the form of a general purpose computing device. The components of electronic device 500 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, and a bus 530 connecting the various system components, including the memory unit 520 and the processing unit 510.
Wherein the storage unit stores program code that is executable by the processing unit 510 such that the processing unit 510 performs steps according to various exemplary embodiments of the present application described in the above-mentioned "example methods" section of the present specification.
The storage unit 520 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 521 and/or cache memory 522, and may further include Read Only Memory (ROM) 523.
The storage unit 520 may also include a program/utility 524 having a set (at least one) of program modules 525, such program modules 525 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 530 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The electronic device 500 may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 500, and/or any device (e.g., router, modem, etc.) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 550. Also, electronic device 500 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 560. As shown, network adapter 560 communicates with other modules of electronic device 500 over bus 530. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 500, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
According to an aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification. In some possible embodiments, the various aspects of the application may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the application as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
Referring to fig. 6, a program product 600 for implementing the above-described method according to an embodiment of the present application is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present application, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (10)

1. A method for identifying an object in an intensive care unit, comprising:
acquiring personnel data and medical equipment data of an intensive care unit scene;
labeling the personnel data and the medical equipment data, and training the labeled personnel and medical equipment by utilizing a target detection model to obtain the ROI coordinates of the personnel interest area and the ROI coordinates of the medical equipment;
and positioning and identifying medical staff, patients and medical equipment based on the staff interest region ROI coordinates and the medical equipment ROI coordinates, and automatically identifying high-risk patients according to the interaction frequency of the medical staff and the medical equipment beside the patients.
2. The method of claim 1, wherein labeling the personnel data and the medical device data and training the labeled personnel and medical device with a target detection model to obtain personnel region of interest ROI coordinates and medical device ROI coordinates, comprises:
setting personnel labels and medical equipment labels, respectively, to annotate the personnel data and the medical equipment data, wherein the personnel data comprises medical staff and patients, and the medical equipment comprises: sickbed, computer, bed-end table, treatment tray, infusion stand, electrocardiograph monitor, ventilator, hemodialysis machine, extracorporeal membrane oxygenation (ECMO) equipment, hand sanitizer and mobile phone;
and training the marked personnel and medical equipment by using the YoloV5 target detection model to obtain personnel ROI coordinates and ROI coordinates of each medical equipment.
3. The method of claim 1, wherein the positioning and identifying the healthcare worker, the patient, and the medical device based on the person region of interest ROI coordinates and the medical device ROI coordinates comprises:
comparing the personnel region-of-interest ROI coordinates with the sickbed ROI coordinates, and identifying the personnel ROI as medical staff when the overlapping area of the two is smaller than a set area;
when the overlapping area of the personnel ROI coordinates and the sickbed ROI coordinates is greater than or equal to the set area, calculating the aspect ratio of the personnel ROI:
when the longitudinal extent of the personnel ROI is less than or equal to the transverse extent, identifying the personnel ROI as a patient;
when the longitudinal extent is greater than the transverse extent, identifying the personnel ROI as medical staff;
and inputting the labeled medical equipment into an instance segmentation model for training to obtain the medical equipment Mask regions, and comparing the Mask regions with the medical equipment ROIs to locate and identify the medical equipment.
4. A method according to claim 3, wherein comparing the medical device Mask region with the medical device ROI to locate the medical device comprises:
comparing the Mask regions of the medical devices output by the instance segmentation model with the ROIs output by the target detection model for the corresponding items;
and eliminating the Mask regions outside the ROI, retaining the Mask regions inside the ROI as the identification regions of the medical devices.
5. A method according to claim 3, further comprising:
and determining the current state and number of the devices according to the located and identified medical equipment, so as to allocate the medical equipment efficiently.
6. The method of claim 1, wherein automatically identifying high risk patients based on the frequency of medical personnel interaction with medical devices beside the patient comprises:
when the interaction frequency of the medical staff and medical equipment beside the patient is greater than a set number of times, determining that the patient is a high-risk patient;
and when the interaction frequency of the medical staff and medical equipment beside the patient is smaller than the set times, determining that the patient is a low-risk patient.
7. The method of claim 1, wherein the acquiring and labeling personnel and medical device data of the intensive care unit scene comprises:
acquiring the personnel data and medical equipment data of the intensive care unit scene from symmetrically mounted cameras installed in the scene, wherein one camera faces from the foot of the bed toward the head area and the other from the head of the bed toward the foot area.
8. An in-intensive care unit target recognition device, comprising:
the acquisition unit is used for acquiring personnel data and medical equipment data of an intensive care unit scene;
the processing unit is used for marking the personnel data and the medical equipment data, training the marked personnel and medical equipment by utilizing the target detection model, and obtaining the personnel region of interest (ROI) coordinates and the medical equipment ROI coordinates;
and the identification unit is used for carrying out positioning identification on the medical staff, the patient and the medical equipment based on the ROI coordinates of the staff interest area and the ROI coordinates of the medical equipment, and automatically identifying the high-risk patient according to the interaction frequency of the medical staff and the medical equipment beside the patient.
9. An electronic device, the electronic device comprising:
a processor;
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method of any of claims 1 to 7.
10. A computer readable storage medium, characterized in that it stores computer program instructions, which when executed by a computer, cause the computer to perform the method according to any one of claims 1 to 7.
CN202310573889.1A 2023-05-17 2023-05-17 Method, device, equipment and storage medium for identifying target in intensive care unit Active CN116664925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310573889.1A CN116664925B (en) 2023-05-17 2023-05-17 Method, device, equipment and storage medium for identifying target in intensive care unit


Publications (2)

Publication Number Publication Date
CN116664925A true CN116664925A (en) 2023-08-29
CN116664925B CN116664925B (en) 2023-12-26

Family

ID=87725279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310573889.1A Active CN116664925B (en) 2023-05-17 2023-05-17 Method, device, equipment and storage medium for identifying target in intensive care unit

Country Status (1)

Country Link
CN (1) CN116664925B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030174873A1 (en) * 2002-02-08 2003-09-18 University Of Chicago Method and system for risk-modulated diagnosis of disease
US20130262357A1 (en) * 2011-10-28 2013-10-03 Rubendran Amarasingham Clinical predictive and monitoring system and method
US20150025329A1 (en) * 2013-07-18 2015-01-22 Parkland Center For Clinical Innovation Patient care surveillance system and method
CN109036500A (en) * 2018-07-19 2018-12-18 苏州麦迪斯顿医疗科技股份有限公司 A kind of Clinical Alert method, apparatus, equipment and storage medium
CN113100716A (en) * 2021-04-16 2021-07-13 东南大学附属中大医院 Patient centralized monitoring method and device, electronic equipment and storage medium
CN113257440A (en) * 2021-06-21 2021-08-13 杭州金线连科技有限公司 ICU intelligent nursing system based on patient video identification
CN114141390A (en) * 2021-12-09 2022-03-04 温澜清江(北京)科技有限公司 Remote ICU data network system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LOUISE S. VAN GALEN ET AL.: "Delayed Recognition of Deterioration of Patients in General Wards Is Mostly Caused by Human Related Monitoring Failures: A Root Cause Analysis of Unplanned ICU Admissions", 《PLOS》 *
ZHANG Yuanyuan et al.: "Analysis of the predictive value of the Braden scale and MEWS for pressure injuries in severe trauma patients in the intensive care unit", Journal of Qilu Nursing (齐鲁护理杂志) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726821A (en) * 2024-02-05 2024-03-19 武汉理工大学 Medical behavior identification method for region shielding in medical video
CN117726821B (en) * 2024-02-05 2024-05-10 武汉理工大学 Medical behavior identification method for region shielding in medical video

Also Published As

Publication number Publication date
CN116664925B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
US11699526B2 (en) Alarm notification system
US11182899B2 (en) Systems and methods for processing electronic images to detect contamination
CN116664925B (en) Method, device, equipment and storage medium for identifying target in intensive care unit
US10365946B2 (en) Clustering based process deviation detection
CN110050276B (en) Patient identification system and method
US20160162641A1 (en) Medical support server and medical support system
US11734954B2 (en) Face recognition method, device and electronic equipment, and computer non-volatile readable storage medium
US20210192270A1 (en) Person indentification systems and methods
WO2024046433A1 (en) Epidemic prevention supervision method and apparatus for realizing ia by combining rpa and ai, and electronic device
EP3467770B1 (en) Method for analysing a medical imaging data set, system for analysing a medical imaging data set, computer program product and a computer-readable medium
CN102419890A (en) Monitor alarming system and method
CN115101186B (en) Hospital treatment information management method and device based on big data
US20190290872A1 (en) Tracheal intubation procedure monitoring
WO2023005662A1 (en) Image processing method and apparatus, electronic device, program product and computer-readable storage medium
CN111192679B (en) Method, device and storage medium for processing image data abnormality
Baskaran et al. Using facial landmark detection on thermal images as a novel prognostic tool for emergency departments
CN116664819B (en) Medical staff hand recognition positioning method, device, equipment and storage medium
Majumder Development of a fast Fourier transform-based analytical method for COVID-19 diagnosis from chest X-ray images using GNU octave
CN117495656A (en) Face privacy protection method, device and system and electronic equipment
Kodhai Smart attendance system for COVID-19
CN108766549A (en) A kind of medical image intelligence high in the clouds browse displays system and application method
EP4181080A1 (en) Monitoring an entity in a medical facility
Su et al. Combining frontal and profile view facial images to predict difficult-to-intubate patients using AI
TWM618053U (en) Smart electric eye system for epidemic prevention
WO2024015620A1 (en) Tracking performance of medical procedures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant