CN113627219A - Instrument detection method and device and computer equipment - Google Patents


Info

Publication number
CN113627219A
CN113627219A
Authority
CN
China
Prior art keywords
body cavity
instrument
instruments
image
images
Prior art date
Legal status: Pending (an assumption, not a legal conclusion)
Application number
CN202010384325.XA
Other languages
Chinese (zh)
Inventor
李晓东
贺光琳
赵智全
Current Assignee
Hangzhou Haikang Huiying Technology Co ltd
Original Assignee
Hangzhou Haikang Huiying Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Haikang Huiying Technology Co ltd
Priority claimed from CN202010384325.XA
Publication of CN113627219A

Classifications

    • A61B 90/08: Accessories or related features not otherwise provided for
    • A61B 90/361: Image-producing devices, e.g. surgical cameras
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/08: Neural networks; learning methods
    • G16H 40/20: ICT specially adapted for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • A61B 2090/0804: Counting number of instruments used; instrument detectors

Abstract

The application discloses an instrument detection method and device and computer equipment, belonging to the field of information technology. The method comprises the following steps: acquiring a body cavity image through an endoscope; detecting an instrument action according to the acquired body cavity image, wherein an instrument action refers to the action of taking out or putting in an instrument; and adjusting the number of instruments according to the detected instrument action, wherein the number of instruments refers to the number of instruments located in the body. The application can count the number of instruments in real time during an operation, making it convenient for the doctor to check the instruments during the operation or recheck them afterward, thereby effectively preventing medical accidents caused by unneeded surgical instruments being left in the body.

Description

Instrument detection method and device and computer equipment
Technical Field
The present application relates to the field of information technology, and in particular, to a method and an apparatus for detecting an instrument, and a computer device.
Background
With medical advances, surgery may be performed using minimally invasive techniques. Minimally invasive techniques require only a small incision through which the endoscope is inserted through the patient's skin into the body cavity. A camera may be built into the endoscope to capture images of the body cavity. The body cavity image can be displayed to the doctor in real time, helping the doctor examine and treat from an intuitive internal view. During the surgical procedure, surgical instruments are often needed for auxiliary treatment. For example, hemostasis and duct closure are performed using clips, or hemostasis and debridement are performed using gauze.
Disclosure of Invention
The application provides an instrument detection method, an instrument detection device, computer equipment, and a storage medium, which can effectively prevent medical accidents caused by surgical instruments being unintentionally left in the body.
In one aspect, there is provided a method of instrument detection, the method comprising:
acquiring a body cavity image through an endoscope;
detecting an instrument action according to the acquired body cavity image, wherein an instrument action refers to the action of taking out or putting in an instrument;
adjusting the number of instruments according to the detected instrument action, wherein the number of instruments refers to the number of instruments located in the body.
Optionally, the detecting an instrument action according to the acquired body cavity image includes:
each time a plurality of body cavity images is acquired, detecting the instrument action corresponding to the plurality of body cavity images, wherein the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, or are extracted from the body cavity images acquired by the endoscope within a preset time period.
Optionally, the detecting the instrument action corresponding to the plurality of body cavity images includes:
inputting the plurality of body cavity images into a motion recognition model to obtain the instrument action corresponding to the plurality of body cavity images, wherein the motion recognition model is used for recognizing instrument actions appearing in the plurality of body cavity images.
Optionally, the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, and after the adjusting the number of instruments according to the detected instrument action, the method further includes:
storing each of the plurality of body cavity images other than the last one in correspondence with the pre-adjustment number of instruments, and storing the last of the plurality of body cavity images in correspondence with the adjusted number of instruments.
Optionally, the plurality of body cavity images are a plurality of body cavity images extracted from the body cavity images acquired by the endoscope within a preset time period, and after the adjusting the number of instruments according to the detected instrument action, the method further includes:
storing each body cavity image acquired by the endoscope within the preset time period, other than the last one, in correspondence with the pre-adjustment number of instruments, and storing the last body cavity image acquired within the preset time period in correspondence with the adjusted number of instruments.
Optionally, before the acquiring the body cavity image by the endoscope, the method further includes:
setting an initial number of instruments per instrument type to 0;
the method for detecting the action of the instrument according to the acquired body cavity image comprises the following steps:
detecting instrument actions and determining the instrument type corresponding to each detected instrument action according to the detected body cavity image;
the adjusting the number of instruments according to the detected instrument actions comprises:
each time an instrument putting-in action is detected, adding 1 to the number of instruments of the instrument type corresponding to that putting-in action;
each time an instrument removal action is detected, subtracting 1 from the number of instruments of the instrument type corresponding to that removal action.
Optionally, after the adjusting the number of instruments, the method further includes:
displaying the number of instruments.
Optionally, the method further comprises:
displaying one body cavity image every time one body cavity image is acquired;
the displaying the number of instruments includes:
displaying the number of instruments superimposed on the displayed body cavity image.
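As a rough illustration of the superimposed count display, the overlay label could be composed as follows; the function name and label format are assumptions, and the application does not specify the actual on-screen rendering mechanism:

```python
def count_overlay_text(per_type_counts):
    """Format the per-type instrument counts as a short label that a
    display layer could superimpose on the live body cavity image.
    The "name: count" format is illustrative only."""
    return " | ".join(f"{name}: {n}" for name, n in sorted(per_type_counts.items()))
```

For instance, `count_overlay_text({"clip": 2, "gauze": 1})` yields `"clip: 2 | gauze: 1"`.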
Optionally, after the acquiring the body cavity image by the endoscope, the method further comprises:
each time a body cavity image is acquired, determining target frame position information in the body cavity image, wherein the target frame indicates an area in which an instrument is present;
storing the body cavity image in correspondence with the target frame position information in the body cavity image.
Optionally, the determining the target frame position information in the body cavity image includes:
inputting the body cavity image into an instrument recognition model to obtain the target frame position information in the body cavity image, wherein the instrument recognition model is used for recognizing instruments present in the body cavity image.
Optionally, the method further includes:
if a video playing instruction is detected, playing the stored body cavity images, and, when each body cavity image is played, displaying the target frame in the body cavity image according to the target frame position information stored in correspondence with that body cavity image.
In one aspect, there is provided an instrument detection device, the device comprising:
the acquisition module is used for acquiring a body cavity image through an endoscope;
the detection module is used for detecting an instrument action according to the acquired body cavity image, wherein an instrument action refers to the action of taking out or putting in an instrument;
the adjusting module is used for adjusting the number of instruments according to the detected instrument action, wherein the number of instruments is the number of instruments located in the body.
Optionally, the detection module includes:
the detection unit is used for detecting, each time a plurality of body cavity images is acquired, the instrument action corresponding to the plurality of body cavity images, wherein the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, or are extracted from body cavity images acquired by the endoscope within a preset time period;
optionally, the detection unit is configured to:
inputting the plurality of body cavity images into a motion recognition model to obtain the instrument action corresponding to the plurality of body cavity images, wherein the motion recognition model is used for recognizing instrument actions appearing in the plurality of body cavity images;
optionally, the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, the apparatus further comprising:
the first storage module is used for storing each of the plurality of body cavity images other than the last one in correspondence with the pre-adjustment number of instruments, and storing the last of the plurality of body cavity images in correspondence with the adjusted number of instruments;
optionally, the plurality of body cavity images are a plurality of body cavity images extracted from body cavity images acquired by the endoscope within a preset time period, and the apparatus further includes:
the second storage module, which is used for storing each body cavity image acquired by the endoscope within the preset time period, other than the last one, in correspondence with the pre-adjustment number of instruments, and storing the last body cavity image acquired within the preset time period in correspondence with the adjusted number of instruments.
Optionally, the apparatus further comprises:
a setting module for setting an initial instrument number of each instrument type to 0;
the detection module is used for detecting instrument actions according to the acquired body cavity images and determining the instrument type corresponding to each detected instrument action;
the adjusting module is used for adding 1 to the number of instruments of the corresponding instrument type each time an instrument putting-in action is detected, and subtracting 1 from the number of instruments of the corresponding instrument type each time an instrument removal action is detected.
Optionally, the apparatus further comprises:
the first display module is used for displaying the number of instruments;
the second display module is used for displaying each body cavity image as it is acquired;
optionally, the first display module includes:
the display unit, which is used for displaying the number of instruments superimposed on the displayed body cavity image.
Optionally, the apparatus further comprises:
the device comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining the position information of a target frame in one body cavity image every time one body cavity image is acquired, and the target frame is used for indicating an area with instruments;
the third storage module is used for correspondingly storing the body cavity image and the target frame position information in the body cavity image;
optionally, the determining module includes:
the acquisition unit, which is used for inputting the body cavity image into an instrument recognition model to obtain the target frame position information in the body cavity image, wherein the instrument recognition model is used for recognizing instruments present in the body cavity image.
Optionally, the apparatus further comprises:
the playing module, which is used for playing the stored body cavity images if a video playing instruction is detected, and displaying, when each body cavity image is played, the target frame in the body cavity image according to the target frame position information stored in correspondence with that body cavity image.
In one aspect, a computer device is provided. The computer device includes a processor, a communication interface, a memory, and a communication bus, and the processor, the communication interface, and the memory communicate with one another through the communication bus. The memory is used to store a computer program, and the processor is used to execute the program stored in the memory to implement the steps of the above instrument detection method.
In one aspect, a computer-readable storage medium is provided, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned instrument detection method.
In one aspect, a computer program product comprising instructions is provided, which when run on a computer, causes the computer to perform the steps of the above-described instrument detection method.
The technical scheme provided by the application can at least bring the following beneficial effects:
When the endoscope acquires body cavity images, instrument actions are detected from the acquired images, so that instrument putting-in actions and taking-out actions can be detected in real time during the operation. The number of instruments is then adjusted according to the detected instrument actions. In this way, the number of instruments is counted in real time during the operation, making it convenient for the doctor to check the instruments during the operation or recheck them after the operation, thereby effectively preventing medical accidents caused by unneeded surgical instruments being left in the body.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an instrument detection method provided in an embodiment of the present application;
FIG. 2 is a schematic view of an endoscopic system provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an instrument detection device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of another instrument detection device provided in the embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference to "a plurality" in this application means two or more. In the description of the present application, "/" indicates "or"; for example, A/B may indicate A or B. "And/or" merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, to describe the technical solutions of the present application clearly, the terms "first", "second", and the like are used to distinguish between items that are identical or similar in function and effect; these terms do not limit quantity, order, or importance.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application will be described.
Currently, surgical instruments are often used for auxiliary treatment during surgery, such as clips for hemostasis and duct closure, or gauze for hemostasis and cleaning. Doctors often find it difficult to track accurately how many surgical instruments are inside the patient, so medical accidents in which surgical instruments are left in the body occur; the patient then needs a second operation, which causes additional suffering.
Therefore, an embodiment of the present application provides an instrument detection method that can automatically detect, during surgery, the action of putting a surgical instrument into the body and the action of taking a surgical instrument out of the body, and intelligently count the number of surgical instruments in the body accordingly. Because the number of instruments is counted in real time during the operation, doctors can check the instruments during the operation or recheck them after the operation is finished, so that medical accidents caused by unneeded surgical instruments being left in the body can be effectively avoided.
The instrument detection method provided by the embodiments of the present application will be described in detail below.
Fig. 1 is a flowchart of an instrument detection method provided in an embodiment of the present application. Referring to fig. 1, the method includes:
step 101: body cavity images are acquired by an endoscope.
It should be noted that an endoscope is a tube with a built-in camera; it can enter the human body through a natural orifice or through a small surgical incision and then acquire body cavity images, helping the doctor examine and treat from an intuitive internal view.
In addition, during surgery, the endoscope may continuously acquire body cavity images, which provide the surgical field of view. The acquired body cavity images may further undergo image preprocessing to facilitate subsequent operations such as image detection and image display. Image preprocessing may include filtering, image enhancement, image binarization, morphological operations, edge detection, and the like, which is not limited in the embodiments of the present application.
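Of the preprocessing steps listed, binarization is simple enough to sketch in a few lines. This is a toy example on a grayscale frame represented as a 2D list; a real pipeline would use an image-processing library and would also apply the filtering, enhancement, morphology, and edge-detection steps mentioned above:

```python
def binarize(gray_frame, threshold=128):
    """Toy image binarization: map each grayscale pixel to 0 (below the
    threshold) or 255 (at or above it). The threshold value 128 is an
    arbitrary example, not one specified by the application."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray_frame]
```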
It should be noted that in the embodiment of the present application, each time a body cavity image is acquired, the body cavity image may be displayed. Therefore, the body cavity image is acquired and displayed in real time, so that a doctor can see the internal condition of the patient in time.
Step 102: and detecting the motion of the instrument according to the acquired body cavity image, wherein the motion of the instrument refers to the motion of taking out or putting in the instrument.
It should be noted that surgical instruments are often used for auxiliary treatment during the operation, such as a clip for hemostasis and tube sealing operation, or gauze for hemostasis and cleaning. The act of placing or removing the surgical instrument from the body often occurs during the surgical procedure. The instrument movement in the embodiment of the present application may be an instrument removing movement or an instrument inserting movement, the instrument removing movement refers to a movement of removing an instrument, and the instrument inserting movement refers to a movement of inserting an instrument.
In addition, the instrument movement detected from the acquired body cavity image is the instrument movement that can be reflected in the body cavity image, that is, the instrument movement that is being performed when the body cavity image is acquired. That is, embodiments of the present application may detect a current ongoing instrument motion in real time during a surgical procedure.
Specifically, the operation of step 102 may be: each time a plurality of body cavity images is acquired, detecting the instrument action corresponding to the plurality of body cavity images.
The plurality of body cavity images may be a plurality of consecutive body cavity images acquired by the endoscope. The plurality of body cavity images used for each instrument action detection may be partially overlapping consecutive images or completely non-overlapping consecutive images. For example, the plurality of body cavity images may be 10 consecutive body cavity images acquired by the endoscope; that is, each time 10 consecutive body cavity images are acquired, the instrument action corresponding to those 10 images is detected. In this case, the 10 body cavity images used for each detection may be partially overlapping consecutive images, such as body cavity images 1-10, 2-11, and 3-12, or completely non-overlapping consecutive images, such as body cavity images 1-10, 11-20, and 21-30.
Alternatively, the plurality of body cavity images may be extracted from the body cavity images acquired by the endoscope within a preset time period, with the order of the extracted images consistent with their acquisition order. The preset time period can be set as required, for example 1 second or 2 seconds. For example, the plurality of body cavity images may be 10 body cavity images extracted from those acquired by the endoscope within 1 second; that is, 10 body cavity images are extracted, in acquisition order, from the images acquired each second, and the instrument action corresponding to those 10 images is detected.
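The two grouping strategies above can be sketched as follows, with illustrative function names: windows of consecutive frames (overlapping or not, depending on the stride) and order-preserving extraction of a fixed number of frames from one preset interval.

```python
def consecutive_windows(frames, size=10, stride=1):
    """Yield windows of `size` consecutive frames. stride=1 gives partially
    overlapping windows (e.g. frames 1-10, 2-11, ...); stride=size gives
    completely non-overlapping windows (e.g. frames 1-10, 11-20, ...)."""
    for start in range(0, len(frames) - size + 1, stride):
        yield frames[start:start + size]


def sample_interval(frames, keep=10):
    """Extract `keep` frames, in acquisition order, from the frames
    captured within one preset time period (e.g. one second)."""
    if len(frames) <= keep:
        return list(frames)
    step = len(frames) / keep
    return [frames[int(i * step)] for i in range(keep)]
```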
The operation of detecting the instrument action corresponding to the plurality of body cavity images may be: inputting the plurality of body cavity images into the motion recognition model to obtain the instrument action corresponding to the plurality of body cavity images.
The motion recognition model is used to recognize the instrument action appearing in a plurality of body cavity images. The motion recognition model can be a neural network model such as a convolutional neural network, a recurrent neural network, or a deep neural network.
In addition, the motion recognition model may be trained using a large number of image samples that include different instrument motions. For example, a plurality of training samples may be determined in advance, and for any one of the plurality of training samples, the sample data of the training sample is a plurality of body cavity images, and the label of the training sample is the instrument action occurring therein. The plurality of training samples may then be used for model training, specifically, the motion recognition model may be obtained by performing model training using sample data in the plurality of training samples as input and labels of the plurality of training samples as expected output.
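The training setup described above might be organized as below. The label names, sample structure, and `model_fit` stub are assumptions for illustration only, since the embodiment does not fix a concrete framework; the actual model could be any of the networks mentioned (CNN, RNN, deep network):

```python
# Hypothetical action labels; the application names only removal and insertion.
PUT_IN, TAKE_OUT, NO_ACTION = "put_in", "take_out", "no_action"

def make_sample(frame_window, action_label):
    """A training sample: the sample data is a group of body cavity images,
    and the label is the instrument action occurring in them."""
    assert action_label in (PUT_IN, TAKE_OUT, NO_ACTION)
    return {"frames": list(frame_window), "label": action_label}

def train(model_fit, samples):
    """Use sample data as input and the labels as expected output, as the
    description states; `model_fit` stands in for any neural-network trainer."""
    inputs = [s["frames"] for s in samples]
    labels = [s["label"] for s in samples]
    return model_fit(inputs, labels)
```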
In one possible case, in step 102, only the instrument action is detected from the acquired body cavity images, and the instrument type corresponding to the action is not detected. That is, only whether an instrument is taken out or put in is detected, without determining which specific type of instrument is taken out or put in. Instrument types include, but are not limited to, clips, gauze, and the like.
In another possible case, in step 102, while detecting the motion of the instrument according to the acquired body cavity image, the type of the instrument corresponding to each detected motion of the instrument may be determined. In this case, the instrument operation and the instrument type corresponding to the plurality of body cavity images can be detected every time the plurality of body cavity images are acquired. Specifically, the multiple body cavity images may be input to the motion recognition model, and the instrument motions and the instrument types corresponding to the multiple body cavity images may be obtained.
In this case, the motion recognition model can recognize the type of instrument present in a plurality of body cavity images and the motion (i.e., removal or insertion) performed by the instrument. The motion recognition model may be trained using a large number of image samples including different instrument motions. For example, a plurality of training samples may be determined in advance, and for any one of the plurality of training samples, the sample data of the training sample is a plurality of body cavity images, and the label of the training sample is the type of instrument present therein and the action performed by the instrument. The plurality of training samples may then be used for model training, specifically, the motion recognition model may be obtained by performing model training using sample data in the plurality of training samples as input and labels of the plurality of training samples as expected output.
It should be noted that in the embodiment of the present application, step 102 may be performed continuously starting from the first body cavity image acquired through the endoscope in step 101. In this way, instrument action detection begins as soon as the endoscope starts acquiring body cavity images.
Alternatively, step 102 may be performed continuously starting from the body cavity image in which an instrument is first detected while the endoscope acquires body cavity images in step 101. In this way, instrument detection is not performed while no surgical instrument is in use early in the operation, and starts only once a surgical instrument is used, which saves processing resources.
Step 103: the number of instruments, which is the number of instruments located in the body, is adjusted based on the detected motion of the instruments.
During surgery, the number of surgical instruments present in the body varies as they are placed into and removed from the body. Therefore, the number of the instruments can be adjusted in real time according to the real-time detected instrument actions in the embodiment of the application. That is, the embodiment of the present application can intelligently count the number of surgical instruments located in the body during the surgical procedure. Because the number of the instruments is counted in real time in the operation process, doctors can check the instruments in the operation process or review the instruments after the operation is finished, and therefore medical accidents caused by leaving unnecessary surgical instruments in the body can be effectively avoided.
In one possible case, if step 102 does not distinguish instrument types, the initial number of instruments may be set to 0 before the operation, i.e., before step 101, since there are no surgical instruments in the body. Then, in step 103, the number of instruments is increased by 1 each time an instrument putting action is detected and decreased by 1 each time an instrument removal action is detected. In this case, all instrument types are counted together, and the adjusted number of instruments is the total number of instruments of all types located in the body.
In another possible case, if step 102 also detects the instrument type, the initial number of instruments of each instrument type may be set to 0 before the operation, i.e., before step 101, since there are no surgical instruments in the body. Then, in step 103, each time an instrument putting action is detected, the number of instruments of the corresponding instrument type is increased by 1; each time an instrument removal action is detected, the number of instruments of the corresponding instrument type is decreased by 1. In this case, each instrument type is counted independently, and the adjusted number of instruments is the respective count of each type of instrument located in the body.
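The two counting schemes above can be sketched in a few lines. This is a minimal illustration assuming each detected instrument action is reported as an (action, instrument type) pair; the class and method names are invented for the sketch, not taken from the embodiment.

```python
# Per-type instrument counter: +1 on each putting action, -1 on each removal action.
# Summing over all types gives the type-agnostic total of the first scheme.
from collections import defaultdict

class InstrumentCounter:
    def __init__(self):
        # per-type counts start at 0 before the operation (before step 101)
        self.counts = defaultdict(int)

    def on_action(self, action: str, instrument_type: str = "any") -> None:
        if action == "insert":      # instrument put into the body
            self.counts[instrument_type] += 1
        elif action == "remove":    # instrument taken out of the body
            self.counts[instrument_type] -= 1

    def total(self) -> int:
        # total number of instruments of all types still in the body
        return sum(self.counts.values())

counter = InstrumentCounter()
counter.on_action("insert", "forceps")
counter.on_action("insert", "scissors")
counter.on_action("remove", "forceps")
# a nonzero total means at least one instrument remains in the body
retained = counter.total() != 0
```

A nonzero total corresponds to the retained-instrument situation that the display logic later uses to warn the doctor.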
It should be noted that, during the operation, the actions of putting a surgical instrument into the body and taking a surgical instrument out of the body can be recognized in real time in the surgical field, and the number of surgical instruments is counted accordingly, so that the instrument count changes in real time with each putting and removal action. In this way, the number of surgical instruments in the body can be known simply and accurately.
Further, after the number of instruments is adjusted in step 103, the number of instruments may also be displayed. The instrument count can thus be displayed in real time during the operation, promptly reminding the doctor of the number of surgical instruments currently in the patient's body and allowing the doctor to check the instruments during the operation.
Specifically, the number of instruments may be displayed superimposed on the body cavity image being displayed.
Because the instrument count can change in real time as the operation progresses, superimposing it on the body cavity image displayed in real time allows the doctor to know the number of surgical instruments currently in the patient's body while watching the body cavity image.
It is to be noted that, when the number of instruments is not 0, the surgical instrument state may be determined to be a retained (left-behind) state. In this case, indication information indicating the retained state may be displayed together with the body cavity image, so that the doctor can quickly learn from it that a surgical instrument currently remains in the body.
Furthermore, the embodiment of the application can be used for video recording, namely, each acquired body cavity image can be stored in the process of acquiring the body cavity image through the endoscope. In this case, after the number of instruments is adjusted in step 103, the number of instruments may also be written in the video information, that is, the body cavity image and the number of instruments may be stored in correspondence so that the body cavity image and the number of instruments may maintain a frame synchronization relationship.
Specifically, if step 102 detects the instrument action corresponding to a plurality of consecutive body cavity images acquired by the endoscope, all of these body cavity images except the last are stored in correspondence with the pre-adjustment instrument count, and the last body cavity image is stored in correspondence with the adjusted instrument count.
If step 102 detects the instrument action corresponding to a plurality of body cavity images extracted from those acquired by the endoscope within a preset time period, all of the body cavity images acquired by the endoscope within that period except the last are stored in correspondence with the pre-adjustment instrument count, and the last body cavity image acquired within that period is stored in correspondence with the adjusted instrument count.
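The frame-synchronization rule just described can be sketched as follows; the function and record layout are illustrative assumptions, standing in for whatever container format the video information actually uses.

```python
# For one detection group: every frame except the last is stored with the
# pre-adjustment count; the last frame is stored with the adjusted count.
def store_group(frames, count_before, count_after):
    """Return (frame_id, instrument_count) records for one detection group."""
    records = [(frame, count_before) for frame in frames[:-1]]
    records.append((frames[-1], count_after))
    return records

# a putting action detected over three consecutive frames raises the count 1 -> 2
records = store_group(["f10", "f11", "f12"], count_before=1, count_after=2)
```

During playback, each stored record then yields the count to overlay on that frame, keeping image and count in frame synchronization.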
Thus, after the operation, when the stored body cavity images are played back, the instrument count stored in correspondence with each body cavity image can be displayed on that image as it is played. This makes it easy for the doctor to confirm, during post-operative video playback, whether any surgical instrument was left behind, enabling a post-operative instrument review.
In the embodiment of the present application, while body cavity images are acquired through the endoscope, instrument actions are detected from the acquired images, so that instrument putting and removal actions can be detected in real time during the operation, and the number of instruments is then adjusted based on the detected actions. Because the instrument count is maintained in real time throughout the operation, the doctor can check instruments during the operation or review them afterwards, effectively preventing medical accidents caused by surgical instruments being left in the body.
Alternatively, referring to fig. 1, after the body cavity image is captured by the endoscope in step 101, not only the number of instruments can be adjusted through steps 102 to 103, but also the following steps 104 to 105 can be performed:
step 104: and determining the position information of the target frame in one body cavity image every time one body cavity image is acquired.
It should be noted that the target box is used to indicate the area where the instrument is present, and the area may be a generally rectangular area. The target frame position information may include the size of the target frame (e.g., the length and width of the target frame) and the coordinates of the center point of the target frame.
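Since the target frame position information above consists of the frame's size plus its center-point coordinates, a drawing routine that expects corner coordinates needs a small conversion. The sketch below is illustrative; the function name and coordinate convention are assumptions.

```python
# Convert (center, size) target-frame position information into the
# (x1, y1, x2, y2) corner form commonly used for drawing rectangles.
def box_corners(cx: float, cy: float, width: float, height: float):
    """Return (x1, y1, x2, y2) corners of a target frame given center and size."""
    x1 = cx - width / 2.0
    y1 = cy - height / 2.0
    return (x1, y1, x1 + width, y1 + height)

corners = box_corners(cx=100.0, cy=80.0, width=40.0, height=20.0)
```

The inverse conversion (corners back to center and size) is equally simple, so either representation can be stored in the video information.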
In addition, in the embodiment of the present application, when body cavity images are acquired in real time, whether a surgical instrument is present in each body cavity image can be detected, and the specific position of the surgical instrument (i.e., the target frame position information) can be provided.
The operation of determining the target frame position information in a body cavity image may be: inputting the body cavity image into the instrument recognition model to obtain the target frame position information in that body cavity image.
It should be noted that the instrument recognition model is used to recognize an instrument present in the body cavity image. The instrument recognition model can be a neural network model such as a convolutional neural network, a cyclic neural network, a deep neural network and the like. The instrument recognition model may be trained using a large number of image samples including different instruments. For example, a plurality of training samples may be determined in advance, and for any one of the plurality of training samples, the sample data of the training sample is an image of a body cavity, and the label of the training sample is a specific position of an instrument existing therein. Then, the plurality of training samples may be used to perform model training, specifically, the sample data in the plurality of training samples may be used as input, the labels of the plurality of training samples may be used as expected output, and model training is performed to obtain the instrument recognition model.
In addition, the instrument recognition model and the motion recognition model may be two independent models. Alternatively, the instrument recognition model and the motion recognition model may share a portion of a network.
For example, the instrument recognition model may include an image feature extraction network for extracting image features and transmitting them to an instrument detection network for detecting and outputting instrument positions based on the image features. The action recognition model can comprise an image characteristic extraction network and an action detection network, wherein the image characteristic extraction network is used for extracting image characteristics and transmitting the image characteristics to the action detection network, and the action detection network is used for detecting and outputting instrument actions according to the image characteristics. In this case, the instrument recognition model and the motion recognition model may share one image feature extraction network.
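The shared-network arrangement above can be shown structurally as follows. This is only a structural sketch: the "backbone" here is a toy placeholder computation rather than a real convolutional network, and all names are invented for illustration.

```python
# One image feature extraction "network" feeds two heads: an instrument
# detection head (per-image) and an action detection head (per-sequence).
def extract_features(image):
    # placeholder backbone: a real system would run a CNN here
    return [sum(image), len(image)]

def instrument_head(features):
    # placeholder: a real head would output target frame position information
    return {"present": features[0] > 0}

def action_head(feature_sequence):
    # placeholder: a real head would classify putting vs. removal over time;
    # here, a rising feature value stands in for an instrument entering the view
    return "insert" if feature_sequence[-1][0] > feature_sequence[0][0] else "remove"

frames = [[0, 1], [2, 3], [4, 5]]                 # toy stand-ins for body cavity images
features = [extract_features(f) for f in frames]  # computed once, shared by both heads
```

The point of the shared backbone is exactly this: `extract_features` runs once per image, and both heads consume the same feature output, saving computation versus two independent models.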
It should be noted that, in the embodiment of the present application, the position information of the target frame in each acquired body cavity image may be acquired in real time. In this case, while displaying any body cavity image in real time, the target frame can be displayed in the body cavity image according to the target frame position information of the body cavity image, so that the doctor can know the position of the surgical instrument appearing currently in time while watching the body cavity image.
Step 105: and correspondingly storing the body cavity image and the target frame position information in the body cavity image.
In the embodiment of the application, video recording can be performed, that is, each acquired body cavity image can be stored. In addition, the position information of the target frame may also be written in the video information, that is, each body cavity image and the position information of the target frame may be stored in correspondence so that the body cavity image and the position information of the target frame maintain a frame synchronization relationship.
Further, if the video playing instruction is detected, the stored body cavity image is played, and when the body cavity image is played, the target frame is displayed in the body cavity image according to the target frame position information stored corresponding to the body cavity image.
In this way, when the video is played back after the operation, a target frame indicating the position of a surgical instrument appears whenever a surgical instrument appears in the video field. That is, no target frame appears in the video while no surgical instrument is in use in the early stage of the operation, and a target frame indicating the instrument's position appears once a surgical instrument is used, making it convenient for the doctor to review the use of surgical instruments during post-operative playback.
It should be noted that the video playing command is used to play the body cavity image captured and stored during the operation. The video playing instruction can be triggered by a user, and the user can trigger the video playing instruction through operations such as click operation, gesture operation, voice operation and somatosensory operation.
In some embodiments, when the recorded video is played back, i.e., when the stored body cavity images are played, the target frame may be displayed selectively, that is, shown or hidden as required. Specifically, a target frame display function may be provided: when the function is turned on, the target frame is displayed in each body cavity image, according to the target frame position information stored in correspondence with that image, while the stored images are played; when the function is turned off, only the stored body cavity images are played and no target frame is displayed.
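The selective-display behavior can be sketched with a simple toggle; the class, method, and field names below are illustrative assumptions, and the "render" result is a plain record standing in for actual drawing.

```python
# Playback renderer with a target frame display toggle: the stored box is
# overlaid only when the display function is switched on.
class PlaybackRenderer:
    def __init__(self):
        self.show_target_frame = False   # toggled by the target frame display button

    def render(self, frame_id: str, box):
        """Describe what would be drawn for one stored body cavity image."""
        if self.show_target_frame and box is not None:
            return {"frame": frame_id, "overlay": box}
        return {"frame": frame_id, "overlay": None}

renderer = PlaybackRenderer()
off = renderer.render("f10", box=(80, 70, 120, 90))   # function off: image only
renderer.show_target_frame = True
on = renderer.render("f10", box=(80, 70, 120, 90))    # function on: image + target frame
```

Wiring `show_target_frame` to an on-screen button then gives the manual on/off control described next.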
It should be noted that the target box display function may be manually turned on or off by the user. For example, a target frame display button may be set on the video recording and playing interface; when the starting operation of the target frame display button is detected, starting a target frame display function; when the closing operation of the target frame display button is detected, the target frame display function is closed. Of course, the target frame display function may also be turned on or off in other manners, which is not limited in the embodiment of the present application.
In the embodiment of the application, when the body cavity image is acquired through the endoscope, the position information of the target frame in one body cavity image is determined every time one body cavity image is acquired, so that whether the surgical instrument is used or not can be detected in real time in the surgical process. And then, the body cavity image and the position information of the target frame in the body cavity image are correspondingly stored, so that a doctor can conveniently review the use condition of the surgical instrument when recording and replaying the image after the operation.
The instrument detection method shown in fig. 1 can be applied to an endoscope system, which will be described in detail with reference to fig. 2.
Fig. 2 is a schematic view of an endoscope system provided by an embodiment of the present application. Referring to fig. 2, the endoscope system may include: endoscope 21, intelligent detection device 22, and display device 23. The intelligent detection device 22 includes an image processing module 201, an instrument action recognition module 202, an instrument quantity counting module 203, a display control module 204, an image output module 205, an instrument position extraction module 206, and a video storage module 207.
The endoscope 21 collects body cavity images and transmits them to the image processing module 201 in the intelligent detection device 22.
The image processing module 201 receives and preprocesses the body cavity images acquired by the endoscope 21, obtaining body cavity images usable for display output and image detection.
The instrument action recognition module 202 acquires the preprocessed body cavity image output by the image processing module 201, and detects whether an instrument putting action or an instrument taking action exists in the body cavity image through the action recognition model.
The instrument number statistics module 203 may add 1 to the number of instruments each time the instrument placement action is detected by the instrument action recognition module 202; the number of instruments is decremented by 1 each time an instrument removal action is detected by the instrument action recognition module 202. In this way, in the case where no surgical instrument is used in the earlier stage of the operation, the counted number of instruments is 0, and in the course of the operation, if a surgical instrument appears in the surgical field, the number of instruments is adjusted in real time.
The display control module 204 fuses the preprocessed body cavity image output by the image processing module 201 and the number of instruments counted by the instrument number counting module 203.
The image output module 205 outputs the image fused by the display control module 204 to the display device 23, and the display device 23 displays the fused image in real time, that is, displays the preprocessed body cavity image and the counted number of instruments in a superimposed manner.
The instrument position extraction module 206 acquires the preprocessed body cavity image output by the image processing module 201, and detects the position information of the target frame in the body cavity image through the instrument recognition model.
The video storage module 207 correspondingly stores the preprocessed body cavity images output by the image processing module 201, the number of instruments counted by the instrument number counting module 203, and the position information of the target frame detected by the instrument position extraction module 206. Specifically, the preprocessed body cavity image may be stored as a video, and the counted number of instruments and the detected position information of the target frame may be written in the video.
Fig. 3 is a schematic structural diagram of an instrument detection device according to an embodiment of the present application. Referring to fig. 3, the apparatus includes: an acquisition module 301, a detection module 302, and an adjustment module 303.
An acquisition module 301 for acquiring a body cavity image by an endoscope;
the detection module 302 is used for detecting the motion of the instrument according to the acquired body cavity image, wherein the motion of the instrument refers to the motion of taking out or putting in the instrument;
and an adjusting module 303, configured to adjust the number of instruments according to the detected motion of the instrument, where the number of instruments refers to the number of instruments located in the body.
Optionally, the detection module 302 includes:
the detection unit is used for detecting instrument actions corresponding to a plurality of body cavity images every time the body cavity images are acquired, wherein the body cavity images are continuous body cavity images acquired by the endoscope, or the body cavity images are extracted from the body cavity images acquired by the endoscope within a preset time period;
optionally, the detection unit is configured to:
inputting a plurality of body cavity images into a motion recognition model to obtain instrument motions corresponding to the body cavity images, wherein the motion recognition model is used for recognizing instrument motions appearing in the body cavity images;
optionally, the plurality of body cavity images are a plurality of consecutive body cavity images captured by an endoscope, the apparatus further comprising:
the first storage module is used for correspondingly storing other body cavity images except the last body cavity image in the body cavity images and the number of instruments before adjustment, and correspondingly storing the last body cavity image in the body cavity images and the number of the instruments after adjustment;
optionally, the plurality of body cavity images are a plurality of body cavity images extracted from body cavity images acquired by the endoscope within a preset time period, and the apparatus further includes:
and the second storage module is used for correspondingly storing other body cavity images except the last body cavity image in the body cavity images acquired by the endoscope within the preset time length and the number of instruments before adjustment, and correspondingly storing the last body cavity image in the body cavity images acquired by the endoscope within the preset time length and the number of the instruments after adjustment.
Optionally, the apparatus further comprises:
a setting module for setting an initial instrument number of each instrument type to 0;
a detection module 302, configured to detect an instrument action according to the detected body cavity image and determine an instrument type corresponding to each detected instrument action;
the adjusting module 303 is configured to, every time an instrument putting action is detected, add 1 to the number of instruments of the instrument type corresponding to the instrument putting action; every time one instrument removing action is detected, the number of instruments of the instrument type corresponding to the instrument removing action is reduced by 1.
Optionally, the apparatus further comprises:
the first display module is used for displaying the number of the instruments.
The second display module is used for displaying one body cavity image when one body cavity image is acquired;
optionally, the first display module comprises:
and the display unit is used for displaying the number of the instruments and the body cavity image which is displayed in an overlapping mode.
Optionally, the apparatus further comprises:
the determining module is used for determining the position information of a target frame in one body cavity image every time one body cavity image is acquired, wherein the target frame is used for indicating the area with instruments;
the third storage module is used for correspondingly storing one body cavity image and the position information of the target frame in one body cavity image;
optionally, the determining module includes:
the acquisition unit is used for inputting a body cavity image into the instrument recognition model to obtain the position information of the target frame in the body cavity image, and the instrument recognition model is used for recognizing instruments existing in the body cavity image.
Optionally, the apparatus further comprises:
and the playing module is used for playing the stored body cavity image if the video playing instruction is detected, and displaying the target frame in one body cavity image according to the target frame position information stored corresponding to one body cavity image when one body cavity image is played.
In the embodiment of the application, when the body cavity image is collected through the endoscope, the motion of the instrument is detected according to the collected body cavity image, so that the motion of putting in and taking out the instrument can be detected in real time in the operation process. Thereafter, the number of instruments is adjusted based on the detected instrument motion. Because the number of the instruments is counted in real time in the operation process, doctors can check the instruments in the operation process or review the instruments after the operation is finished, and further medical accidents caused by leaving unnecessary surgical instruments in the body can be effectively avoided.
It should be noted that: in the apparatus detection device provided in the above embodiment, only the division of the above functional modules is used for illustration when detecting an apparatus, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the apparatus detection device and the apparatus detection method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
Fig. 4 is a schematic structural diagram of an instrument detection device provided in an embodiment of the present application. Referring to fig. 4, the apparatus may be a terminal 400, and the terminal 400 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the instrument detection method provided by the embodiments described above.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 401, the memory 402, and the peripheral interface 403 may be implemented on separate chips or circuit boards, which are not limited in this application.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, etc. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, disposed on the front panel of the terminal 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power source 409 may be alternating current, direct current, disposable or rechargeable. When power source 409 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used to collect motion data for games or for the user.
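The landscape/portrait decision described above can be sketched as a comparison of gravity components. This is an illustrative sketch only; the axis convention, the comparison rule, and the function name are assumptions, not part of the disclosure:

```python
def choose_view(gx: float, gy: float) -> str:
    """Choose a UI orientation from the gravity components (m/s^2) that an
    accelerometer such as sensor 411 reports along the screen's x and y axes.
    The simple comparison rule below is an assumption for illustration."""
    # Gravity acting mostly along the screen's vertical axis means the
    # terminal is held upright, so a portrait layout fits.
    if abs(gy) >= abs(gx):
        return "portrait"
    return "landscape"

print(choose_view(0.3, 9.8))   # terminal held upright
print(choose_view(9.8, 0.3))   # terminal turned on its side
```

A real implementation would also apply hysteresis so the view does not flip when the device is held near 45 degrees.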
The gyro sensor 412 may detect the body direction and rotation angle of the terminal 400, and may cooperate with the acceleration sensor 411 to capture the user's 3D actions on the terminal 400. Based on the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or in a lower layer of the touch display screen 405. When the pressure sensor 413 is disposed on the side bezel of the terminal 400, it can detect the user's grip signal on the terminal 400, and the processor 401 performs left- or right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed in the lower layer of the touch display screen 405, the processor 401 controls operability controls on the UI according to the user's pressure operation on the touch display screen 405. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint, and the processor 401 identifies the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 itself identifies the user from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical key or vendor logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is decreased. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
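The brightness adjustment described in this paragraph can be sketched as a clamped linear mapping from ambient light to a brightness level. The linear curve, the function name, and all parameter values are illustrative assumptions; real devices use tuned, usually non-linear curves:

```python
def display_brightness(ambient_lux: float,
                       min_level: int = 10,
                       max_level: int = 255,
                       max_lux: float = 1000.0) -> int:
    """Map the ambient light intensity collected by an optical sensor
    (such as sensor 415) to a display brightness level: brighter
    surroundings raise the level, darker surroundings lower it."""
    # Clamp the sensor reading to [0, max_lux], then interpolate linearly
    # between the minimum and maximum brightness levels.
    lux = max(0.0, min(ambient_lux, max_lux))
    return round(min_level + (max_level - min_level) * lux / max_lux)
```

In practice the processor would also rate-limit the changes so the screen does not flicker as the reading fluctuates.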
The proximity sensor 416, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 is gradually decreasing, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 416 detects that the distance is gradually increasing, the processor 401 controls the touch display screen 405 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 4 does not constitute a limitation on the terminal 400, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In some embodiments, a computer-readable storage medium is also provided, in which a computer program is stored; when the computer program is executed by a processor, the steps of the instrument detection method in the above embodiments are implemented. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
It is noted that the computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions, which may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the instrument detection method described above.
The above description is not intended to limit the present application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present application.

Claims (17)

1. A method of instrument inspection, the method comprising:
acquiring a body cavity image through an endoscope;
detecting an instrument action according to the acquired body cavity images, wherein an instrument action refers to an action of removing or inserting an instrument; and
adjusting the number of instruments according to the detected instrument action, wherein the number of instruments refers to the number of instruments located in the body.
2. The method of claim 1, wherein the detecting an instrument action according to the acquired body cavity images comprises:
each time a plurality of body cavity images are acquired, detecting the instrument action corresponding to the plurality of body cavity images, wherein the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, or are extracted from body cavity images acquired by the endoscope within a preset time period.
3. The method of claim 2, wherein the detecting the instrument action corresponding to the plurality of body cavity images comprises:
inputting the plurality of body cavity images into an action recognition model to obtain the instrument action corresponding to the plurality of body cavity images, wherein the action recognition model is used to recognize instrument actions appearing in the plurality of body cavity images.
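The per-window detection of claims 2 and 3 can be sketched as a buffer that hands a fixed number of consecutive frames to a recognition model. The window size, the stand-in model, and the action labels below are assumptions for illustration, not the disclosed model:

```python
from collections import deque
from typing import Callable, Deque, List, Optional

class ActionDetector:
    """Collects consecutive body cavity images and, once a full window is
    available, asks a recognition model which instrument action (if any)
    the window shows, mirroring the per-window detection of claims 2-3."""

    def __init__(self, model: Callable[[List[object]], str], window: int = 8):
        self.model = model                       # recognition-model stub
        self.frames: Deque[object] = deque(maxlen=window)

    def push(self, frame: object) -> Optional[str]:
        self.frames.append(frame)
        if len(self.frames) < self.frames.maxlen:
            return None                          # window not yet full
        action = self.model(list(self.frames))   # run the model on the window
        self.frames.clear()                      # start collecting the next one
        return action

# A trivial stand-in model: reports "insert" if any frame is tagged "tool".
detector = ActionDetector(
    model=lambda fs: "insert" if "tool" in fs else "none", window=4)
results = [detector.push(f) for f in ["bg", "bg", "tool", "bg"]]
```

A real system would feed the window to a trained video-action network rather than this keyword stub.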
4. The method of claim 2, wherein the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, and wherein the adjusting the number of instruments according to the detected instrument action further comprises:
correspondingly storing each body cavity image other than the last one among the plurality of body cavity images with the number of instruments before adjustment, and correspondingly storing the last body cavity image among the plurality of body cavity images with the number of instruments after adjustment.
5. The method of claim 2, wherein the plurality of body cavity images are extracted from body cavity images acquired by the endoscope within a preset time period, and wherein the adjusting the number of instruments according to the detected instrument action further comprises:
correspondingly storing each body cavity image other than the last one among the body cavity images acquired by the endoscope within the preset time period with the number of instruments before adjustment, and correspondingly storing the last of the body cavity images acquired by the endoscope within the preset time period with the number of instruments after adjustment.
6. The method of claim 1, wherein before the acquiring a body cavity image through an endoscope, the method further comprises:
setting the initial number of instruments of each instrument type to 0;
wherein the detecting an instrument action according to the acquired body cavity images comprises:
detecting instrument actions according to the acquired body cavity images and determining the instrument type corresponding to each detected instrument action;
and wherein the adjusting the number of instruments according to the detected instrument actions comprises:
each time an instrument insertion action is detected, adding 1 to the number of instruments of the instrument type corresponding to the insertion action;
each time an instrument removal action is detected, subtracting 1 from the number of instruments of the instrument type corresponding to the removal action.
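The counting rule of claim 6 (initialize each type at 0, add 1 per insertion, subtract 1 per removal) can be sketched directly. The class name and the action labels are illustrative assumptions:

```python
from collections import defaultdict

class InstrumentCounter:
    """Keeps the number of in-body instruments per instrument type, as in
    claim 6: every type starts at 0, an insertion action adds 1, and a
    removal action subtracts 1."""

    def __init__(self):
        self.counts = defaultdict(int)   # initial number per type is 0

    def on_action(self, action: str, instrument_type: str) -> int:
        if action == "insert":
            self.counts[instrument_type] += 1
        elif action == "remove":
            self.counts[instrument_type] -= 1
        return self.counts[instrument_type]

counter = InstrumentCounter()
counter.on_action("insert", "forceps")
counter.on_action("insert", "forceps")
counter.on_action("remove", "forceps")   # one forceps remains in the body
```

Keeping a per-type tally like this is what allows the system to warn when any type's count is non-zero at the end of a procedure.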
7. The method of any of claims 1-6, wherein after the adjusting the number of instruments, the method further comprises:
displaying the number of instruments.
8. The method of claim 7, wherein the method further comprises:
displaying each body cavity image as it is acquired;
wherein the displaying the number of instruments comprises:
displaying the number of instruments superimposed on the displayed body cavity image.
9. The method of claim 1, wherein after the acquiring a body cavity image through an endoscope, the method further comprises:
each time a body cavity image is acquired, determining target frame position information in the body cavity image, wherein the target frame indicates a region in which an instrument is present; and
correspondingly storing the body cavity image and the target frame position information in the body cavity image.
10. The method of claim 9, wherein the determining target frame position information in the body cavity image comprises:
inputting the body cavity image into an instrument recognition model to obtain the target frame position information in the body cavity image, wherein the instrument recognition model is used to recognize instruments present in the body cavity image.
11. The method of claim 9 or 10, wherein the method further comprises:
if a video playing instruction is detected, playing the stored body cavity images, and when each body cavity image is played, displaying the target frame in the body cavity image according to the target frame position information stored in correspondence with the body cavity image.
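The store-and-replay behavior of claims 9-11 can be sketched as a mapping from image identifiers to target frame positions; the (x, y, width, height) tuple layout and all names are assumptions for illustration:

```python
from typing import Dict, List, Tuple

# A target frame as (x, y, width, height); the tuple layout is assumed.
Box = Tuple[int, int, int, int]

class TargetFrameStore:
    """Stores, for each acquired body cavity image, the positions of the
    target frames indicating where instruments appear (claims 9-10), so
    the frames can be redrawn when the stored video is replayed (claim 11)."""

    def __init__(self):
        self._boxes: Dict[int, List[Box]] = {}

    def save(self, image_id: int, boxes: List[Box]) -> None:
        # Store the image id together with its target frame positions.
        self._boxes[image_id] = boxes

    def boxes_for(self, image_id: int) -> List[Box]:
        # On playback, look up the frames saved for this image.
        return self._boxes.get(image_id, [])

store = TargetFrameStore()
store.save(0, [(120, 80, 64, 64)])   # one instrument region in image 0
store.save(1, [])                    # no instrument detected in image 1
```

During replay, the player would call `boxes_for` with each image's identifier and draw the returned rectangles over the image before display.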
12. An instrument detection apparatus, comprising:
an acquisition module, configured to acquire a body cavity image through an endoscope;
a detection module, configured to detect an instrument action according to the acquired body cavity images, wherein an instrument action refers to an action of removing or inserting an instrument; and
an adjusting module, configured to adjust the number of instruments according to the detected instrument action, wherein the number of instruments refers to the number of instruments located in the body.
13. The apparatus of claim 12, wherein the detection module comprises:
a detection unit, configured to detect, each time a plurality of body cavity images are acquired, the instrument action corresponding to the plurality of body cavity images, wherein the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, or are extracted from body cavity images acquired by the endoscope within a preset time period;
wherein the detection unit is configured to:
input the plurality of body cavity images into an action recognition model to obtain the instrument action corresponding to the plurality of body cavity images, wherein the action recognition model is used to recognize instrument actions appearing in the plurality of body cavity images;
wherein, when the plurality of body cavity images are a plurality of consecutive body cavity images acquired by the endoscope, the apparatus further comprises:
a first storage module, configured to correspondingly store each body cavity image other than the last one among the plurality of body cavity images with the number of instruments before adjustment, and correspondingly store the last body cavity image among the plurality of body cavity images with the number of instruments after adjustment;
wherein, when the plurality of body cavity images are extracted from body cavity images acquired by the endoscope within a preset time period, the apparatus further comprises:
a second storage module, configured to correspondingly store each body cavity image other than the last one among the body cavity images acquired by the endoscope within the preset time period with the number of instruments before adjustment, and correspondingly store the last of the body cavity images acquired by the endoscope within the preset time period with the number of instruments after adjustment.
14. The apparatus of claim 12, wherein the apparatus further comprises:
a setting module, configured to set the initial number of instruments of each instrument type to 0;
wherein the detection module is configured to detect instrument actions according to the acquired body cavity images and determine the instrument type corresponding to each detected instrument action;
and the adjusting module is configured to add 1 to the number of instruments of the corresponding instrument type each time an instrument insertion action is detected, and to subtract 1 from the number of instruments of the corresponding instrument type each time an instrument removal action is detected.
15. The apparatus of any of claims 12-14, wherein the apparatus further comprises:
a first display module, configured to display the number of instruments; and
a second display module, configured to display each body cavity image as it is acquired;
wherein the first display module comprises:
a display unit, configured to display the number of instruments superimposed on the displayed body cavity image.
16. The apparatus of claim 12, wherein the apparatus further comprises:
a determining module, configured to determine, each time a body cavity image is acquired, target frame position information in the body cavity image, wherein the target frame indicates a region in which an instrument is present; and
a third storage module, configured to correspondingly store the body cavity image and the target frame position information in the body cavity image;
wherein the determining module comprises:
an obtaining unit, configured to input the body cavity image into an instrument recognition model to obtain the target frame position information in the body cavity image, wherein the instrument recognition model is used to recognize instruments present in the body cavity image.
17. A computer device comprising a processor and a memory, the memory storing a computer program, the processor being configured to execute the program stored in the memory to perform the steps of the method of any one of claims 1 to 11.
CN202010384325.XA 2020-05-07 2020-05-07 Instrument detection method and device and computer equipment Pending CN113627219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010384325.XA CN113627219A (en) 2020-05-07 2020-05-07 Instrument detection method and device and computer equipment


Publications (1)

Publication Number Publication Date
CN113627219A true CN113627219A (en) 2021-11-09

Family

ID=78377379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010384325.XA Pending CN113627219A (en) 2020-05-07 2020-05-07 Instrument detection method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN113627219A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610712A (en) * 2007-03-29 2009-12-23 奥林巴斯医疗株式会社 Device for controlling position of treatment instrument for endoscope
CN103491848A (en) * 2011-12-26 2014-01-01 奥林巴斯医疗株式会社 Medical endoscope system
CN110211152A (en) * 2019-05-14 2019-09-06 华中科技大学 A kind of endoscopic instrument tracking based on machine vision
US20190336709A1 (en) * 2018-05-04 2019-11-07 Afsmedical Gmbh Medizinproduktehandel Integrated monitoring and management system for endoscopic surgery


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494406A (en) * 2022-04-13 2022-05-13 武汉楚精灵医疗科技有限公司 Medical image processing method, device, terminal and computer readable storage medium
CN114494406B (en) * 2022-04-13 2022-07-19 武汉楚精灵医疗科技有限公司 Medical image processing method, device, terminal and computer readable storage medium
CN117456000A (en) * 2023-12-20 2024-01-26 杭州海康慧影科技有限公司 Focusing method and device of endoscope, storage medium and electronic equipment
CN117456000B (en) * 2023-12-20 2024-03-29 杭州海康慧影科技有限公司 Focusing method and device of endoscope, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN112911182B (en) Game interaction method, device, terminal and storage medium
CN109994127B (en) Audio detection method and device, electronic equipment and storage medium
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN109300485B (en) Scoring method and device for audio signal, electronic equipment and computer storage medium
CN108965922B (en) Video cover generation method and device and storage medium
CN111382624A (en) Action recognition method, device, equipment and readable storage medium
CN109144346B (en) Song sharing method and device and storage medium
CN109635133B (en) Visual audio playing method and device, electronic equipment and storage medium
CN110659542B (en) Monitoring method and device
CN112929654B (en) Method, device and equipment for detecting sound and picture synchronization and storage medium
CN111752817A (en) Method, device and equipment for determining page loading duration and storage medium
CN112818959B (en) Surgical procedure identification method, device, system and computer readable storage medium
CN110956580A (en) Image face changing method and device, computer equipment and storage medium
CN110688082A (en) Method, device, equipment and storage medium for determining adjustment proportion information of volume
CN113627219A (en) Instrument detection method and device and computer equipment
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN112906682A (en) Method and device for controlling brightness of light source and computer storage medium
CN109005359B (en) Video recording method, apparatus and storage medium
CN111370096A (en) Interactive interface display method, device, equipment and storage medium
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN113824902A (en) Method, device, system, equipment and medium for determining time delay of infrared camera system
CN111711841B (en) Image frame playing method, device, terminal and storage medium
CN110263695B (en) Face position acquisition method and device, electronic equipment and storage medium
CN114913113A (en) Method, device and equipment for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination