CN112704566B - Surgical consumable checking method and surgical robot system - Google Patents


Info

Publication number
CN112704566B
CN202011591856.2A · CN112704566B
Authority
CN
China
Prior art keywords
identification information
surgical
information
image
consumables
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011591856.2A
Other languages
Chinese (zh)
Other versions
CN112704566A (en)
Inventor
陈功 (Chen Gong)
宋进 (Song Jin)
何超 (He Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd filed Critical Shanghai Microport Medbot Group Co Ltd
Priority to CN202011591856.2A
Publication of CN112704566A
Application granted
Publication of CN112704566B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/35 Surgical robots for telesurgery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/70 Manipulators specially adapted for use in surgery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/90 Identification means for patients or instruments, e.g. tags
    • A61B 90/98 Identification means for patients or instruments, e.g. tags using electromagnetic means, e.g. transponders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Robotics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Electromagnetism (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Endoscopes (AREA)

Abstract

The invention provides a surgical consumable checking method and a surgical robot system. The checking method comprises the following steps: acquiring first identification information of surgical consumables before the operation; acquiring second identification information of surgical consumables after the operation through image recognition; acquiring, through an endoscope, third identification information of surgical consumables retained in a predetermined region during the operation; and determining whether the difference between the first identification information and the second identification information is the same as the third identification information, and outputting the determination result. The surgical consumable checking method and the surgical robot system make it convenient to determine whether any surgical consumable has been left behind during the operation, and facilitate consumable checking, postoperative evaluation and the like.

Description

Surgical consumable checking method and surgical robot system
Technical Field
The invention relates to the technical field of robot-assisted surgery, in particular to a method for checking surgical consumables and a surgical robot system.
Background
Minimally invasive surgery is a technique for performing operations inside the human body through endoscopes such as laparoscopes and thoracoscopes. It causes less trauma, less pain and less bleeding, which effectively shortens patient recovery time, reduces patient discomfort, and avoids some of the harmful side effects of traditional open surgery.
A minimally invasive surgical robot system lets the operator observe tissue inside the patient through a two-dimensional or three-dimensional display at the master console, and remotely control the robotic arms and surgical instruments of the surgical robot to perform the operation. During robotic surgery, surgical consumables or auxiliary articles, such as sutures and hemostatic gauze, often need to be introduced into the human body. At present, the consumables used in an operation are managed by a bedside nurse, and the intraoperative consumables are checked after the operation. This workflow depends on the bedside nurse's records and is typically carried out only post-operatively; manually counting the types and quantities of consumables used during the operation is prone to oversight. Once a surgical consumable is left inside the patient's body, a serious medical accident is likely to result.
Disclosure of Invention
The invention aims to provide a surgical consumable checking method and a surgical robot system, so as to solve the problem that manually counted consumables used during an operation are prone to being missed.
To solve the above technical problem, according to a first aspect of the present invention, there is provided a surgical consumable checking method comprising:
acquiring first identification information of surgical consumables before the operation;
acquiring second identification information of surgical consumables after the operation through image recognition;
acquiring, through an endoscope, third identification information of surgical consumables retained in a predetermined region during the operation;
and determining whether the difference between the first identification information and the second identification information is the same as the third identification information, and outputting a determination result.
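The reconciliation step above amounts to a multiset comparison. The sketch below illustrates it with plain Python; the identifier strings and the `check_consumables` helper are illustrative assumptions, not taken from the patent:

```python
from collections import Counter

def check_consumables(first, second, third):
    """Compare the pre-op inventory (first) minus the post-op inventory
    (second) against the consumables seen retained via the endoscope (third).
    Each argument is a list of consumable identifiers. Returns True when
    every consumable is accounted for."""
    missing = Counter(first) - Counter(second)  # used but not returned
    return missing == Counter(third)

# Hypothetical case: one gauze deliberately retained and seen by the endoscope.
first = ["gauze", "gauze", "suture"]
second = ["gauze", "suture"]
third = ["gauze"]
print(check_consumables(first, second, third))  # True: difference matches
```

A `False` result would indicate a consumable unaccounted for, which is exactly the alarm condition the method is designed to surface.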
Optionally, the method for acquiring the first identification information of the preoperative surgical consumable includes:
acquiring fourth identification information of the surgical consumables through image identification;
determining the fourth identification information as the first identification information.
Optionally, the method for acquiring the first identification information of the preoperative surgical consumable includes:
identifying fourth identification information of the surgical consumables through the images;
acquiring fifth identification information according to input information of the surgical consumables input before the operation;
and if the fourth identification information is the same as the fifth identification information, determining the fourth identification information or the fifth identification information as the first identification information.
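The cross-check between the image-recognized information and the manually entered information is a multiset equality test. The helper below is a hypothetical sketch; its name and error handling are assumptions, not the patent's implementation:

```python
from collections import Counter

def confirm_first(fourth, fifth):
    """If the image-recognized identification information (fourth) agrees
    with the manually entered information (fifth), either may be adopted as
    the first identification information; otherwise the mismatch is surfaced
    so the consumables can be re-checked before the operation."""
    if Counter(fourth) == Counter(fifth):
        return list(fourth)  # adopted as the first identification information
    raise ValueError("image recognition and manual entry disagree; re-check the consumables")
```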
Optionally, the step of acquiring the second identification information or the fourth identification information includes:
acquiring image information of the surgical consumables;
extracting characteristic information of the surgical consumables according to the image information of the surgical consumables;
and obtaining the second identification information or the fourth identification information according to the characteristic information of the surgical consumables.
Optionally, the step of extracting the feature information of the surgical consumables according to the image information of the surgical consumables includes:
extracting characteristic information of the surgical consumables in the image information through a neural network;
and acquiring the second identification information or the fourth identification information of the surgical consumables through a trained classifier according to the characteristic information.
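A minimal sketch of the feature-then-classifier pipeline described above, with a toy hand-crafted feature extractor standing in for the neural network and a nearest-centroid rule standing in for the trained classifier. All names, features, and centroid values are illustrative assumptions:

```python
import math

def extract_features(gray_image):
    """Toy stand-in for the neural-network feature extractor: any mapping
    from an image to a feature vector suffices for illustration. Here we use
    mean intensity plus a crude horizontal-edge count on a grayscale image
    given as a list of pixel rows."""
    flat = [p for row in gray_image for p in row]
    mean = sum(flat) / len(flat)
    edges = sum(1 for row in gray_image
                for a, b in zip(row, row[1:]) if abs(a - b) > 50)
    return (mean, edges)

def classify(features, centroids):
    """Nearest-centroid rule standing in for the trained classifier:
    returns the consumable label whose centroid is closest in feature space."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# Hypothetical reference centroids learned offline for two consumable types.
centroids = {"gauze": (200.0, 0.0), "suture": (60.0, 8.0)}
bright_patch = [[210, 205, 198], [202, 207, 200]]
print(classify(extract_features(bright_patch), centroids))  # prints "gauze"
```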
Optionally, the method for extracting the feature information of the surgical consumables in the image information through the neural network includes:
preprocessing the image information to reduce the noise of the image information; and
extracting the feature information of the surgical consumables in the preprocessed image information.
Optionally, the method for preprocessing the image information to reduce the noise of the image information includes:
extracting a target region covering the target surgical consumables in the image information;
performing grayscale processing on the color image information within the target region;
scanning the grayed image information, extracting pixel values, and judging whether the target region is uniformly illuminated: if the illumination is not uniform, performing illumination correction on the target region so that its illumination becomes uniform; if the illumination is uniform, performing no correction; and
outputting the preprocessed image information.
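The preprocessing steps above can be sketched as follows. The uniformity threshold and the row-wise mean-levelling correction are illustrative choices, not taken from the patent:

```python
def preprocess(rgb_image, uniform_threshold=40):
    """Sketch of the preprocessing pipeline: grayscale conversion, a simple
    illumination-uniformity test on the extracted pixel values, and a
    mean-levelling correction when lighting is uneven. `rgb_image` is a list
    of rows of (r, g, b) tuples."""
    # 1. Grayscale via the standard luma weights.
    gray = [[int(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_image]
    # 2. Judge uniformity: compare each row's mean brightness to the global mean.
    global_mean = sum(map(sum, gray)) / sum(len(row) for row in gray)
    row_means = [sum(row) / len(row) for row in gray]
    uniform = all(abs(m - global_mean) <= uniform_threshold for m in row_means)
    # 3. If illumination is non-uniform, shift each row toward the global mean.
    if not uniform:
        gray = [[min(255, max(0, p + round(global_mean - m))) for p in row]
                for row, m in zip(gray, row_means)]
    return gray
```

For example, an image whose top half is much darker than its bottom half fails the uniformity test, and both halves are levelled toward the global mean before feature extraction.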
Optionally, the surgical consumable has a unique additional feature, and the step of acquiring the second identification information or the fourth identification information includes:
acquiring image information of the surgical consumables;
acquiring additional characteristics of the surgical consumables according to the image information of the surgical consumables;
and acquiring the second identification information or the fourth identification information according to the additional features.
Optionally, the step of acquiring the third identification information includes:
acquiring sixth identification information when the surgical consumables enter the visual field of the endoscope and seventh identification information when the surgical consumables leave the visual field of the endoscope;
and obtaining the third identification information according to the difference between the sixth identification information and the seventh identification information.
Optionally, the step of acquiring the third identification information includes:
judging, according to the video information provided by the endoscope, whether surgical consumables are present in each frame image:
if so, acquiring identification information of the surgical consumables in the frame image, and judging whether the surgical consumables are present in the frame images of the preceding and following frames;
and if the surgical consumables are absent from the preceding frame image, using the identification information of the surgical consumables in the current frame image as sixth identification information, and if the surgical consumables are absent from the following frame image, using the identification information of the surgical consumables in the current frame image as seventh identification information.
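The per-frame rule above can be sketched as follows. Frames are modelled as sets of detected consumable identifiers (the detector itself is out of scope), and an appearance that persists to the final frame is counted as not yet having left, which is one possible reading of a boundary case the claim leaves open:

```python
from collections import Counter

def track_consumables(frames):
    """Literal sketch of the per-frame rule: a consumable present in a frame
    but absent from the previous frame is recorded as entering the field of
    view (sixth information); present but absent from the next frame, as
    leaving it (seventh information)."""
    entered, left = [], []
    for i, frame in enumerate(frames):
        prev = frames[i - 1] if i > 0 else set()
        for item in frame:
            if item not in prev:
                entered.append(item)   # sixth identification information
            if i + 1 < len(frames) and item not in frames[i + 1]:
                left.append(item)      # seventh identification information
    return entered, left

def third_information(frames):
    """One reading of 'the difference between the sixth and the seventh
    identification information': entries into the field of view without a
    matching exit are taken as retained."""
    entered, left = track_consumables(frames)
    return sorted((Counter(entered) - Counter(left)).elements())
```

For instance, a gauze that appears in the video and is later withdrawn produces one entry and one exit, so it does not contribute to the third identification information; a gauze still in view at the end of the video does.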
To solve the above technical problem, according to a second aspect of the present invention, there is also provided a surgical robot system including: the endoscope, the identification platform and the main controller;
the main controller comprises an endoscope visual field identification module used for acquiring third identification information of the surgical consumables according to the video provided by the endoscope;
the identification platform is used for acquiring first identification information of preoperative surgical consumables and second identification information of postoperative surgical consumables;
the main controller is further configured to determine whether a difference between the first identification information and the second identification information is the same as the third identification information, and output a determination result.
Optionally, the endoscope visual field identification module is configured to obtain sixth identification information when the surgical consumables enter the endoscope visual field and seventh identification information when the surgical consumables leave the endoscope visual field according to the video information provided by the endoscope, and obtain the third identification information according to a difference between the sixth identification information and the seventh identification information.
Optionally, the endoscope visual field identification module is configured to judge, according to the video information provided by the endoscope, whether surgical consumables are present in each frame image; if so, to acquire the identification information of the surgical consumables in that frame image and to check whether the surgical consumables are present in the preceding and following frame images; and, if the surgical consumables are absent from the preceding frame image, to use the identification information of the surgical consumables in the current frame image as the sixth identification information, and, if they are absent from the following frame image, as the seventh identification information.
Optionally, the endoscope visual field identification module is configured to acquire feature information of the surgical consumables according to the frame image, and further acquire identification information of the surgical consumables according to the feature information, or,
the endoscope visual field identification module is used for acquiring the unique additional features of the surgical consumables according to the frame images and acquiring the identification information of the surgical consumables according to the additional features.
Optionally, the recognition platform comprises a support table, an image acquisition module and a recognition processor;
the supporting table top is used for placing surgical consumables;
the image acquisition module is arranged above the supporting table top and is in communication connection with the identification processor; the image acquisition module is used for acquiring image information of surgical consumables placed on the supporting table top and transmitting the image information to the identification processor;
the identification processor is used for identifying and acquiring second identification information of the postoperative surgical consumables through the image information acquired by the image acquisition module.
Optionally, the recognition platform comprises a support table, an image acquisition module and a recognition processor;
the supporting table top is used for placing surgical consumables;
the image acquisition module is arranged above the supporting table top and is in communication connection with the identification processor; the image acquisition module is used for acquiring image information of surgical consumables placed on the supporting table top and transmitting the image information to the identification processor;
the identification processor is used for identifying and acquiring fourth identification information of the surgical consumables before operation through the image information acquired by the image acquisition module, and confirming the fourth identification information as the first identification information.
Optionally, the identification processor includes an image identification module and an image processing module, the image identification module is respectively in communication connection with the image acquisition module and the image processing module, and the image identification module is configured to extract feature information of surgical consumables in the image information through a neural network according to the image information acquired by the image acquisition module, and transmit the feature information to the image processing module; and the image processing module acquires the identification information of the surgical consumables according to the characteristic information.
Optionally, the image recognition module includes: a preprocessing unit and an extraction unit;
the preprocessing unit is used for preprocessing the image information to reduce the noise of the image information;
the extraction unit is in communication connection with the preprocessing unit and is used for extracting the characteristic information of the surgical consumables in the image information preprocessed by the preprocessing unit through a trained neural network.
Optionally, the preprocessing unit includes: a characteristic selection subunit, a graying subunit and an illumination correction subunit;
the selection subunit is used for determining an image area where the characteristic information required to be extracted by the surgical consumables is located; the graying subunit is used for performing graying processing on the image of the target area when the target area is a color image; the illumination corrector subunit is used for correcting the target area image subjected to the graying processing when the illumination of the target area is not uniform.
Optionally, the image processing module includes a testing unit, and the testing unit is in communication connection with the image recognition module and is configured to obtain the recognition information from the feature information through a trained classifier.
Optionally, the surgical robot system further includes an input component, and the recognition platform includes an image acquisition module and a recognition processor;
the input assembly is in communication connection with the identification processor, the input assembly is used for inputting input information of surgical consumables, and the identification processor is used for acquiring fifth identification information of the surgical consumables according to the input information;
the image acquisition module is in communication connection with the identification processor and is used for acquiring image information of the surgical consumables and transmitting the image information to the identification processor;
the identification processor is used for acquiring fourth identification information of preoperative surgical consumables through the image information acquired by the image acquisition module, judging whether the fourth identification information is the same as the fifth identification information or not, and determining the fourth identification information or the fifth identification information as the first identification information if the fourth identification information is the same as the fifth identification information.
In summary, in the surgical consumable checking method and the surgical robot system according to the present invention, the surgical consumable checking method includes: acquiring first identification information of surgical consumables before the operation; acquiring second identification information of surgical consumables after the operation through image recognition; acquiring, through an endoscope, third identification information of surgical consumables retained in a predetermined region during the operation; and determining whether the difference between the first identification information and the second identification information is the same as the third identification information, and outputting a determination result.
The surgical consumable checking method and the surgical robot system make it convenient to determine whether any surgical consumable has been left behind during the operation, and facilitate consumable checking, postoperative evaluation and the like.
Drawings
It will be appreciated by those skilled in the art that the drawings are provided for a better understanding of the invention and do not constitute any limitation to the scope of the invention. Wherein:
FIG. 1 is a schematic view of a surgical robotic system according to an embodiment of the present invention;
FIG. 2 is a schematic view of a surgeon-side control of a surgical robotic system in accordance with an embodiment of the present invention;
FIG. 3 is a patient side control device of a surgical robotic system according to an embodiment of the present invention;
FIG. 4 is a schematic view of the imaging side of the surgical robotic system of an embodiment of the present invention;
FIG. 5 is a schematic diagram of an identification platform according to an embodiment of the invention;
FIG. 6 is a block diagram of an embodiment of a recognition platform;
FIG. 7 is a flowchart of a surgical consumables verification method according to one embodiment of the present invention;
FIG. 8 is a schematic view of image information of surgical consumables acquired by the image acquisition module according to an embodiment of the present invention;
FIGS. 9 a-9 d are schematic views of images of surgical consumables captured by an endoscope in accordance with one embodiment of the present invention;
FIG. 10 is a flowchart of the steps of obtaining first identification information, in accordance with one embodiment of the present invention;
FIG. 11 is a flowchart of the steps of obtaining second identification information or fourth identification information according to one embodiment of the invention;
FIG. 12 is a flowchart illustrating further steps for obtaining second identification information or fourth identification information, in accordance with one embodiment of the present invention;
FIG. 13 is a flowchart of the steps of obtaining third identification information, in accordance with one embodiment of the present invention;
fig. 14 is a flowchart of the steps of acquiring the second identification information or the fourth identification information by the additional feature according to an embodiment of the present invention.
In the drawings:
100-doctor end control device; 101-main operator; 102-an imaging device; 103-a foot-operated surgical control device;
200-a patient-side control device; 201-surgical adjustment arm; 202-a surgical working arm; 203-surgical instruments; 204-column; 205-surgical consumables;
300-image side; 301-an endoscope; 302-an endoscope processor; 303-a display device;
400-identifying a platform; 401-an image acquisition module; 402-supporting a table; 403-a display module; 404-an identification processor; 4041-image recognition module; 40411-pretreatment unit; 40412-extraction unit; 40421-a test unit; 40422-a training unit; 4042-image processing module; 405-a storage module;
500-main controller.
Detailed Description
To further clarify the objects, advantages and features of the present invention, a more particular description of the invention is given below with reference to the specific embodiments illustrated in the accompanying drawings. Note that the drawings are greatly simplified and not to scale; they are intended only to facilitate explanation of the embodiments. Further, the structures illustrated in the drawings are often only part of the actual structures, and individual drawings may emphasize different aspects and use different scales.
As used in this application, the singular forms "a", "an" and "the" include plural referents, and the term "or" is generally employed in a sense including "and/or", unless the content clearly dictates otherwise. The terms "a" and "an" are generally employed in the sense of "at least one", and "at least two" in the sense of "two or more". The terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; a feature defined as "first", "second" or "third" may thus explicitly or implicitly include one or at least two of such features. The term "proximal" generally refers to the end near the operator, and "distal" to the end near the patient (i.e., near the lesion); "one end" and "the other end", like "proximal" and "distal", generally refer to the corresponding two parts rather than only the end points.
The invention aims to provide a method for checking surgical consumables and a surgical robot system, and aims to solve the problem that consumables used in the existing manual statistics operation are easy to leak.
The following description refers to the accompanying drawings.
Reference is now made to fig. 1 to 14, which are described above in the brief description of the drawings.
As shown in fig. 1, an embodiment of the present invention provides a surgical robot system, which includes a doctor-side control device 100, a patient-side control device 200, an image side 300, and a main controller 500. That is, the surgical robot system is a master-slave teleoperated robot, but the present invention is not limited to the master-slave teleoperated robot, and other surgical robots using an endoscope to assist the surgical operation are within the scope of the present invention.
Referring to fig. 2, the doctor-side control device 100 is the operation end of the surgical robot and includes a main manipulator 101 mounted thereon. The main manipulator 101 receives the hand motion of the operator as the motion control signal input of the whole system. Optionally, the main controller 500 is also disposed on the doctor-side control device 100. Preferably, the doctor-side control device 100 further includes an imaging device 102, which provides the operator with a stereoscopic image and with the surgical operation information needed to perform the operation. The operation information includes the types and quantities of surgical instruments and surgical consumables, their poses in the abdomen, the shape and arrangement of the patient's organ tissues and the blood vessels of surrounding organ tissues, and the like. Optionally, the doctor-side control device 100 further includes a foot-operated surgical control device 103, through which the operator can input related operation instructions such as electrosurgical cutting and electrocoagulation.
Referring to fig. 3, the patient-side control device 200 is the implementation platform of the surgical robot system and includes a column 204 and a surgical implementation assembly mounted thereon. The surgical implementation assembly includes a robotic arm and a surgical instrument 203. In one embodiment, the robotic arm includes a surgical adjustment arm 201 and a surgical working arm 202. The surgical working arm 202 is a mechanism with a mechanical remote center of motion, used to drive the surgical instrument 203 around that fixed point, and the surgical adjustment arm 201 is used to adjust the position of the fixed point within the working space. In another embodiment, the robotic arm is a spatial mechanism with at least six degrees of freedom, driving the surgical instrument 203 around an actively maintained fixed point under program control. The surgical instrument 203 is used to perform specific surgical operations, such as clamping, cutting or shearing.
The main controller 500 is communicatively connected with the doctor-side control device 100 and the patient-side control device 200, respectively, and controls the movement of the surgical implementation assembly according to the movement of the main manipulator 101. Specifically, the main controller includes a master-slave mapping module, which obtains the end pose of the main manipulator 101 and a predetermined master-slave mapping relationship, derives the desired end pose of the surgical implementation assembly, and then controls the robotic arm to drive the surgical instrument 203 to that pose. Further, the master-slave mapping module also receives instrument function operation instructions (such as electrosurgical cutting and electrocoagulation instructions) and controls the energy driver of the instrument to release energy to carry out the corresponding operation.
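A deliberately simplified sketch of such a master-slave mapping, assuming a plain motion-scaling relationship. Real systems use full 6-DOF transforms, clutching and filtering; the function name and the scale factor here are illustrative assumptions:

```python
def map_master_to_slave(master_pose, scale=0.5):
    """Toy master-slave mapping: the desired instrument-tip position is the
    master hand position scaled by a motion scale factor, with orientation
    passed through unchanged."""
    (x, y, z), orientation = master_pose
    return (x * scale, y * scale, z * scale), orientation

# A 2 cm hand motion commands a 1 cm instrument motion at the default scale.
print(map_master_to_slave(((2.0, 4.0, -6.0), "q")))  # ((1.0, 2.0, -3.0), 'q')
```

Scaling down master motion in this way is a common design choice in teleoperated surgery, trading workspace for precision.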
As shown in fig. 4, the image end 300 includes an endoscope 301 and an endoscope processor 302 communicatively connected to the endoscope 301. The endoscope 301 acquires surgical operation information inside the cavity (the body cavity of the patient). The endoscope processor 302 performs imaging processing on the operation information acquired by the endoscope 301 and transmits it to the imaging device 102, allowing the operator to observe the operation directly. Optionally, the image end 300 further includes a display device 303. The display device 303 is communicatively coupled to the endoscope processor 302 and provides a real-time display of the surgical operation information to an assistant operator (e.g., a nurse).
In operation, the operator sits in front of the doctor-side control device 100 outside the sterile area, observes the returned operation information through the imaging device 102, and controls the motion of the surgical execution assembly and the endoscope by operating the main manipulator 101 to complete the various surgical operations.
Further, the main controller 500 also includes an endoscope field-of-view recognition module, which is communicatively connected to the endoscope processor 302 and is used to acquire third identification information of the surgical consumables retained in a predetermined area during the operation.
Illustratively, the endoscope field-of-view recognition module is configured to be triggered to acquire sixth identification information when a surgical consumable enters the endoscope field of view, to be triggered to acquire seventh identification information when a surgical consumable leaves the endoscope field of view, and to obtain the third identification information from the difference between the sixth and seventh identification information. Specifically, during a surgical procedure, a surgical consumable must appear within the field of view of the endoscope 301 whenever it is used or removed: to place or retrieve a consumable, the operator must adjust the endoscope 301 so that the position where the consumable is used can be clearly observed. Therefore, whether a surgical consumable enters or leaves the cavity can be determined by comparing preceding and succeeding frame images acquired by the endoscope 301. In this embodiment, the "preceding and succeeding frames" may be consecutive or may be several frames apart; this embodiment does not specifically limit it.
As shown in fig. 9a to 9d, fig. 9a and 9b show two frame images adjacent in time. In the first frame image (fig. 9a) no surgical consumable 203 is within the endoscope field of view, while in the second frame image (fig. 9b) a surgical consumable 203 is present; it is therefore considered that the operator has fed the surgical consumable 203 into the patient's body through the surgical instrument. By comparing preceding and succeeding frame images, when the pair shown in fig. 9a and 9b appears, the endoscope field-of-view recognition module is triggered to take the identification information of the surgical consumable 203 in the second frame image (fig. 9b) as the sixth identification information. Similarly, fig. 9c and 9d show two frame images adjacent in time: the surgical consumable 203 is present in the third frame image (fig. 9c) and absent in the fourth frame image (fig. 9d), from which it is determined that the surgical consumable 203 has been removed from the cavity. When the pair shown in fig. 9c and 9d appears, the endoscope field-of-view recognition module is triggered to take the identification information of the surgical consumable 203 in the third frame image (fig. 9c) as the seventh identification information.
In some special cases, when surgical consumables are deliberately left in the cavity, only the sixth identification information can be acquired and no seventh identification information exists for a removal; in that case the seventh identification information may be set to null, and the third identification information is obtained from the difference between the sixth identification information and the (null) seventh identification information. Because the endoscope 301 itself acquires the usage information of the surgical consumables 203, only a software update is required and no additional hardware is needed, keeping the cost low and the reliability high.
This embodiment does not specifically limit the method of acquiring identification information within the endoscope field of view. For example, the endoscope field-of-view recognition module may obtain feature information of the surgical consumables, such as color, texture, and shape, from the frame image, and then derive the identification information from that feature information, e.g., through artificial intelligence. Alternatively, a unique additional feature may be attached to each surgical consumable before the operation, and the endoscope field-of-view recognition module acquires the identification information of the consumable from that additional feature.
Furthermore, the endoscope field-of-view recognition module acquires, through the endoscope processor 302, the video information captured by the endoscope and judges whether surgical consumables are present in each frame image. If so, it acquires the identification information of those consumables and also judges whether they are present in the immediately preceding and succeeding frame images: if a consumable is absent in the preceding frame image, its identification information in the current frame is taken as the sixth identification information; if it is absent in the succeeding frame image, its identification information in the current frame is taken as the seventh identification information. Judging each frame image covers both the case in which a consumable appears where none existed before and the case in which a new consumable appears alongside existing ones. To facilitate checking the quantity and type of the surgical consumables 203, the endoscope field-of-view recognition module marks the video information in segments whenever the sixth or seventh identification information is acquired: when a surgical consumable 203 is recognized as entering or leaving, the module marks the video of several frames before and after that moment, so that the footage can be used during the postoperative consumable check and the operator can review and verify it conveniently.
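The frame-by-frame scan described above can be sketched as follows. This is a minimal illustration under assumptions not in the original disclosure: each frame is presumed already reduced (by the recognition step) to a set of consumable identifiers visible in it, the function name is invented, and a consumable still visible in the last frame is counted as retained rather than removed.

```python
from collections import Counter

def track_consumables(frames):
    """frames: per-frame sets of consumable IDs recognized in the endoscope view.

    Returns (sixth, seventh, third): entries observed entering the view,
    entries observed leaving it, and their difference, i.e. the consumables
    presumed to remain in the cavity (third identification information).
    """
    sixth, seventh = Counter(), Counter()
    for k, ids in enumerate(frames):
        prev = frames[k - 1] if k > 0 else set()
        # Assumption: at the end of the video a still-visible consumable has
        # not left, so it must not be counted in the seventh identification.
        nxt = frames[k + 1] if k + 1 < len(frames) else ids
        for cid in ids:
            if cid not in prev:   # absent in the preceding frame: entered
                sixth[cid] += 1
            if cid not in nxt:    # absent in the succeeding frame: left
                seventh[cid] += 1
    return sixth, seventh, sixth - seventh

# A sponge is fed in, taken out, then fed in again and retained:
frames = [set(), {"sponge"}, {"sponge"}, set(), {"sponge"}, {"sponge"}]
sixth, seventh, third = track_consumables(frames)
```

The Counter difference at the end mirrors the patent's "difference between the sixth identification information and the seventh identification information"; when nothing is ever removed, `seventh` stays empty, which corresponds to the null-information case described below.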
Further, the surgical robot further includes an identification platform 400, wherein the identification platform 400 is used for acquiring first identification information of the preoperative surgical consumables and second identification information of the postoperative surgical consumables. As shown in fig. 5 and 6, the recognition platform 400 includes: an image acquisition module 401, a support table 402, and an identification processor 404. Wherein, the support table 402 is used for placing surgical consumables; the image acquisition module 401 is arranged above the supporting table 402 and is in communication connection with the recognition processor 404; the image acquisition module 401 is configured to acquire image information of surgical consumables placed on the supporting table 402 and transmit the image information to the recognition processor 404; the identification processor 404 is in communication connection with the image acquisition module 401, and the identification processor 404 is configured to obtain identification information of the surgical consumables according to the image information of the surgical consumables. Thus, the identification processor 404 may obtain the second identification information of the surgical consumables after the operation or the first identification information of the surgical consumables before the operation. Preferably, the support platform 402 includes a plurality of predefined areas, each predefined area for receiving a surgical consumable 205. As shown in FIG. 8, the same type of surgical consumables 205 are placed in the same column and different surgical consumables 205 are placed in different columns. Thus, the surgical consumables 205 are arranged according to the classification of the categories, and the surgical consumables 205 are conveniently identified by the arrangement of the same categories.
Furthermore, the main controller 500 is also configured to judge whether the difference between the first identification information and the second identification information is the same as the third identification information, and to output the judgment result. If it is, the surgical consumables were either used inside the human cavity as expected or were all taken out of the cavity; if it is not, a surgical consumable may have been left behind or consumed abnormally. In the latter case, the main controller 500 may optionally control the alarm device to output a first alarm signal reminding the operator to perform a check. The present invention does not specifically limit the first alarm signal; a person skilled in the art can implement it with existing techniques, such as an error prompt via sound, light, or the user interaction interface.
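The controller's consistency check reduces to a multiset comparison: the consumables that disappeared between the preoperative and postoperative counts should exactly match those observed to remain in the cavity. A minimal sketch, in which the function name and the dict-of-counts representation of identification information are assumptions:

```python
from collections import Counter

def check_consumables(first, second, third):
    """first/second: pre-/post-operative counts per identification feature
    classification; third: counts of consumables retained in the cavity.
    Returns True when the difference matches the endoscope observation."""
    used = Counter(first) - Counter(second)  # consumed or left in the body
    return used == Counter(third)

# Two gauzes unaccounted for on the table, two observed retained in-cavity:
ok = check_consumables({"gauze": 5, "suture": 2},
                       {"gauze": 3, "suture": 2},
                       {"gauze": 2})
# A mismatch (one gauze unexplained) would trigger the first alarm signal:
bad = check_consumables({"gauze": 5}, {"gauze": 3}, {"gauze": 1})
```

`Counter` subtraction discards zero counts, so consumables returned to the table intact drop out of the comparison automatically.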
In this embodiment, the recognition processor 404 and the main controller 500 are separately provided. In other embodiments, the identification processor 404 is designed integrally with the main controller 500, for example, the respective functions of the identification processor 404 are integrated as functional modules in the main controller 500 together with other modules (e.g., master-slave mapping module) in the main controller 500.
It should be noted that the present embodiment does not specifically limit the type of the surgical consumables, such as sutures, medical sponges, hemostatic gauze, etc. Also, the present embodiment is not particularly limited as to the type of surgical instrument, such as a scalpel, scissors, needle, or the like. Preferably, the identification information includes identification feature classifications of the surgical consumables, and the number of surgical consumables under each of the identification feature classifications. Correspondingly, the first identification information of the surgical consumables comprises identification feature classifications of the preoperative surgical consumables and the number of the surgical consumables under each identification feature classification; the second identification information includes identification feature classifications of the postoperative surgical consumables and a number of surgical consumables under each identification feature classification. Further, the identification feature classification includes the category and specification of the surgical consumables, such as different types of sponges.
In some embodiments, the support platform 402 includes a plurality of predefined areas, each predefined area for housing a surgical consumable 205. As shown in FIG. 7, the same type of surgical consumables 205 are placed in the same column and different surgical consumables 205 are placed in different columns. Thus, the surgical consumables 205 are arranged according to the classification of the categories, and the surgical consumables 205 are conveniently identified by the arrangement of the same categories.
In some embodiments, the image acquisition module 401 is disposed above the support table 402 by an arm and is communicatively coupled to the recognition processor 404. The image acquisition module 401 is configured to acquire image information of surgical consumables placed on the supporting table, and transmit the image information to the recognition processor 404.
In some embodiments, the recognition processor 404 includes an image recognition module 4041 and an image processing module 4042. The image recognition module 4041 is communicatively connected to the image acquisition module 401 and to the image processing module 4042, respectively; it extracts, through a neural network, the feature information of the surgical consumables from the image information obtained by the image acquisition module 401 and transmits that feature information to the image processing module 4042. The image processing module 4042 then obtains the identification information of the surgical consumables from the feature information. Optionally, the feature information includes at least one of the shape, texture, and color of the corresponding surgical consumable in the image information.
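To make the shape/texture/color feature vector concrete, the sketch below computes crude hand-crafted proxies for each cue. This is a deliberately simplified stand-in, not the neural-network extractor the embodiment describes; the function name, the specific descriptors, and the dark-background assumption for the shape cue are all illustrative assumptions.

```python
import numpy as np

def extract_features(img):
    """Crude color/texture/shape descriptor for a consumable image patch.

    img: HxWx3 uint8 array. A hand-crafted stand-in for the learned
    feature extractor; a real system would train these features instead.
    """
    pixels = img.reshape(-1, 3).astype(float)
    mean_color = pixels.mean(axis=0)        # color cue: mean R, G, B
    gray = pixels.mean(axis=1)
    texture = gray.std()                    # texture cue: intensity spread
    fill = float((gray > 0).mean())         # shape cue: foreground fraction,
                                            # assuming a dark background
    return np.concatenate([mean_color, [texture, fill]])

# Uniform red patch: pure red mean color, zero texture, fully "filled":
patch = np.zeros((8, 8, 3), dtype=np.uint8)
patch[..., 0] = 255
feats = extract_features(patch)
```

The resulting five-dimensional vector is the kind of input the classifier in the test unit (described below in this document) would consume.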
Optionally, the image recognition module 4041 includes a preprocessing unit 40411 and an extraction unit 40412. The preprocessing unit 40411 is configured to preprocess the image information to reduce noise of the image information, and please refer to the following detailed description; the extraction unit 40412 is in communication connection with the preprocessing unit 40411 and is used for extracting feature information of the surgical consumables in the image information preprocessed by the preprocessing unit 40411 through the trained neural network.
Optionally, the image processing module 4042 includes a test unit 40421. The test unit 40421 is communicatively connected to the image recognition module 4041 and feeds the feature information to a trained classifier to obtain the identification information. This embodiment does not specifically limit the type of classifier; examples include a Support Vector Machine (SVM), a Bayesian classifier, the KNN algorithm, the AdaBoost method, and the Rocchio algorithm. Further, the image processing module 4042 also includes a training unit 40422 for training the classifier. For example, when the classifier is a support vector machine, the nonlinear training set is mapped into a linearly separable one in a high-dimensional space through a suitable kernel function, such as a Gaussian radial basis function; the transformed training set is then fed into the support vector machine, and the data are evaluated and optimized to obtain the trained support vector machine.
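The role of the Gaussian radial basis function, i.e. making a nonlinearly separable training set separable in a higher-dimensional space, can be illustrated without any ML library. The sketch below pairs the RBF kernel with a simple kernel perceptron as a stand-in for the SVM solver (which the original does not specify); the XOR-style data and the gamma value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian radial basis function: k(a, b) = exp(-gamma * ||a - b||^2)."""
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * (d ** 2).sum(axis=2))

# XOR-style data: not linearly separable in the input space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])

K = rbf_kernel(X, X)
alpha = np.zeros(len(X))
for _ in range(10):                       # dual (kernel) perceptron epochs
    for i in range(len(X)):
        s = np.sum(alpha * y * K[:, i])
        if (1 if s > 0 else -1) != y[i]:  # misclassified: strengthen point i
            alpha[i] += 1.0

# In the kernel-induced feature space the XOR labels become separable:
pred = np.where(K @ (alpha * y) > 0, 1, -1)
```

After a few epochs the dual weights separate all four points, which is exactly the effect the kernel trick is invoked for above; an actual SVM would additionally maximize the margin.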
Optionally, the preprocessing unit 40411 includes a feature selection subunit, a graying subunit, and an illumination correction subunit. The feature selection subunit determines the image area containing the feature information to be extracted from the surgical consumables; the graying subunit performs graying on the image of the target area when the target area is a color image; and the illumination correction subunit corrects the grayed target area image when the illumination of the target area is uneven. This embodiment does not specifically limit the method of graying. For example, for an image in RGB format, the G value, to which human eyes are most sensitive, may be taken as the gray value, or the R, G, and B values may be weighted to obtain the gray value; for a YUV- or YCbCr-encoded image, the Y value of each pixel is taken directly as the gray value. Likewise, this embodiment does not specifically limit the method of correcting uneven illumination; examples include algorithms based on Retinex theory, Histogram Equalization (HE), the unsharp mask method, morphological filtering, methods based on a spatial illuminance map, and adaptive correction based on a two-dimensional gamma function.
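The two graying strategies mentioned for RGB images can be written out directly. Rec. 601-style luma weights are shown for the weighted variant; the exact weights are an assumption, since the text only says the R, G, and B values "may be weighted".

```python
import numpy as np

REC601 = np.array([0.299, 0.587, 0.114])  # assumed weights for R, G, B

def gray_weighted(img):
    """Weighted sum of the R, G, B channels, rounded back to uint8."""
    return np.round(img.astype(float) @ REC601).astype(np.uint8)

def gray_green(img):
    """Simpler variant: take the eye-sensitive G channel as the gray value."""
    return img[..., 1]

white = np.full((2, 2, 3), 255, dtype=np.uint8)
red = np.zeros((2, 2, 3), dtype=np.uint8)
red[..., 0] = 255
```

Note how the two variants disagree on saturated colors (a pure-red pixel grays to roughly 76 under the weighted rule but to 0 under the G-channel rule), which is why the choice is left open by the embodiment.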
Further, in some embodiments, the identification processor 404 is further configured to identify and acquire fourth identification information of the surgical consumables before the operation through the image information acquired by the image acquisition module 401, and determine the fourth identification information as the first identification information. In some embodiments, the fourth identification information is not verified by the identification processor 404, and the identification processor 404 directly transmits the fourth identification information as the first identification information to the master controller 500. In other embodiments, the identification processor 404 may also check the fourth identification information, and then transmit the fourth identification information as the first identification information to the master controller 500.
Specifically, the identification platform 400 also includes an input component. The input component is communicatively coupled to the identification processor 404 and is used to enter information about the surgical consumables. The input information is associated with the fifth identification information of the surgical consumables and may be identification information of at least part of the consumables, or information attached to them such as a two-dimensional code or a bar code. The identification processor 404 obtains the fifth identification information of the surgical consumables from the input information. In an alternative embodiment, the input component includes a microphone, mouse, keyboard, scanning gun, or NFC card reader communicatively connected to the identification processor 404, and the image processing module 4042 obtains the fifth identification information through the information entered by the input component. Of course, in other embodiments, the identification processor 404 may forgo acquiring the fourth identification information through the image acquisition module 401 and directly transmit the entered fifth identification information to the main controller 500 as the first identification information.
More specifically, the system is preset with the association between the type and specification of the surgical consumables and the input information. Before the operation, the operator enters the input information of the surgical consumables through the input component, and the identification processor 404 derives the identification feature classification and number of the required surgical consumables (i.e., the fifth identification information). The operator then places the surgical consumables on the support table 402, and the image acquisition module 401 acquires the identification feature classification and number of the consumables actually placed on the support table 402 (i.e., the fourth identification information). Comparing the fourth identification information with the fifth identification information shows whether the recorded consumables match those identified from the images. If the two are identical, the fourth identification information is verified successfully, and the identification processor 404 takes the fourth (or fifth) identification information as the first identification information of the preoperative surgical consumables. If they differ, the identification processor 404 controls the alarm device to output a second alarm signal. The present invention does not specifically limit the second alarm signal; it can be implemented in the same forms as the first alarm signal but should be distinguishable from it.
Preferably, the surgical robot further comprises a display module 403. The display module 403 is in communication connection with the main controller, and is configured to display at least one of the first identification information, the second identification information, the third identification information, and the determination result. Further, the input assembly includes a touch member disposed on the display module 403, and both form a touch screen on which an operator can directly input information.
In an exemplary embodiment, when the difference between the first identification information and the second identification information matches the third identification information, the system indicates a pass, for example by showing green or a check mark (√) on the display module; after confirming there is no error, the operator can confirm that the operation is finished.
When the difference between the first identification information and the second identification information does not match the third identification information, the system indicates a failure, for example by showing red or a cross (×) on the display module. The operator must then find the cause; if it is confirmed that the surgical consumables were consumed or left in the human body, manual intervention can be carried out, and a record and a treatment plan should be prepared.
In an alternative embodiment, the surgical robot further includes a storage module. The storage module is communicatively connected to the endoscope field-of-view recognition module and stores the video information of several frames before and after the moment a surgical consumable 203 enters or leaves. The storage module is also communicatively connected to the endoscope, the image acquisition module 401, and the identification processor 404, and stores information such as the video captured by the endoscope, the image information acquired by the image acquisition module 401, and the preprocessed images.
Referring to fig. 7, based on the above configuration, an embodiment of the present invention provides a method for checking surgical consumables, which includes:
step SA1: acquiring first identification information of preoperative surgical consumables;
step SA2: acquiring second identification information of the postoperative surgical consumables through image identification;
step SA3: acquiring, through an endoscope, third identification information of surgical consumables retained in a predetermined area during the operation;
step SA4: judging whether the difference between the first identification information and the second identification information is the same as the third identification information, and outputting the judgment result.
The identification information of the surgical consumables comprises first identification information obtained before surgery and second identification information obtained after surgery. In some embodiments, the first identification information may be obtained by means of image recognition.
Optionally, the method for acquiring the first identification information of the surgical consumables before the operation in step SA1 includes:
step SA11a: acquiring fourth identification information of the surgical consumables through image recognition; in this step, the surgical consumables are placed on the support table 402 before the operation, and the specific identification method for consumables placed on the support table 402 is as described above and is not repeated here;
step SA12a: determining the fourth identification information as the first identification information.
Optionally, referring to fig. 10, the method for acquiring the first identification information of the surgical consumables before the operation in step SA1 includes:
step SA11b: acquiring fourth identification information of the surgical consumables through image identification;
step SA12b: acquiring fifth identification information of the surgical consumables input before the operation;
step SA13b: judging whether the fourth identification information is the same as the fifth identification information;
if the fourth identification information is the same as the fifth identification information, determining the fourth identification information or the fifth identification information as the first identification information;
and if the fourth identification information is different from the fifth identification information, triggering a second alarm signal.
Preferably, as shown in fig. 11, the step of acquiring the second identification information or the fourth identification information of the surgical consumables through image recognition includes:
step SA21a: acquiring image information of the surgical consumables;
step SA22a: extracting characteristic information of the surgical consumables according to the image information of the surgical consumables;
step SA23a: and obtaining the second identification information or the fourth identification information according to the characteristic information of the surgical consumables.
Further, referring to fig. 12, the step of extracting the feature information of the surgical consumables according to the image information of the surgical consumables includes:
step SA25: extracting characteristic information of the surgical consumables in the image information through a neural network;
step SA26: and acquiring the second identification information or the fourth identification information of the surgical consumables through a trained classifier according to the characteristic information.
Preferably, the method for extracting the characteristic information of the surgical consumables in the image information through the neural network comprises the following steps:
step SC21: preprocessing the image information to reduce noise of the image information;
step SC22: and extracting the characteristic information of the surgical consumables in the preprocessed image information.
In the step of extracting the feature information of the image information through the neural network, the preprocessing unit 40411 first preprocesses the image information of the surgical consumables acquired by the image acquisition module 401 to reduce image noise; the feature information of the surgical consumables in the image is then extracted, preferably prominent features such as color, shape, and texture.
Preferably, before the feature information of the surgical consumables in the image information is extracted in step SA23, the method for identifying the surgical consumables further includes:
step SC231: extracting from the image information the image area that contains the target surgical consumables, i.e., extracting the target area. The target area may be, for example, the image area occupied by all or part of a particular surgical consumable in the acquired image. The target area is set for two reasons: on the one hand, the items resting on the support table 402 may include not only surgical consumables but also other items; on the other hand, a surgical consumable may exhibit many features, not all of which help acquire the identification information, so some must be excluded to reduce the computation of the subsequent recognition. This embodiment does not specifically limit the method of extracting the image area containing the target surgical consumables; for example, the target area may be determined by a manually set rule or designated manually.
Step SC232: and carrying out gray processing on the color image information in the target area range.
Step SC233: scanning the grayed image information and extracting pixel values to judge whether the target area is uniformly illuminated; if not, performing illumination correction on the target area so that its illumination becomes uniform, and if so, performing no correction. Further, the corrected image information is judged again for illumination uniformity, and if the illumination is still uneven, correction is repeated until the illumination of the target area is uniform.
Step SC234: and outputting the preprocessed image information.
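Steps SC231 to SC234 can be sketched end to end as follows. The bounding box, the left/right-half uniformity test, and the column-mean flattening used as the correction are illustrative assumptions; the original leaves both the region-selection rule and the correction algorithm open (Retinex, histogram equalization, gamma correction, etc.).

```python
import numpy as np

def preprocess(img, bbox, tol=10.0):
    """img: HxWx3 uint8; bbox: (x0, y0, x1, y1) target area (SC231).
    Returns the grayed, illumination-flattened target area (SC232-SC234)."""
    x0, y0, x1, y1 = bbox
    roi = img[y0:y1, x0:x1].astype(float)             # SC231: crop target area
    gray = roi @ np.array([0.299, 0.587, 0.114])      # SC232: graying
    # SC233: crude uniformity test comparing left/right half brightness,
    # repeated (as the text requires) until the target area is uniform.
    half = gray.shape[1] // 2
    while abs(gray[:, :half].mean() - gray[:, half:].mean()) > tol:
        # Flatten per-column brightness toward the global mean.
        gray = gray - gray.mean(axis=0, keepdims=True) + gray.mean()
    return gray                                       # SC234: output

# Target area lit unevenly: left half dark (50), right half bright (200).
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :5] = 50
img[:, 5:] = 200
out = preprocess(img, (0, 0, 10, 10))
```

One correction pass equalizes the column means exactly here, so the loop's re-check (the "judge again" step) terminates immediately; real illumination fields would need one of the stronger correction methods listed above.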
Preferably, in step SA26, the feature information is fed to a trained classifier to obtain the identification information. Further, when the classifier is a support vector machine, the nonlinear training set is mapped into a linearly separable one in a high-dimensional space through a kernel function, such as a Gaussian radial basis function; the transformed training set is then fed into the support vector machine, and the data are evaluated and optimized to obtain the trained support vector machine.
As shown in fig. 14, if the surgical consumable includes an additional feature having uniqueness, the step of acquiring the second identification information or the fourth identification information based on the surgical consumable having the additional feature having uniqueness includes:
step SB21b: acquiring image information of the surgical consumables;
step SB22b: acquiring additional characteristics of the surgical consumables according to the image information of the surgical consumables;
step SB23b: and acquiring the second identification information or the fourth identification information according to the additional features.
In an exemplary embodiment, as shown in fig. 13, the step of acquiring the third identification information in step SA3 includes:
step SA31: acquiring sixth identification information when the surgical consumables enter the visual field of the endoscope 301 and seventh identification information when the surgical consumables leave the visual field of the endoscope 301;
step SA32: and obtaining the third identification information according to the difference between the sixth identification information and the seventh identification information.
Further, the step of acquiring the sixth identification information and the seventh identification information includes:
judging whether surgical consumables exist in the frame image of each frame according to the video information provided by the endoscope:
if so, acquiring identification information of the surgical consumables in that frame image, and judging whether surgical consumables exist in the frame images immediately before and after it;
if no surgical consumable exists in the previous frame image, taking the identification information of the surgical consumables in the current frame image as the sixth identification information; if no surgical consumable exists in the subsequent frame image, taking the identification information of the surgical consumables in the current frame image as the seventh identification information.
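The per-frame judgment above can be sketched as a single pass over the video, representing each frame by the set of consumable IDs recognized in it (a simplifying assumption):

```python
def entry_exit_info(frames):
    """frames: list of sets, each holding the consumable IDs recognized in one frame.

    Per the rule above: an ID absent from the previous frame contributes to the
    sixth identification information (entering the view); an ID absent from the
    next frame contributes to the seventh (leaving the view).
    """
    entering, leaving = [], []
    for i, ids in enumerate(frames):
        prev_ids = frames[i - 1] if i > 0 else set()
        next_ids = frames[i + 1] if i + 1 < len(frames) else set()
        for cid in sorted(ids):
            if cid not in prev_ids:
                entering.append(cid)   # sixth identification information
            if cid not in next_ids:
                leaving.append(cid)    # seventh identification information
    return entering, leaving
```

Note the sketch follows the patent's rule literally, so a consumable still visible in the final frame is treated as having left the view; the patent does not address that edge case.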
In summary, in the surgical consumable checking method and the surgical robot system according to the present invention, the surgical consumable checking method includes: acquiring first identification information of preoperative surgical consumables; acquiring second identification information of postoperative surgical consumables through image identification; acquiring, through an endoscope, third identification information of surgical consumables retained in a preset area during the operation; and judging whether the difference between the first identification information and the second identification information is the same as the third identification information, and outputting the judgment result.
The surgical consumable checking method and the surgical robot system make it easy to judge whether any surgical consumable has been left behind during the operation, facilitating consumable checking, postoperative evaluation, and the like.
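Putting the pieces together, the overall check can be sketched as follows, again treating each kind of identification information as a set of consumable IDs (a representation the patent leaves unspecified):

```python
def check_surgical_consumables(first, second, third):
    """Return True when the consumables missing after the operation
    (first minus second) exactly match those the endoscope saw retained
    in the preset area (third); False signals a possible omission."""
    return set(first) - set(second) == set(third)
```

A consumable counted before the operation but absent afterward is acceptable only if the endoscope recorded it as deliberately retained; any other mismatch triggers a negative judgment result.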
The above description is only of the preferred embodiments of the present invention and is not intended to limit its scope; any variations and modifications made by those skilled in the art based on the above disclosure fall within the scope of the appended claims.

Claims (19)

1. A surgical consumable checking method, comprising:
acquiring first identification information of preoperative surgical consumables;
acquiring second identification information of the postoperative surgical consumables through image identification;
acquiring, through an endoscope, third identification information of surgical consumables retained in a preset area during the operation;
judging whether the difference between the first identification information and the second identification information is the same as the third identification information, and outputting a judgment result;
the step of acquiring the third identification information includes:
acquiring sixth identification information when the surgical consumables enter the visual field of the endoscope and seventh identification information when the surgical consumables leave the visual field of the endoscope;
obtaining the third identification information according to the difference between the sixth identification information and the seventh identification information;
the sixth identification information and the seventh identification information are determined according to whether surgical consumables exist in frame images of a current frame, a frame before the current frame, and a frame after the current frame of the video of the endoscope.
2. The surgical consumables checking method according to claim 1, wherein the method of acquiring the first identification information of the surgical consumables before the operation includes:
acquiring fourth identification information of the surgical consumables through image identification;
determining the fourth identification information as the first identification information.
3. The method of checking surgical consumables according to claim 1, wherein the method of acquiring first identification information of the surgical consumables before the operation includes:
acquiring fourth identification information of the surgical consumables through image identification;
acquiring fifth identification information according to input information of the surgical consumables input before the operation;
if the fourth identification information is the same as the fifth identification information, determining the fourth identification information or the fifth identification information as the first identification information.
4. The method according to claim 2 or 3, wherein the step of acquiring the second identification information or the fourth identification information includes:
acquiring image information of the surgical consumables;
extracting characteristic information of the surgical consumables according to the image information of the surgical consumables;
obtaining the second identification information or the fourth identification information according to the characteristic information of the surgical consumables.
5. The method for checking surgical consumables according to claim 4, wherein the step of extracting the characteristic information of the surgical consumables according to the image information of the surgical consumables includes:
extracting characteristic information of the surgical consumables in the image information through a neural network;
and acquiring the second identification information or the fourth identification information of the surgical consumables through a trained classifier according to the characteristic information.
6. The method for checking surgical consumables according to claim 5, wherein the method for extracting feature information of a surgical instrument from the image information through a neural network includes:
preprocessing the image information to reduce noise of the image information; and
extracting the characteristic information of the surgical instrument from the preprocessed image information.
7. The method of claim 6, wherein the method of preprocessing the image information to reduce noise of the image information comprises:
extracting a target region covering a target surgical instrument in the image information;
carrying out graying processing on the color image information within the target area;
scanning the grayscale image information, extracting pixel values, and judging whether the target area is uniformly illuminated: if the illumination is not uniform, performing illumination correction on the target area so that its illumination becomes uniform; if the illumination is uniform, performing no correction; and
outputting the preprocessed image information.
8. The method for checking surgical consumables according to claim 2, wherein the surgical consumables have a unique additional feature, and the step of acquiring the second identification information or the fourth identification information includes:
acquiring image information of the surgical consumables;
acquiring additional characteristics of the surgical consumables according to the image information of the surgical consumables;
acquiring the second identification information or the fourth identification information according to the additional features.
9. The surgical consumable checking method according to claim 1, wherein the step of acquiring the sixth identification information and the seventh identification information includes:
judging whether surgical consumables exist in the frame image of each frame according to the video information provided by the endoscope:
if so, acquiring identification information of the surgical consumables in that frame image, and judging whether surgical consumables exist in the frame images immediately before and after it;
if no surgical consumable exists in the frame image of the previous frame, using the identification information of the surgical consumables in the frame image of the current frame as the sixth identification information, and if no surgical consumable exists in the frame image of the subsequent frame, using the identification information of the surgical consumables in the frame image of the current frame as the seventh identification information.
10. A surgical robotic system, comprising: an endoscope, an identification platform and a main controller;
the main controller comprises an endoscope visual field identification module used for acquiring third identification information of the surgical consumables according to the video provided by the endoscope;
the identification platform is used for acquiring first identification information of preoperative surgical consumables and second identification information of postoperative surgical consumables;
the main controller is further configured to determine whether a difference between the first identification information and the second identification information is the same as the third identification information, and output a determination result;
the endoscope visual field identification module is used for acquiring sixth identification information when surgical consumables enter the endoscope visual field and seventh identification information when the surgical consumables leave the endoscope visual field according to video information provided by the endoscope, and acquiring third identification information according to the difference between the sixth identification information and the seventh identification information;
the sixth identification information and the seventh identification information are determined according to whether surgical consumables exist in frame images of a current frame, a frame before the current frame, and a frame after the current frame of the video of the endoscope.
11. The surgical robotic system of claim 10,
the endoscope visual field identification module is used for judging whether surgical consumables exist in each frame image according to the video information provided by the endoscope; if so, it acquires the identification information of the surgical consumables in that frame image and judges whether surgical consumables exist in the frame images immediately before and after it; if no surgical consumable exists in the frame image of the previous frame, the identification information of the surgical consumables in the frame image of the current frame is used as the sixth identification information, and if no surgical consumable exists in the frame image of the subsequent frame, the identification information of the surgical consumables in the frame image of the current frame is used as the seventh identification information.
12. The surgical robotic system of claim 11,
the endoscope visual field identification module is used for acquiring the characteristic information of the surgical consumables according to the frame image and further acquiring the identification information of the surgical consumables according to the characteristic information, or,
the endoscope visual field identification module is used for acquiring the unique additional features of the surgical consumables according to the frame images and acquiring the identification information of the surgical consumables according to the additional features.
13. The surgical robotic system of claim 10,
the identification platform comprises a supporting table surface, an image acquisition module and an identification processor;
the supporting table top is used for placing surgical consumables;
the image acquisition module is arranged above the supporting table top and is in communication connection with the identification processor; the image acquisition module is used for acquiring image information of surgical consumables placed on the supporting table top and transmitting the image information to the identification processor;
the identification processor is used for identifying and acquiring second identification information of the postoperative surgical consumables through the image information acquired by the image acquisition module.
14. The surgical robotic system of claim 10,
the identification platform comprises a supporting table surface, an image acquisition module and an identification processor;
the supporting table top is used for placing surgical consumables;
the image acquisition module is arranged above the supporting table top and is in communication connection with the identification processor; the image acquisition module is used for acquiring image information of surgical consumables placed on the supporting table top and transmitting the image information to the identification processor;
the identification processor is used for identifying and acquiring fourth identification information of the surgical consumables before operation through the image information acquired by the image acquisition module, and confirming the fourth identification information as the first identification information.
15. The surgical robot system according to claim 13 or 14, wherein the identification processor comprises an image recognition module and an image processing module, the image recognition module being in communication connection with both the image acquisition module and the image processing module; the image recognition module is configured to extract feature information of the surgical consumables from the image information acquired by the image acquisition module through a neural network and transmit the feature information to the image processing module; and the image processing module acquires the identification information of the surgical consumables according to the feature information.
16. The surgical robotic system of claim 15, wherein the image recognition module comprises: a preprocessing unit and an extraction unit;
the preprocessing unit is used for preprocessing the image information to reduce the noise of the image information;
the extraction unit is in communication connection with the preprocessing unit and is used for extracting the characteristic information of the surgical consumables in the image information preprocessed by the preprocessing unit through a trained neural network.
17. The surgical robotic system as claimed in claim 16, wherein the pre-processing unit includes: a characteristic selection subunit, a graying subunit and an illumination correction subunit;
the characteristic selection subunit is used for determining the image area where the characteristic information to be extracted from the surgical consumables is located; the graying subunit is used for performing graying processing on the image of the target area when the target area is a color image; the illumination correction subunit is used for correcting the grayed image of the target area when the illumination of the target area is not uniform.
18. The surgical robotic system of claim 15, wherein the image processing module includes a testing unit communicatively coupled to the image recognition module and configured to obtain the identification information of the surgical consumables from the feature information via a trained classifier.
19. A surgical robotic system as claimed in claim 10, further comprising an input assembly, wherein the identification platform includes an image acquisition module and an identification processor;
the input assembly is in communication connection with the identification processor, the input assembly is used for inputting input information of surgical consumables, and the identification processor is used for acquiring fifth identification information of the surgical consumables according to the input information;
the image acquisition module is in communication connection with the identification processor and is used for acquiring image information of the surgical consumables and transmitting the image information to the identification processor;
the identification processor is used for acquiring fourth identification information of preoperative surgical consumables through the image information acquired by the image acquisition module, judging whether the fourth identification information is the same as the fifth identification information or not, and determining the fourth identification information or the fifth identification information as the first identification information if the fourth identification information is the same as the fifth identification information.
CN202011591856.2A 2020-12-29 2020-12-29 Surgical consumable checking method and surgical robot system Active CN112704566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011591856.2A CN112704566B (en) 2020-12-29 2020-12-29 Surgical consumable checking method and surgical robot system

Publications (2)

Publication Number Publication Date
CN112704566A CN112704566A (en) 2021-04-27
CN112704566B true CN112704566B (en) 2022-11-25

Family

ID=75546176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011591856.2A Active CN112704566B (en) 2020-12-29 2020-12-29 Surgical consumable checking method and surgical robot system

Country Status (1)

Country Link
CN (1) CN112704566B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116035510B (en) * 2023-03-07 2023-07-11 杭州康基医疗器械有限公司 Medical endoscope camera shooting system and method capable of identifying surgical instruments
CN117373628A (en) * 2023-10-24 2024-01-09 山东第一医科大学附属眼科研究所(山东省眼科研究所、山东第一医科大学附属青岛眼科医院) High-value medical consumable management system and method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101571908A (en) * 2008-04-29 2009-11-04 伊西康内外科公司 RFID to prevent reprocessing
CN202036416U (en) * 2010-12-31 2011-11-16 稳健实业(深圳)有限公司 Safe surgical dressing and safe surgical dressing identification device
CN204950887U (en) * 2015-09-24 2016-01-13 郑州人民医院 A device is got in visual spy for abdominal cavity operation
CN111161875A (en) * 2019-12-31 2020-05-15 上海市肺科医院 Intelligent checking system for surgical auxiliary instrument

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US7180014B2 (en) * 2003-03-20 2007-02-20 Boris Farber Method and equipment for automated tracking and identification of nonuniform items
US9867669B2 (en) * 2008-12-31 2018-01-16 Intuitive Surgical Operations, Inc. Configuration marker design and detection for instrument tracking
CN101086748A (en) * 2006-06-10 2007-12-12 上海岱嘉医学信息系统有限公司 Management method of biological material products transplanted into physical body
EP2143038A4 (en) * 2007-02-20 2011-01-26 Philip L Gildenberg Videotactic and audiotactic assisted surgical methods and procedures
US9168104B2 (en) * 2008-06-23 2015-10-27 John Richard Dein Intra-operative system for identifying and tracking surgical sharp objects, instruments, and sponges
EP2169576A1 (en) * 2008-09-29 2010-03-31 BrainLAB AG Method for updating a status of an object used in medicine
DE102011016663A1 (en) * 2011-04-05 2012-10-11 How To Organize (H2O) Gmbh Device and method for identifying instruments
US20130113929A1 (en) * 2011-11-08 2013-05-09 Mary Maitland DeLAND Systems and methods for surgical procedure safety
FR3004330B1 (en) * 2013-04-10 2016-08-19 Analytic - Tracabilite Hospitaliere TRACEABILITY OF SURGICAL INSTRUMENTS IN A HOSPITAL ENCLOSURE


Also Published As

Publication number Publication date
CN112704566A (en) 2021-04-27

Similar Documents

Publication Publication Date Title
US20210157403A1 (en) Operating room and surgical site awareness
CN110215279B (en) Augmented surgical reality environment for robotic surgical system
CN106663318B (en) Augmenting surgical reality environment
JP4220780B2 (en) Surgery system
JP4822634B2 (en) A method for obtaining coordinate transformation for guidance of an object
JP2019010513A (en) System and method for glass state view in real-time three-dimensional (3d) cardiac imaging
JP2003265408A (en) Endoscope guide device and method
US20080255442A1 (en) Registration system and method
CN112704566B (en) Surgical consumable checking method and surgical robot system
CN111067468B (en) Method, apparatus, and storage medium for controlling endoscope system
CN111374688B (en) System and method for correcting medical scans
CN112712016B (en) Surgical instrument identification method, identification platform and medical robot system
EP3643265A1 (en) Loose mode for robot
CN114502092A (en) Physical medical element sizing system and method
CA3239159A1 (en) Surgery assisting system, surgery assisting method, and surgery assisting program
CN110573107B (en) Medical system and related method
CN116423547A (en) Surgical robot pedal control system, method, readable medium and surgical robot
US11185388B2 (en) Surgical video production system and surgical video production method
CN107496029B (en) Intelligent minimally invasive surgery system
CN116077087A (en) System and method for enabling ultrasound association of artificial intelligence
KR20180100831A (en) Method for controlling view point of surgical robot camera and apparatus using the same
CN114727860A (en) Physical medical element placement system
US20200034564A1 (en) Medical image processing apparatus and medical image processing method
CN208426174U (en) Intelligent Minimally Invasive Surgery device
CN115908349B (en) Automatic endoscope parameter adjusting method and device based on tissue identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant