WO2023065988A1 - Procédé et appareil de détection de collision, dispositif, et support de stockage lisible - Google Patents

Procédé et appareil de détection de collision, dispositif, et support de stockage lisible Download PDF

Info

Publication number
WO2023065988A1
WO2023065988A1 PCT/CN2022/121629 CN2022121629W WO2023065988A1 WO 2023065988 A1 WO2023065988 A1 WO 2023065988A1 CN 2022121629 W CN2022121629 W CN 2022121629W WO 2023065988 A1 WO2023065988 A1 WO 2023065988A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
target
position information
medical
target object
Prior art date
Application number
PCT/CN2022/121629
Other languages
English (en)
Chinese (zh)
Inventor
苗燕楠
彭晓宁
李自汉
江磊
王家寅
Original Assignee
上海微创医疗机器人(集团)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202111229335.7A external-priority patent/CN115998439A/zh
Priority claimed from CN202111667608.6A external-priority patent/CN114224512B/zh
Application filed by 上海微创医疗机器人(集团)股份有限公司 filed Critical 上海微创医疗机器人(集团)股份有限公司
Publication of WO2023065988A1 publication Critical patent/WO2023065988A1/fr

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • A61B34/35Surgical robots for telesurgery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration

Definitions

  • the present application relates to the field of intelligent medical technology, in particular to a collision detection method, device, equipment, and a readable storage medium.
  • the OBB bounding box is used to detect the collision between the robot arm and the arm, and between the joints of the robotic arm.
  • This collision detection method is mainly based on the establishment of a bounding box for the robotic arm, and is realized by detecting the collision between the bounding boxes.
  • OBB bounding box collision detection is mainly aimed at collision detection between rigid body structures with regular shapes of convex polyhedrons, and is not suitable for collision detection between instrument ends and soft tissues.
  • the six-dimensional torque sensor or joint torque sensor is bulky and generally difficult to install at the end of the device. It is often used in the collision detection between collaborative robots and the external environment, and it is not suitable for the collision detection of narrow lumens in the body.
  • the present application provides a collision detection method, the method comprising: acquiring at least two image information of a target object under different viewing angles; obtaining the spatial position of the target object according to the at least two image information; Obtaining position information of the medical instrument connected to the end of the instrument arm of the surgical robot; determining a collision between the medical instrument and the target object according to the spatial position of the target object and the position information of the medical instrument.
  • the present application provides another collision detection method, the method comprising: obtaining the current medical image; obtaining the position information of the target part of the medical device; obtaining a pre-processed standard medical image, the standard medical image carrying The position information of the target object; according to the first matching relationship between the current medical image and the standard medical image, the position information of the target object in the standard medical image is converted into the target medical image, and according to the current medical image and the The second matching relationship of the position information of the target part of the medical device is used to convert the position information of the target part of the medical device into the target medical image; according to the position information of the target object in the target medical image and the Collision detection is performed based on the position information of the target part of the medical device.
  • the present application provides a collision detection system.
  • the collision detection system includes a processor, a medical mirror, and a medical device.
  • the medical device is provided with a sensor, and the sensor is used to collect the location information, and send the location information of the target part of the collected medical device to the processor; the medical mirror is used to collect the current medical image, and send the current medical image to the processor; the The processor is used to execute the above collision detection method.
  • the present application provides a collision detection device, which includes: a current medical image acquisition module, used to acquire the current medical image; a position information acquisition module, used to acquire the position information of the target part of the medical device;
  • the image acquisition module is used to acquire the pre-processed standard medical image, the standard medical image carries the position information of the target object;
  • the matching relationship calculation module is used to calculate the current medical image and the first one of the standard medical image A matching relationship, calculating a second matching relationship between the current medical image and the position information of the target part of the medical device;
  • a conversion module configured to convert the position information of the target object in the standard medical image according to the first matching relationship Converting to the target medical image, converting the position information of the target part of the medical device into the target medical image according to the second matching relationship;
  • a collision detection module configured to Collision detection is performed on the position information of the object and the position information of the target part of the medical instrument.
  • the present application provides a computer device, including a memory and a processor, the memory stores a computer program, and when the processor executes the computer program, the steps of the method involved in any of the above-mentioned embodiments are implemented .
  • the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method involved in any one of the above-mentioned embodiments are implemented.
  • the above-mentioned collision detection method, device, equipment, and readable storage medium carry the position information of the target object in the standard medical image obtained by preprocessing, so that the target object in the standard medical image is The position information of the object is converted into the target medical image, and the position information of the target part of the medical device is converted into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical device, so that in the target Collision detection in medical images expands the scope of application of collision detection and is more intelligent, capable of detecting collisions between microsurgical instruments and soft tissues.
  • Fig. 1 is a system diagram of a collision detection system in an embodiment
  • Fig. 2 is a schematic diagram of the application scenario of the surgical robot system involved in the present application
  • FIG. 3 is a schematic diagram of a master device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a slave device according to an embodiment of the present application.
  • Figure 5a is a schematic diagram of a surgical instrument according to an embodiment of the present application.
  • Figure 5b is an enlarged view of part A of Figure 5a;
  • Fig. 6 is a schematic diagram of an operation field image of an embodiment of the present application.
  • FIG. 7 is a flowchart of a collision detection method for a surgical robot according to an embodiment of the present application.
  • Fig. 8 is a flow chart of the marking process of feature points in the embodiment of the present application.
  • Fig. 9 is a schematic diagram of the binocular vision of the embodiment of the present application.
  • Fig. 10 is a calculation principle diagram of binocular vision in the embodiment of the present application.
  • Fig. 11 is a schematic diagram of master-slave operation control of a robot of the present application.
  • Fig. 12 is a schematic flow chart of a collision detection method in an embodiment
  • Fig. 13 is a schematic diagram of a scene of standard medical image acquisition in an embodiment
  • Fig. 14 is an embodiment of an endoscopic image, a preoperative medical image and an endoscopic image, and the standard medical image is a fusion flow chart of the preoperative medical image;
  • Fig. 15 is a schematic diagram of a collision between a medical device and a target object in one embodiment
  • Fig. 16 is a schematic diagram of the spatial position of the medical device in the target operation area in one embodiment
  • Fig. 17a is a schematic diagram of the spatial relationship between the surgical instrument and the tissue according to the embodiment of the present application.
  • Figure 17b is an enlarged view of part B of Figure 17a;
  • Fig. 18 is a flow chart of the collision detection of the embodiment of the present application.
  • Fig. 19 is a schematic diagram of a collision safety visual warning according to an embodiment of the present application.
  • Fig. 20 is a schematic diagram of a sound and light warning for collision safety in an embodiment of the present application.
  • Fig. 21 is a schematic diagram of a ray collision detection method in an embodiment
  • Fig. 23 is a schematic diagram of a convex polygon collision detection method in an embodiment
  • Fig. 24 is a flowchart of a convex polygon collision detection method in an embodiment
  • Fig. 25 is a schematic diagram of a linear projection collision detection method in an embodiment
  • Fig. 26 is a schematic diagram of a sound warning for collision safety in an embodiment
  • Fig. 27 is a structural block diagram of a collision detection device in an embodiment
  • Figure 28 is a diagram of the internal structure of a computer device in one embodiment.
  • proximal end is usually the end close to the operator
  • distal end is usually the end close to the patient, that is, the end close to the lesion.
  • One end and “the other end” as well as “proximal end” and “distal end” usually refer to the corresponding two parts, which not only include the end point, the term “installation” , “connected” and “connected” should be understood in a broad sense.
  • connection can be fixed connection, detachable connection, or integrated; it can be mechanical connection or electrical connection; it can be direct connection or through
  • the intermediary is indirectly connected, which can be the internal communication of two elements or the interaction relationship between two elements.
  • an element is arranged on another element, usually only means that there is a connection, coupling, cooperation or transmission relationship between the two elements, and the relationship between the two elements can be direct or indirect through an intermediate element.
  • connection, coupling, fit or transmission but cannot be understood as indicating or implying the positional information relationship between two elements, that is, one element can be in any orientation such as inside, outside, above, below or on one side of another element, unless the content Also clearly point out.
  • the collision detection method provided in the embodiment of the present application may be applied to the collision detection system shown in FIG. 1 .
  • the collision detection system includes a processor, a medical mirror, and a medical device.
  • the medical device is equipped with a sensor, and the sensor is used to collect the position information of the target part of the medical device, and send the collected position information of the target part of the medical device to the processor.
  • the medical mirror is used to collect the current medical image and send the current medical image to the processor; the processor is used to execute the collision detection method below.
  • Figure 2 shows an application scenario of a surgical robot system, which includes a master-slave teleoperated surgical robot, that is, the surgical robot system includes a master device 100 (ie, a doctor-side control device), a slave device 200 (ie, the patient-side control device), a main controller, and a supporting device 400 (eg, an operating bed) for supporting the surgical object for surgery.
  • the supporting device 400 may also be replaced by other surgical operation platforms, which is not limited in the present application.
  • the master device 100 is an operation end of a teleoperated surgical robot, and includes a master operator 101 installed thereon.
  • the main operating hand 101 is used to receive the operator's hand movement information, which can be input as the movement control signal of the whole system.
  • the master controller is also disposed on the master device 100 .
  • the master device 100 further includes an imaging device 102, the imaging device 102 can provide a stereoscopic image for the operator, and provide an image of the surgical field for the operator to perform a surgical operation.
  • the image of the surgical field includes the type and quantity of surgical instruments, their posture in the abdomen, the shape and arrangement of blood vessels in the patient's organs and tissues, and the surrounding organs and tissues.
  • the master device 100 also includes a foot-operated surgical control device 103 , through which the operator can complete input of relevant operation instructions such as electrocution and electrocoagulation.
  • the slave device 200 is a specific execution platform of a teleoperated surgical robot, and includes a base 201 and a surgical execution component installed thereon.
  • the operation execution assembly includes an instrument arm 210 and instruments, and the instruments are hung or connected to the end of the instrument arm 210 .
  • the instruments include surgical instruments 221 (such as high-frequency electric knife, etc.) for specific surgical operations and endoscopes 222 for auxiliary observation; correspondingly, the instruments for connecting or mounting the endoscope 222 Arm 210 may be referred to as a mirror arm.
  • the instrument arm 210 includes an adjustment arm and a working arm.
  • the tool arm is a mechanical fixed point mechanism, which is used to drive the instrument to move around the mechanical fixed point, so as to perform minimally invasive surgical treatment or photographing on the patient 410 on the supporting device 400 .
  • the adjustment arm is used to adjust the pose of the mechanical fixed point in the working space.
  • the instrument arm 210 is a spatially configured mechanism with at least six degrees of freedom, which is used to drive the surgical instrument 221 to move around an active fixed point under program control.
  • the surgical instrument 221 is used to perform specific surgical operations, such as clamping, cutting, scissors and other operations, please refer to FIG. 5a and FIG. Agency 242.
  • the surgical instrument 221 can telescopically move along the axial direction of the instrument shaft 241; it can perform autorotation around the circumference of the instrument shaft 241, that is, form an autorotation joint 251; the operating mechanism 242 can perform pitching, yaw, and opening and closing movements, Pitch joints 252, yaw joints 253, and opening and closing joints 254 are respectively formed so as to realize various applications in surgical operations.
  • the surgical instrument 221 and the endoscope 222 have a certain volume in practice, the above-mentioned "fixed point” should be understood as a fixed area. Of course, those skilled in the art can understand the "fixed point" according to the prior art.
  • the master controller is connected to the master device 100 and the slave device 200 respectively, and is used to control the movement of the operation execution component according to the movement of the master operator 101.
  • the master controller includes a master-slave mapping module, the The master-slave mapping module is used to obtain the terminal pose of the master operator 101 and the predetermined master-slave mapping relationship to obtain the desired terminal pose of the surgical actuator, and then control the instrument arm 210 to drive the instrument to the desired terminal pose .
  • the master-slave mapping module is also used to receive instrument function operation instructions (such as electric cutting, electrocoagulation and other related operation instructions), and control the energy driver of the surgical instrument 221 to release energy to implement electric cutting, electrocoagulation and other surgical operations.
  • the main controller also accepts the force information received by the surgical execution component (for example, the force information of the human tissue and organ on the surgical instrument), and feeds back the force information received by the surgical execution component to the main operator 101 , so that the operator can feel the feedback force of the surgical operation more intuitively.
  • the surgical execution component for example, the force information of the human tissue and organ on the surgical instrument
  • the medical robot system also includes an image trolley 300 .
  • the image trolley 300 includes: an image processing unit (not shown) communicated with the endoscope 222 .
  • the endoscope 222 is used to acquire intracavity (referring to the patient's body cavity) surgical field images.
  • the image processing unit is used to perform image processing on the image of the operative field acquired by the endoscope 222 and transmit it to the imaging device 102 so that the operator can observe the image of the operative field.
  • the image trolley 300 further includes a display device 302 .
  • the display device 302 is connected in communication with the image processing unit, and is used to provide an auxiliary operator (such as a nurse) with real-time display of an operation field image or other auxiliary display information.
  • the surgical robot system also includes auxiliary components such as a ventilator, an anesthesia machine 500 and an instrument table 600 for use in surgery.
  • auxiliary components such as a ventilator, an anesthesia machine 500 and an instrument table 600 for use in surgery.
  • FIG. 6 shows a surgical operation space scene.
  • 3 to 4 surgical holes can be drilled on the body surface of the patient 410, and a poking card with a through hole can be installed and fixed.
  • the surgical instrument 221 and the inner The speculum 222 respectively enters the surgical operation space in the body through the poking holes.
  • the operator controls the end pose of the surgical instrument 221 through master-slave teleoperation under the guidance of the surgical field image.
  • the operator sits in front of the main device 100 located outside the sterile field, observes the image of the surgical field returned by the imaging device 102, and controls the movement of the surgical execution component by operating the main operating hand 101 to complete various operations.
  • Surgical operation During the operation, the posture of the operation hole will remain still (i.e. form a fixed point) to avoid crushing and damaging the surrounding tissues.
  • the operator The instrument 221 cuts, cuts, electrocoagulates, and sutures the lesion tissue to complete the predetermined surgical goal.
  • a plurality of surgical instruments 221 and endoscopes 222 penetrate into the narrow space of the human body through poking holes; the endoscope 222 can provide real-time feedback of the surgical instruments 221 and tissue image information.
  • the surgical instrument 221 is likely to collide with vulnerable key tissue parts (ie, target objects), such as arteries, heart valves, and the like.
  • This embodiment provides a collision detection method for a surgical robot, which includes:
  • Step S1 Obtain at least two image information of the target object in the surgical environment under different viewing angles
  • Step S2 Obtain the spatial position of the target object according to the at least two image information
  • Step S3 Obtain the position information of the surgical instrument connected to the end of the instrument arm of the surgical robot;
  • Step S4 Determine the collision between the surgical instrument and the target object according to the spatial position of the target object and the position information of the surgical instrument.
  • the surgical robot system includes at least two image acquisition units and a collision processing unit, the at least two image acquisition units are used to acquire at least two image information from different viewing angles, the instrument arm 210 and at least two image acquisition units
  • the units are respectively connected in communication with the collision processing unit; the collision processing unit is used to execute the above-mentioned collision detection method of the surgical robot.
  • the collision processing unit may be set on the slave device 200, or may be set on the master device 100, or be set independently.
  • the present application makes no particular limitation on the specific location of the collision processing unit.
  • the collision detection between the surgical instrument 221 and the target object can be realized, which can effectively improve the safety of the surgical operation and avoid accidentally injuring surrounding normal tissues , blood vessels and nerves to ensure the safety of surgical operations. Furthermore, due to the use of visual processing technology, the demand for sensing equipment is reduced, and the structure of the system is simplified.
  • the target object has feature points, and the feature points are determined by modeling based on medical images.
  • the medical image can be obtained by scanning the patient with a medical image scanning device (such as CT or MRI, etc.) before the operation.
  • image processing algorithms are used to complete the tissue modeling in the operation space and complete the three-dimensional reconstruction of the operation scene. According to the situation in the abdominal cavity, the operator can determine the outline of the key tissues that need special attention, and determine and mark the feature points before the operation.
  • step S1 the marking process of feature points may include:
  • Step S11 Obtain medical images of organs and tissues in the operating space; use endoscope, CT or MRI and other medical image scanning devices to scan and obtain medical images of organs and tissues in the operating space in the preoperative preparation stage;
  • Step S12 Tissue modeling and image calibration; establish a tissue model through a visual processing algorithm to obtain spatial position information of organs and tissues, and further complete image calibration through specific calibration points;
  • Step S13 Mark the feature points of key tissues; identify the important key tissue parts that need special attention through the machine learning algorithm or the doctor, so as to determine the outline and feature points of the target object that needs to be detected for collision, and mark them for intraoperative
  • the position of the feature point of the target object can be updated in real time.
  • special attention needs to be paid to vulnerable target parts, such as arteries, heart valves, etc.
  • vulnerable target parts such as arteries, heart valves, etc.
  • this part of the organization can be excluded from the target object without special attention.
  • the images P1 and P2 presented by two image acquisition units (respectively the first image acquisition unit 701 and the second image acquisition unit 702 in FIG. 9) to the same object P (in FIG.
  • the size of the parallax corresponds to the distance between the object and the two image acquisition units.
  • the three-dimensional coordinate information of the point P on the measured object in the coordinate system of the image acquisition device can be obtained.
  • the three-dimensional coordinate information of any feature point on the measured object in the coordinate system of the binocular image acquisition device can be obtained, and then the three-dimensional model of the measured object can be constructed.
  • the real-time 3D model of the target object it can be registered with the preoperative tissue model according to the common registration method in the field, so as to obtain the real-time position information of the feature points in the coordinate system of the image acquisition device , so that the spatial position of the target object can be obtained.
  • the image acquisition units are disposed on the endoscope 222, that is, the endoscope 222 is a 3D endoscope with at least two cameras.
  • the coordinate system of the image acquisition device is the endoscope coordinate system.
  • two 2D endoscopes with monocular cameras can be used as image acquisition units respectively.
  • the image acquisition unit may also be independent from the endoscope 222, which is not limited in this embodiment.
  • the operator controls the position and posture of the end of the instrument through master-slave teleoperation under the guidance of the endoscopic image.
  • the position of the end of the instrument includes the translational movement of the instrument along the X, Y, and Z directions, and the attitude includes the pitch, yaw, and rotation of the end of the instrument.
  • the present application also provides a collision detection method, which is illustrated in FIG. 12 by taking the method applied to the processor in FIG. 1 as an example, including the following steps:
  • a medical device refers to a surgical operation device
  • its target site may refer to an end of the surgical operation device.
  • the acquisition of the position information of the target part of the medical device may be obtained through kinematics calculation. For example, during master-slave surgery, based on the Cartesian end position and velocity of the master end, the response command Cartesian position and command velocity of the slave end instrument in the robot coordinate system are calculated.
  • S706 Convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, and according to the second matching relationship between the current medical image and the position information of the target part of the medical device The position information of the target part of the medical device is converted into the target medical image.
  • S707 Perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  • the first matching relationship is the matching relationship between the current medical image and the standard medical image, which mainly completes the matching between the current medical image and the standard medical image by adapting the positions of key tissue labeling points in the current medical image.
  • the second matching relationship is the matching between the current medical image and the position information of the target part of the medical device, which is mainly established through the derivation of robot kinematics.
  • the movement position information of the instrument arm and the endoscope arm including the joints
  • the position and speed information of the robot through the kinematics of the robot and the kinematics mapping relationship, calculate the second matching relationship between the pre-medical image and the position information of the target part of the medical device.
  • the first matching relationship before converting the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, it also includes: identifying the target object in the current medical image Processing objects: matching and fusing the target object in the standard medical image with the object to be processed to obtain a first matching relationship, the first matching relationship is used to convert the position information of the target object in the standard medical image to the target medical image.
  • the position information of the target part of the medical device into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical device it further includes:
  • the motion information of the medical mirror of the medical image and the motion information of the target part of the medical device are calculated by kinematic mapping to obtain the second matching relationship, and the second matching relationship is used to convert the position information of the target part of the medical device into the target medical image .
  • a three-dimensional intra-abdominal model is first established through preoperative medical images, and the spatial position information of key tissues is marked, and then the positions of key tissue marking points under the 3D endoscopic image are adapted to complete the registration of the two coordinate systems Fusion realizes the fusion of preoperative medical images and intraoperative 3D endoscopic images. Then, the motion position information of the instrument arm and the endoscope arm, including the position and speed information of each joint, is calculated through the robot kinematics and kinematics mapping relationship to obtain the position system of the instrument endoscope image coordinate system, realizing intraoperative Fusion of 3D endoscopic images with robot coordinate system. Based on the above two steps, the fusion of preoperative medical images, intraoperative 3D endoscopic images, and robot coordinate systems is realized, and the three-dimensional spatial position of surgical instruments in the surgical area is calculated in real time.
  • S707 Perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  • the distance between the position information of the target object and the position information of the target part of the medical device is calculated to determine whether a collision occurs, and if not, continue the detection for a next cycle.
  • Figure 15 is a schematic diagram of a collision between a medical instrument and a target object in an embodiment
  • Figure 16 is a schematic diagram of the spatial position of a medical instrument in a target operating area in an embodiment, in which
  • the doctor sees the real-time scene of the operation area through the endoscopic image.
  • the doctor can see the surgical lesion tissue (prostate as an example), and can also see some sensitive tissues and vascular tissues, but there are also other sensitive tissues and vascular and nerve tissues that are not within the field of view of the endoscope image. These sensitive tissues are easily injured if the instrument moves beyond the field of view of the endoscope.
  • collisions between surgical instruments moving beyond the field of view of the endoscope and invisible tissues and blood vessels can be effectively identified.
  • the collision between the surgical instrument and sensitive tissues, blood vessels and nerves, etc. is detected in real time.
  • a collision warning is issued and the position of the collision point in the abdominal cavity is fed back.
  • the pre-processed standard medical image carries the position information of the target object, so that the first matching relationship between the current medical image and the standard medical image is calculated, and the relationship between the current medical image and the position information of the target part of the medical device is calculated.
  • the second matching relationship converting the position information of the target object in the standard medical image into the target medical image according to the first matching relationship, and converting the position information of the target part of the medical device into the target medical image according to the second matching relationship, In this way, the collision detection in the target medical image can improve the detection accuracy and be more intelligent.
  • the collision detection is performed according to the position information of the target object in the target medical image and the position information of the target part of the medical device, including at least one of the following: according to the position information of the target object in the target medical image and the position information of the medical device The position information of the target part is based on the ray collision detection method for collision detection; according to the position information of the target object in the target medical image and the position information of the target part of the medical device, the collision detection is performed based on the convex polygon collision detection method; according to the target medical image The position information of the target object and the position information of the target part of the medical device are subjected to collision detection based on a linear projection collision detection method.
  • the target object includes four feature points O1, O2, O3, and O4.
  • the slave device 200 includes three instrument arms 210 (respectively instrument arms 210-1, instrument arm 210-2 and instrument arm 210-3), two surgical instruments 221 (surgical instrument 221a and surgical instrument 221b respectively), and an endoscope 222 with two image acquisition units.
  • the surgical instrument 221a has two instrument marking points T1 and T2
  • the surgical instrument 221b has two instrument marking points T3 and T4.
  • the image information of the tissue in the operation space can be obtained in real time by the endoscope 222, and updated in real time by the image processing unit, and the position information of each feature point in the endoscope coordinate system ⁇ Oe ⁇ can be obtained;
  • the endoscope 222 and the surgical instruments 221a and 221b are installed on the instrument arm 210, the base coordinate system of the base 201 of the slave device 200 is set to ⁇ Ob ⁇ , and the instrument arm base coordinate system of the instrument arm 210-1 is ⁇ Ob1 ⁇ , the instrument arm base coordinate system of the instrument arm 210-2 is ⁇ Ob2 ⁇ , and the instrument arm base coordinate system of the instrument arm 210-3 (ie, the scope arm) is ⁇ Ob3 ⁇ .
  • the position information of the feature point O1 in the base coordinate system ⁇ Ob ⁇ can be obtained from the kinematic relationship
  • the motion information of the marker points of the instrument includes velocity information, acceleration information, direction information, and the like.
  • the dotted line uses speed information as an example for illustration.
  • Velocity information of instrument marking points can be obtained by differential kinematics:
  • J is the Jacobian matrix of the equipment markers relative to the base coordinate system of the equipment arm; Indicates the influence matrix of joint i on the linear velocity of the instrument marker point; Represents the influence matrix of joint i on the angular velocity of the instrument marker point; Indicates the joint velocity of each joint in the instrument arm; v e indicates the velocity of the instrument marker.
  • the possible expected position information of the device marker point in the subsequent certain period of time can be obtained by integration:
  • p 0 represents the current position of the instrument marking point; Indicates the expected position of the instrument marker after time t n elapses.
  • the expected position information of the surgical instrument can be obtained, so as to determine the collision situation.
  • the collision situation includes the current collision situation and the expected collision situation.
  • step of determining the collision situation in step S4 includes:
  • Step S41 Take the feature point O o as the center of the sphere, the position of O o is (xo 0 , y o0 , z o0 ), and establish a sphere Co with Ro as the radius:
  • Step S42 Take the instrument marking point T o on the surgical instrument as the center of the sphere, the position of T o is (x t0 , y t0 , z t0 ), and use Rt as the radius to establish a sphere Ct:
  • the collision rule for the target object can be formulated according to the actual setting of the threshold P.
  • the target object includes M feature points, wherein N feature points are in contact with the instrument marker point, N is a natural number, M is a natural number not less than N, if the ratio of N to M is greater than Threshold P, P ⁇ (0,1], then it is determined that the surgical instrument 221 will collide with the target object.
  • the values of M, N, and P can be set according to the actual situation, and the smaller the P value, the The collision detection of the target object is more important.
  • step S4 the step of starting the security protection mechanism includes:
  • Step S44 setting a virtual boundary for the movement of the surgical instrument 221 according to the collision situation, and restricting the surgical instrument 221 from entering the range of the virtual boundary.
  • a virtual boundary limit is set according to the pre-collision information to prevent the surgical instrument 221 from moving to collide with the target object.
  • step S4 the step of alarming or prompting includes:
  • Step S45 In the imaging device 102 and/or the display device 302, add a text prompt of the collision information, and emphatically display the collision part between the surgical instrument 221 and the target object, such as red highlighting, etc., to help doctors and assistants Alarm or reminder is given by means of image display, as shown in Figure 19.
  • Step S46 flashing the warning light and/or prompting by sound.
  • a warning light is set at the instrument outer end of the tool arm of the instrument arm 210, if the surgical instrument 221 mounted or connected to the instrument arm 210 collides (that is, the current collision situation is a surgical instrument 221 collides with the target object), then flicker at a higher frequency, such as a 2Hz yellow light flicker. If the surgical instrument 221 mounted or connected to the instrument arm 210 is about to collide (that is, the expected collision situation is that the surgical instrument 221 collides with the target object), then flash at a lower frequency, such as a 1 Hz yellow light. Further, the instrument arm 210 can also be provided with an alarm sound prompting device to provide different levels of sound prompts, such as a 2 Hz sound prompt if a collision occurs, and a 1 Hz sound prompt if a collision is about to occur.
  • the collision detection is performed based on the ray collision detection method, including: generating the origin according to the position information of the target part of the medical device in the target medical image, and using the origin As the starting point, a ray is sent in the direction of movement of the medical device; according to the ray and the position information of the target object in the target medical image, it is judged whether the ray intersects the target object; when the ray intersects the target object, and the position of the intersection point and the position of the origin When the distance satisfies the preset condition, it is determined that the target object collides with the medical device.
  • Figure 21 is a schematic diagram of a ray collision detection method in an embodiment
  • Figure 22 is a flowchart of a ray collision detection method in an embodiment, in this embodiment, as shown in Figure 21 , select a position as the origin, launch a ray from the origin in a certain direction, calculate whether the ray path intersects with the surface of the object, and if there is an intersection point, calculate the distance between the intersection point and the origin.
  • the end of the instrument is taken as the origin, a ray is emitted toward the movement direction of the end of the instrument, and then the intersection point of the ray and the tissue surface is calculated and the distance between the intersection point and the origin is given.
  • intersection point means that there is no collision between the instrument and the tissue.
  • the distance between two points in three-dimensional space is calculated by the formula: Where (x 1 , y 1 , z 1 ) is the coordinates of point 1, and (x 2 , y 2 , z 2 ) is the coordinates of point 2.
  • the calculation of the intersection point of the ray and the object can be solved by establishing the parametric equations of the straight line and the geometric body and solving the equation system formed by them simultaneously. If the equation system has no solution, then there is no intersection point. If there is a solution, all the solutions correspond to all of their intersection points. coordinate.
  • the collision detection is performed based on a convex polygon collision detection method, including: according to the position information of the target part of the medical device in the target medical image Generate the first geometric body based on the position information; generate the second geometric body according to the position information of the target object in the target medical image; calculate the Minkowski difference between the first geometric body and the second geometric body; judge whether the target object collides with the medical device according to the Minkowski difference .
  • FIG. 23 is a schematic diagram of a convex polygon collision detection method in an embodiment
  • FIG. 24 is a flow chart of a convex polygon collision detection method in an embodiment.
  • the Minkowski difference between two geometries to be collided with is judged whether there is a collision between the two geometries according to whether the difference contains the origin (0,0).
  • S1 and S2 have collision overlapping parts, so the S3 geometry generated by their Minkowski difference includes the origin (0,0).
  • the Minkowski difference is calculated by taking the difference between the coordinates of a point in one geometry and all points in another geometry.
  • the cylinder at the end of the instrument is taken as one geometry, and the sensitive tissue is taken as another geometry, and then the Minkowski difference of the two geometry is calculated, if the origin is included, there is a collision, and if the origin is not included, there is no collision.
  • the collision detection is performed based on the linear projection collision detection method, including: according to the position information of the target part of the medical device in the target medical image Generating the first geometric body based on the position information; generating the second geometric body according to the position information of the target object in the target medical image; taking the moving direction of the medical device as the projection direction, calculating whether the projection parts of the first geometric body and the second geometric body overlap; when overlapping, then Determine the collision between the target object and the medical device.
  • FIG. 25 is a schematic diagram of a linear projection collision detection method in an embodiment.
  • it is judged by calculating whether the projections of two geometric bodies to be collision detected on a straight line overlap. Whether the geometry collides in the direction pointed by the projected line.
  • the end of the instrument can be regarded as a geometric body, and the sensitive tissue can be regarded as another geometric body, and then the instruction speed direction of the instrument is used as the direction of the projected line to project and calculate whether the projected parts of the two geometric bodies overlap. If so, then Indicates that the instrument collides with the tissue according to the current motion trend.
  • the collision detection is performed according to the position information of the target object in the target medical image and the position information of the target part of the medical device, including: when the target object collides with the medical device, outputting a perceivable
  • the collision warning includes at least one of a display reminder and a sound reminder.
  • the above collision detection method further includes: displaying the current medical image and/or the target medical image through a display device and/or an augmented reality device.
  • the display device may refer to a display device on a doctor's console, through which a current medical image and/or a target medical image may be displayed.
  • the system may also introduce an augmented reality device, through which the current medical image and/or the target medical image are displayed.
  • a three-dimensional digital scene inside the abdominal cavity is reconstructed by obtaining preoperative CT images/MR images, and the tissues in the surgical area, hidden nerves, blood vessels and sensitive tissues are marked, and preoperative medical images, endoscopic
  • the endoscope vision image and the position of the instrument in the robot coordinate system are fused to determine the spatial position and posture of the surgical instrument in the 3D surgical digital scene, so as to obtain the 3D spatial position relationship of the instrument, 3D endoscope, and human tissue and intuitively display it on the 3D surgical scene.
  • the surgical instrument collides with the sensitive tissue in the operation area, there is a risk of poking the sensitive tissue, especially when sensitive nerves, blood vessels, and tissues hidden in the invisible area of the endoscopic visual image, through the virtual surgical visual image (including AR ⁇ VR) prompts the risk of collision and prompts the collision and depth of the instrument and tissue through the gradual color change of the visual image and the sound, and feeds back the collision force between the instrument and the tissue to the main operating terminal to ensure the safety of the surgical operation.
  • the virtual surgical visual image including AR ⁇ VR
  • the master operator forms a master-slave control relationship with the robotic arm 201 and surgical instruments, thereby forming a master-slave mapping, and then calculates the three-dimensional model of the abdominal cavity, as well as the matching of preoperative medical images and endoscopic images. relationship, so as to fuse the three of them to determine whether the position between the current position of the end of the device and the sensitive tissue is less than the threshold value, if so, the device collides with the tissue, and the processing is performed according to the above process to issue an alarm. If the position between the current position of the end of the device and the sensitive tissue is not less than the threshold, then the device has not collided with the tissue, and the next cycle of detection continues.
  • the method further includes: acquiring the visual space coordinate system of the augmented reality device; calculating a third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image; according to the third The matching relationship displays the current medical image in the visual space of the augmented reality device; and/or displays the target medical image in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship and the third matching relationship.
  • the visual space coordinate system is the coordinate system of the display space of the augmented display device.
  • the visual space coordinate system of the augmented reality device and the current medical image are first calculated.
  • the third matching relationship between the image space coordinate systems and then display the current medical image in the visual space of the augmented reality device according to the third matching relationship; and/or according to the first matching relationship, the second matching relationship and the third matching relationship Display the target medical image in the visual space of the augmented reality device.
  • the target medical image is displayed, so that the doctor can be prompted where the collision occurred by displaying the fused image; when the target object and the medical device do not collide, the current medical image is displayed .
  • steps in the flow charts involved in the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily executed sequentially in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and these steps can be executed in other orders. Moreover, at least some of the steps in the flow charts involved in the above embodiments may include multiple steps or stages, and these steps or stages are not necessarily executed at the same time, but may be executed at different times, The execution order of these steps or stages is not necessarily performed sequentially, but may be performed in turn or alternately with other steps or at least a part of steps or stages in other steps.
  • an embodiment of the present application further provides a collision detection device for implementing the above-mentioned collision detection method.
  • the solution to the problem provided by the device is similar to the implementation described in the above method, so for the specific limitations in one or more embodiments of the collision detection device provided below, please refer to the definition of the collision detection method above, I won't repeat them here.
  • a collision detection device including: a current medical image acquisition module 1901 , a location information acquisition module 1902 , a standard medical image acquisition module 1903 , a conversion module 1904 and a collision detection module 1905 ,in:
  • a current medical image acquisition module 1901 configured to acquire a current medical image
  • a location information acquisition module 1902 configured to acquire the location information of the target part of the medical device
  • a standard medical image acquisition module 1903 configured to acquire a pre-processed standard medical image, where the standard medical image carries the position information of the target object;
  • the conversion module 1904 is configured to convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, and according to the position information of the current medical image and the target part of the medical device
  • the second matching relationship converts the position information of the target part of the medical device into the target medical image
  • the collision detection module 1905 is configured to perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  • the collision detection module 1905 is configured to perform collision detection according to at least one of the following: according to the position information of the target object in the target medical image and the position information of the target part of the medical device, the collision is performed based on the ray collision detection method Detection; according to the position information of the target object in the target medical image and the position information of the target part of the medical device, the collision detection is performed based on the convex polygon collision detection method; according to the position information of the target object in the target medical image and the position of the target part of the medical device Information, based on the linear projection collision detection method for collision detection.
  • the above-mentioned collision detection module 1905 includes: a ray generation unit, configured to generate an origin according to the position information of the target part of the medical device in the target medical image, and use the origin as a starting point to send a ray toward the moving direction of the medical device;
  • the intersection judging unit is used to judge whether the ray intersects the target object according to the ray and the position information of the target object in the target medical image;
  • the first collision result output unit is used for when the ray intersects the target object, and the position of the intersection point is the same as the origin
  • the distance between the positions satisfies the preset condition
  • the collision detection module 1905 includes: a first geometry generation unit, configured to generate a first geometry according to the position information of the target part of the medical device in the target medical image; a second geometry generation unit, configured to generate a first geometry according to the target The position information of the target object in the medical image generates a second geometric body; the Minkowski difference calculation unit is used to calculate the Minkowski difference between the first geometric body and the second geometric body; the second collision result output unit is used to calculate the Minkowski difference according to the Minkowski difference Determine whether the target object collides with the medical device.
  • the collision detection module 1905 includes: a third geometry generation unit, configured to generate the first geometry according to the position information of the target part of the medical device in the target medical image; a fourth geometry generation unit, configured to generate the first geometry according to the target The position information of the target object in the medical image generates a second geometric body; the overlapping judging unit is used to use the moving direction of the medical device as the projection direction, and calculate whether the projection parts of the first geometric body and the second geometric body overlap;
  • the third collision result output unit is configured to determine that the target object collides with the medical instrument when they overlap.
  • the above device further includes: an alarm module, configured to output a perceivable collision alarm when the target object collides with the medical device.
  • the above-mentioned device further includes: an initial medical image acquisition module, configured to acquire an initial medical image through a medical image scanning device; a standard medical image generation module, configured to identify the target object in the initial medical image, and mark the target The position of the object is used to obtain the position information of the target object, and the standard medical image is obtained according to the position information of the target object and the initial medical image.
  • the above device further includes: an identification unit, configured to identify the object to be processed in the current medical image; and a first matching relationship generating unit, configured to match and fuse the target object in the standard medical image with the object to be processed to obtain the first matching relationship.
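  • One common way to realize such a first matching relationship is a rigid point-based registration between landmarks of the target object marked in the standard medical image and corresponding landmarks of the object to be processed in the current medical image; the sketch below uses the Kabsch/SVD method and is an illustrative assumption rather than the specific fusion algorithm of this application:

    import numpy as np

    def rigid_registration(src_points, dst_points):
        """Kabsch/SVD estimate of R, t with R @ src + t ~ dst for corresponding
        3D landmarks (src: standard image, dst: current image)."""
        src = np.asarray(src_points, float)
        dst = np.asarray(dst_points, float)
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        h = (src - src_c).T @ (dst - dst_c)        # cross-covariance of centred landmarks
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against an improper (reflected) solution
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = dst_c - r @ src_c
        return r, t                                # the first matching relationship as (R, t)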
  • the above-mentioned device further includes: a second matching relationship generating unit, configured to perform a kinematic mapping calculation to obtain the second matching relationship between the current medical image and the position information of the target part of the medical instrument.
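  • A plausible, simplified reading of the kinematic mapping calculation is a chain of homogeneous transforms that carries the instrument-tip pose from the robot base frame into the image space of the current medical image; the joint model and the frame name T_image_base below are illustrative assumptions:

    import numpy as np

    def rot_z(theta):
        """Rotation about the z-axis as a 4x4 homogeneous transform."""
        c, s = np.cos(theta), np.sin(theta)
        m = np.eye(4)
        m[:2, :2] = [[c, -s], [s, c]]
        return m

    def trans(x, y, z):
        """Pure translation as a 4x4 homogeneous transform."""
        m = np.eye(4)
        m[:3, 3] = [x, y, z]
        return m

    def tip_pose_in_image(joint_angles, link_lengths, T_image_base):
        """Forward kinematics of a simplified serial chain (one rotation about z and one
        link translation per joint), mapped into the image space of the current medical image."""
        T = np.eye(4)
        for theta, length in zip(joint_angles, link_lengths):
            T = T @ rot_z(theta) @ trans(length, 0.0, 0.0)
        return T_image_base @ T            # pose of the instrument tip in image space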
  • the above-mentioned device further includes: a first display module, configured to display the current medical image and/or the target medical image through a display device and/or an augmented reality device.
  • the above device further includes: a first coordinate system acquisition module, configured to acquire the visual space coordinate system of the augmented reality device; a third matching relationship generation module, configured to calculate the third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image; and a second display module, configured to display the current medical image in the visual space of the augmented reality device according to the third matching relationship, and/or display the target medical image in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship and the third matching relationship.
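  • Displaying content in the visual space of the augmented reality device amounts to composing the matching relationships; the sketch below treats each relationship as a 4x4 homogeneous transform (an assumption for illustration) and maps a point from the standard medical image, or from instrument/kinematic coordinates, through the first or second relationship into the current medical image and then through the third relationship into the visual space coordinate system:

    import numpy as np

    def to_homog(p):
        """Lift a 3D point to homogeneous coordinates."""
        return np.append(np.asarray(p, float), 1.0)

    def target_object_in_visual_space(p_standard, T_current_from_standard, T_visual_from_current):
        """Standard-image point -> current image (first relationship) -> visual space (third)."""
        return (T_visual_from_current @ T_current_from_standard @ to_homog(p_standard))[:3]

    def instrument_tip_in_visual_space(p_kinematic, T_current_from_kinematic, T_visual_from_current):
        """Kinematic-space point -> current image (second relationship) -> visual space (third)."""
        return (T_visual_from_current @ T_current_from_kinematic @ to_homog(p_kinematic))[:3]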
  • the above device further includes: a third display module, configured to display the target medical image when the target object collides with the medical instrument; and display the current medical image when the target object does not collide with the medical instrument.
  • Each module in the above-mentioned collision detection device can be realized in whole or in part by software, hardware, or a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure may be as shown in FIG. 28 .
  • the computer device includes a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. The processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • When the computer program is executed by the processor, a collision detection method is realized.
  • the display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen;
  • the input device of the computer device may be a touch layer covering the display screen, a button, a trackball or a touchpad provided on the casing of the computer device, or an external keyboard, touchpad or mouse.
  • Figure 28 is only a block diagram of part of the structure related to the solution of this application, and does not constitute a limitation on the computer device to which the solution of this application is applied.
  • the specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, where a computer program is stored in the memory, and the processor implements the steps in the above method embodiments when executing the computer program.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
  • the volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory, etc.
  • RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • the non-relational database may include a blockchain-based distributed database, etc., but is not limited thereto.
  • the processors involved in the various embodiments provided by this application can be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, etc., and are not limited to this.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Robotics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to a collision detection method and apparatus, a device, and a readable storage medium. The method comprises: acquiring a current medical image (S703); acquiring position information of a target part of a medical instrument (S704); acquiring a standard medical image obtained by preprocessing, the standard medical image carrying position information of a target object (S705); converting, according to a first matching relationship between the current medical image and the standard medical image, the position information of the target object in the standard medical image into a target medical image, and converting, according to a second matching relationship between the current medical image and the position information of the target part of the medical instrument, the position information of the target part of the medical instrument into the target medical image (S706); and performing collision detection according to the position information of the target object and the position information of the target part of the medical instrument in the target medical image (S707). This can improve detection accuracy and is more intelligent.
PCT/CN2022/121629 2021-10-21 2022-09-27 Procédé et appareil de détection de collision, dispositif, et support de stockage lisible WO2023065988A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111229335.7A CN115998439A (zh) 2021-10-21 2021-10-21 手术机器人的碰撞检测方法、可读存储介质及手术机器人
CN202111229335.7 2021-10-21
CN202111667608.6 2021-12-30
CN202111667608.6A CN114224512B (zh) 2021-12-30 2021-12-30 碰撞检测方法、装置、设备、可读存储介质和程序产品

Publications (1)

Publication Number Publication Date
WO2023065988A1 true WO2023065988A1 (fr) 2023-04-27

Family

ID=86057909

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/121629 WO2023065988A1 (fr) 2021-10-21 2022-09-27 Procédé et appareil de détection de collision, dispositif, et support de stockage lisible

Country Status (1)

Country Link
WO (1) WO2023065988A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117179680A (zh) * 2023-09-11 2023-12-08 首都医科大学宣武医院 内镜导航系统

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102448680A (zh) * 2009-03-31 2012-05-09 直观外科手术操作公司 外科手术机器人的合成表征
WO2013101273A1 (fr) * 2011-12-30 2013-07-04 St. Jude Medical, Atrial Fibrillation Division, Inc. Système et procédé de détection et d'évitement des collisions pour robots médicaux
US20140243596A1 (en) * 2013-02-28 2014-08-28 Samsung Electronics Co., Ltd. Endoscope system and control method thereof
CN106426186A (zh) * 2016-12-14 2017-02-22 国网江苏省电力公司常州供电公司 一种基于多传感器信息融合的带电作业机器人自主作业方法
US20180357825A1 (en) * 2017-06-09 2018-12-13 Siemens Healthcare Gmbh Output of position information of a medical instrument
WO2020190832A1 (fr) * 2019-03-20 2020-09-24 Covidien Lp Systèmes de détection de collision chirurgicale robotique
CN112704564A (zh) * 2020-12-22 2021-04-27 上海微创医疗机器人(集团)股份有限公司 手术机器人系统、碰撞检测方法、系统及可读存储介质
CN112773506A (zh) * 2021-01-27 2021-05-11 哈尔滨思哲睿智能医疗设备有限公司 一种腹腔镜微创手术机器人的碰撞检测方法及装置
US20220071716A1 (en) * 2020-09-08 2022-03-10 Verb Surgical Inc. 3d visualization enhancement for depth perception and collision avoidance
CN114224512A (zh) * 2021-12-30 2022-03-25 上海微创医疗机器人(集团)股份有限公司 碰撞检测方法、装置、设备、可读存储介质和程序产品
CN114494602A (zh) * 2022-02-10 2022-05-13 苏州微创畅行机器人有限公司 碰撞检测方法、系统、计算机设备和存储介质

Similar Documents

Publication Publication Date Title
US20150320514A1 (en) Surgical robots and control methods thereof
JP5707449B2 (ja) コンピュータディスプレイ画面の境界領域に表示されるツール位置および識別指標
Hayashibe et al. Laser-scan endoscope system for intraoperative geometry acquisition and surgical robot safety management
EP3613547B1 (fr) Représentation de synthèse d'un robot chirurgical
JPH11309A (ja) 画像処理装置
CN111317568B (zh) 胸部成像、距离测量、手术感知以及通知系统和方法
JP7469120B2 (ja) ロボット手術支援システム、ロボット手術支援システムの作動方法、及びプログラム
US20230172679A1 (en) Systems and methods for guided port placement selection
US20230372014A1 (en) Surgical robot and motion error detection method and detection device therefor
JP2010200894A (ja) 手術支援システム及び手術ロボットシステム
WO2016195919A1 (fr) Positionnement d'instrument tridimensionnel précis
US11944395B2 (en) 3D visualization enhancement for depth perception and collision avoidance
CN114224512B (zh) 碰撞检测方法、装置、设备、可读存储介质和程序产品
WO2023065988A1 (fr) Procédé et appareil de détection de collision, dispositif, et support de stockage lisible
Mathur et al. A semi-autonomous robotic system for remote trauma assessment
US20180249953A1 (en) Systems and methods for surgical tracking and visualization of hidden anatomical features
US11771508B2 (en) Robotically-assisted surgical device, robotically-assisted surgery method, and system
CN115252140A (zh) 手术器械导引方法、手术机器人和介质
CN115998439A (zh) 手术机器人的碰撞检测方法、可读存储介质及手术机器人
CN114533263B (zh) 机械臂碰撞提示方法、可读存储介质、手术机器人及系统
US20210298854A1 (en) Robotically-assisted surgical device, robotically-assisted surgical method, and system
WO2023066019A1 (fr) Système robot chirurgical, procédé de commande de sécurité, dispositif esclave, et support lisible
US20240070875A1 (en) Systems and methods for tracking objects crossing body wallfor operations associated with a computer-assisted system
WO2023018685A1 (fr) Systèmes et procédés pour environnement d'interaction différencié
WO2023018684A1 (fr) Systèmes et procédés de mesure basée sur la profondeur dans une vue tridimensionnelle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882599

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE