WO2023065988A1 - Collision detection method and apparatus, device, and readable storage medium - Google Patents


Info

Publication number
WO2023065988A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
target
position information
medical
target object
Prior art date
Application number
PCT/CN2022/121629
Other languages
French (fr)
Chinese (zh)
Inventor
苗燕楠
彭晓宁
李自汉
江磊
王家寅
Original Assignee
上海微创医疗机器人(集团)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202111229335.7A external-priority patent/CN115998439A/en
Priority claimed from CN202111667608.6A external-priority patent/CN114224512B/en
Application filed by 上海微创医疗机器人(集团)股份有限公司 filed Critical 上海微创医疗机器人(集团)股份有限公司
Publication of WO2023065988A1 publication Critical patent/WO2023065988A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A61B34/35 Surgical robots for telesurgery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration

Definitions

  • the present application relates to the field of intelligent medical technology, in particular to a collision detection method, device, equipment, and a readable storage medium.
  • the OBB bounding box is used to detect collisions between robotic arms and between the joints of a robotic arm.
  • This collision detection method is mainly based on establishing a bounding box for the robotic arm, and is realized by detecting collisions between the bounding boxes.
  • OBB bounding box collision detection is mainly aimed at collisions between rigid structures with regular convex polyhedron shapes, and is not suitable for collision detection between an instrument end and soft tissue.
  • A six-dimensional torque sensor or joint torque sensor is bulky and generally difficult to install at the end of an instrument. It is often used for collision detection between collaborative robots and the external environment, and is not suitable for collision detection in narrow lumens inside the body.
  • the present application provides a collision detection method, the method comprising: acquiring at least two pieces of image information of a target object under different viewing angles; obtaining the spatial position of the target object according to the at least two pieces of image information; obtaining position information of a medical instrument connected to the end of an instrument arm of a surgical robot; and determining a collision between the medical instrument and the target object according to the spatial position of the target object and the position information of the medical instrument.
  • the present application provides another collision detection method, the method comprising: obtaining a current medical image; obtaining position information of a target part of a medical device; obtaining a pre-processed standard medical image, the standard medical image carrying position information of a target object; converting the position information of the target object in the standard medical image into a target medical image according to a first matching relationship between the current medical image and the standard medical image, and converting the position information of the target part of the medical device into the target medical image according to a second matching relationship between the current medical image and the position information of the target part of the medical device; and performing collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  • the present application provides a collision detection system.
  • the collision detection system includes a processor, a medical mirror, and a medical device.
  • the medical device is provided with a sensor, and the sensor is used to collect the position information of the target part of the medical device and send it to the processor; the medical mirror is used to collect the current medical image and send it to the processor; and the processor is used to execute the above collision detection method.
  • the present application provides a collision detection device, which includes: a current medical image acquisition module, used to acquire the current medical image; a position information acquisition module, used to acquire the position information of the target part of the medical device;
  • a standard medical image acquisition module, used to acquire the pre-processed standard medical image, which carries the position information of the target object;
  • a matching relationship calculation module, used to calculate a first matching relationship between the current medical image and the standard medical image, and a second matching relationship between the current medical image and the position information of the target part of the medical device;
  • a conversion module, configured to convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship, and to convert the position information of the target part of the medical device into the target medical image according to the second matching relationship;
  • a collision detection module, configured to perform collision detection according to the position information of the target object and the position information of the target part of the medical instrument.
  • the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and when the processor executes the computer program, the steps of the method involved in any of the above-mentioned embodiments are implemented.
  • the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method involved in any one of the above-mentioned embodiments are implemented.
  • in the above-mentioned collision detection method, device, equipment, and readable storage medium, the standard medical image obtained by preprocessing carries the position information of the target object, so that the position information of the target object in the standard medical image can be converted into the target medical image, and the position information of the target part of the medical device can be converted into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical device; performing collision detection in the target medical image in this way expands the scope of application of collision detection, is more intelligent, and is capable of detecting collisions between microsurgical instruments and soft tissue.
  • Fig. 1 is a system diagram of a collision detection system in an embodiment
  • Fig. 2 is a schematic diagram of the application scenario of the surgical robot system involved in the present application
  • FIG. 3 is a schematic diagram of a master device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a slave device according to an embodiment of the present application.
  • Figure 5a is a schematic diagram of a surgical instrument according to an embodiment of the present application.
  • Figure 5b is an enlarged view of part A of Figure 5a;
  • Fig. 6 is a schematic diagram of an operation field image of an embodiment of the present application.
  • FIG. 7 is a flowchart of a collision detection method for a surgical robot according to an embodiment of the present application.
  • Fig. 8 is a flow chart of the marking process of feature points in the embodiment of the present application.
  • Fig. 9 is a schematic diagram of the binocular vision of the embodiment of the present application.
  • Fig. 10 is a calculation principle diagram of binocular vision in the embodiment of the present application.
  • Fig. 11 is a schematic diagram of master-slave operation control of a robot of the present application.
  • Fig. 12 is a schematic flow chart of a collision detection method in an embodiment
  • Fig. 13 is a schematic diagram of a scene of standard medical image acquisition in an embodiment
  • Fig. 14 is a flow chart of the fusion of a preoperative medical image and an endoscopic image in an embodiment, in which the standard medical image is the preoperative medical image;
  • Fig. 15 is a schematic diagram of a collision between a medical device and a target object in one embodiment
  • Fig. 16 is a schematic diagram of the spatial position of the medical device in the target operation area in one embodiment
  • Fig. 17a is a schematic diagram of the spatial relationship between the surgical instrument and the tissue according to the embodiment of the present application.
  • Figure 17b is an enlarged view of part B of Figure 17a;
  • Fig. 18 is a flow chart of the collision detection of the embodiment of the present application.
  • Fig. 19 is a schematic diagram of a collision safety visual warning according to an embodiment of the present application.
  • Fig. 20 is a schematic diagram of a sound and light warning for collision safety in an embodiment of the present application.
  • Fig. 21 is a schematic diagram of a ray collision detection method in an embodiment
  • Fig. 23 is a schematic diagram of a convex polygon collision detection method in an embodiment
  • Fig. 24 is a flowchart of a convex polygon collision detection method in an embodiment
  • Fig. 25 is a schematic diagram of a linear projection collision detection method in an embodiment
  • Fig. 26 is a schematic diagram of a sound warning for collision safety in an embodiment
  • Fig. 27 is a structural block diagram of a collision detection device in an embodiment
  • Figure 28 is a diagram of the internal structure of a computer device in one embodiment.
  • "proximal end" usually refers to the end close to the operator, and "distal end" usually refers to the end close to the patient, that is, the end close to the lesion.
  • "one end" and "the other end", as well as "proximal end" and "distal end", usually refer to two corresponding parts, which include not only the end points; the terms "installed", "connected" and "coupled" should be understood in a broad sense.
  • a connection can be a fixed connection, a detachable connection, or an integral whole; it can be a mechanical connection or an electrical connection; it can be a direct connection, or an indirect connection through an intermediary, and it can be internal communication between two elements or an interactive relationship between two elements.
  • stating that an element is arranged on another element usually only means that there is a connection, coupling, cooperation or transmission relationship between the two elements, which can be direct or indirect through an intermediate element, and it cannot be understood as indicating or implying the positional relationship between the two elements; that is, one element can be inside, outside, above, below or on one side of another element, unless the content clearly indicates otherwise.
  • the collision detection method provided in the embodiment of the present application may be applied to the collision detection system shown in FIG. 1 .
  • the collision detection system includes a processor, a medical mirror, and a medical device.
  • the medical device is equipped with a sensor, and the sensor is used to collect the position information of the target part of the medical device, and send the collected position information of the target part of the medical device to the processor.
  • the medical mirror is used to collect the current medical image and send the current medical image to the processor; the processor is used to execute the collision detection method below.
  • Figure 2 shows an application scenario of a surgical robot system, which includes a master-slave teleoperated surgical robot, that is, the surgical robot system includes a master device 100 (ie, a doctor-side control device), a slave device 200 (ie, the patient-side control device), a main controller, and a supporting device 400 (eg, an operating bed) for supporting the surgical object for surgery.
  • the supporting device 400 may also be replaced by other surgical operation platforms, which is not limited in the present application.
  • the master device 100 is an operation end of a teleoperated surgical robot, and includes a master operator 101 installed thereon.
  • the master operator 101 is used to receive the operator's hand motion information, which serves as the motion control input of the whole system.
  • the master controller is also disposed on the master device 100 .
  • the master device 100 further includes an imaging device 102, the imaging device 102 can provide a stereoscopic image for the operator, and provide an image of the surgical field for the operator to perform a surgical operation.
  • the image of the surgical field includes the type and quantity of surgical instruments, their posture in the abdomen, the shape and arrangement of blood vessels in the patient's organs and tissues, and the surrounding organs and tissues.
  • the master device 100 also includes a foot-operated surgical control device 103, through which the operator can input relevant operation instructions such as electric cutting and electrocoagulation.
  • the slave device 200 is a specific execution platform of a teleoperated surgical robot, and includes a base 201 and a surgical execution component installed thereon.
  • the operation execution assembly includes an instrument arm 210 and instruments, and the instruments are hung or connected to the end of the instrument arm 210 .
  • the instruments include surgical instruments 221 (such as a high-frequency electric knife, etc.) for specific surgical operations and an endoscope 222 for auxiliary observation; correspondingly, the instrument arm 210 connecting or mounting the endoscope 222 may be referred to as a mirror arm.
  • the instrument arm 210 includes an adjustment arm and a working arm.
  • the working arm is a mechanical fixed-point mechanism, which is used to drive the instrument to move around a mechanical fixed point, so as to perform minimally invasive surgical treatment or imaging of the patient 410 on the supporting device 400.
  • the adjustment arm is used to adjust the pose of the mechanical fixed point in the working space.
  • the instrument arm 210 is a spatially configured mechanism with at least six degrees of freedom, which is used to drive the surgical instrument 221 to move around an active fixed point under program control.
  • the surgical instrument 221 is used to perform specific surgical operations, such as clamping, cutting, and shearing; referring to Fig. 5a and Fig. 5b, the surgical instrument 221 includes an instrument shaft 241 and an operating mechanism 242.
  • the surgical instrument 221 can move telescopically along the axial direction of the instrument shaft 241, and can rotate about the circumference of the instrument shaft 241, forming a rotation joint 251; the operating mechanism 242 can perform pitch, yaw, and opening-and-closing movements, forming a pitch joint 252, a yaw joint 253, and an opening-and-closing joint 254, respectively, so as to realize various applications in surgical operations.
  • since the surgical instrument 221 and the endoscope 222 have a certain volume in practice, the above-mentioned "fixed point" should be understood as a fixed area; of course, those skilled in the art can understand the "fixed point" according to the prior art.
  • the master controller is connected to the master device 100 and the slave device 200 respectively, and is used to control the movement of the operation execution component according to the movement of the master operator 101.
  • the master controller includes a master-slave mapping module, which is used to obtain the terminal pose of the master operator 101 and, using the predetermined master-slave mapping relationship, obtain the desired terminal pose of the surgical execution component, and then control the instrument arm 210 to drive the instrument to the desired terminal pose.
  • the master-slave mapping module is also used to receive instrument function operation instructions (such as electric cutting, electrocoagulation and other related operation instructions), and control the energy driver of the surgical instrument 221 to release energy to implement electric cutting, electrocoagulation and other surgical operations.
  • the main controller also accepts the force information received by the surgical execution component (for example, the force information of the human tissue and organ on the surgical instrument), and feeds back the force information received by the surgical execution component to the main operator 101 , so that the operator can feel the feedback force of the surgical operation more intuitively.
  • the medical robot system also includes an image trolley 300 .
  • the image trolley 300 includes: an image processing unit (not shown) communicated with the endoscope 222 .
  • the endoscope 222 is used to acquire intracavity (referring to the patient's body cavity) surgical field images.
  • the image processing unit is used to perform image processing on the image of the operative field acquired by the endoscope 222 and transmit it to the imaging device 102 so that the operator can observe the image of the operative field.
  • the image trolley 300 further includes a display device 302 .
  • the display device 302 is connected in communication with the image processing unit, and is used to provide an auxiliary operator (such as a nurse) with real-time display of an operation field image or other auxiliary display information.
  • the surgical robot system also includes auxiliary components such as a ventilator, an anesthesia machine 500 and an instrument table 600 for use in surgery.
  • FIG. 6 shows a surgical operation space scene.
  • 3 to 4 surgical holes can be made on the body surface of the patient 410, and a trocar with a through hole can be installed and fixed in each.
  • the surgical instrument 221 and the endoscope 222 respectively enter the surgical operation space in the body through the trocar holes.
  • the operator controls the end pose of the surgical instrument 221 through master-slave teleoperation under the guidance of the surgical field image.
  • the operator sits in front of the main device 100 located outside the sterile field, observes the image of the surgical field returned by the imaging device 102, and controls the movement of the surgical execution component by operating the main operating hand 101 to complete various operations.
  • During the operation, the pose at the operation hole remains stationary (i.e., forms a fixed point) to avoid crushing and damaging the surrounding tissue.
  • the operator uses the surgical instrument 221 to cut, electrocoagulate, and suture the lesion tissue to complete the predetermined surgical goal.
  • a plurality of surgical instruments 221 and endoscopes 222 penetrate into the narrow space of the human body through poking holes; the endoscope 222 can provide real-time feedback of the surgical instruments 221 and tissue image information.
  • the surgical instrument 221 is likely to collide with vulnerable key tissue parts (ie, target objects), such as arteries, heart valves, and the like.
  • This embodiment provides a collision detection method for a surgical robot, which includes:
  • Step S1 Obtain at least two image information of the target object in the surgical environment under different viewing angles
  • Step S2 Obtain the spatial position of the target object according to the at least two image information
  • Step S3 Obtain the position information of the surgical instrument connected to the end of the instrument arm of the surgical robot;
  • Step S4 Determine the collision between the surgical instrument and the target object according to the spatial position of the target object and the position information of the surgical instrument.
  • the surgical robot system includes at least two image acquisition units and a collision processing unit; the at least two image acquisition units are used to acquire at least two pieces of image information from different viewing angles, and the instrument arm 210 and the at least two image acquisition units are respectively connected in communication with the collision processing unit; the collision processing unit is used to execute the above-mentioned collision detection method of the surgical robot.
  • the collision processing unit may be set on the slave device 200, or may be set on the master device 100, or be set independently.
  • the present application makes no particular limitation on the specific location of the collision processing unit.
  • in this way, collision detection between the surgical instrument 221 and the target object can be realized, which can effectively improve the safety of the surgical operation and avoid accidental injury to surrounding normal tissues, blood vessels and nerves. Furthermore, because visual processing technology is used, the demand for sensing equipment is reduced and the structure of the system is simplified.
  • the target object has feature points, and the feature points are determined by modeling based on medical images.
  • the medical image can be obtained by scanning the patient with a medical image scanning device (such as CT or MRI, etc.) before the operation.
  • image processing algorithms are used to complete tissue modeling of the operation space and the three-dimensional reconstruction of the surgical scene; according to the situation in the abdominal cavity, the operator can determine the outline of the key tissues that need special attention, and determine and mark the feature points before the operation.
  • step S1 the marking process of feature points may include:
  • Step S11 Obtain medical images of organs and tissues in the operating space; use endoscope, CT or MRI and other medical image scanning devices to scan and obtain medical images of organs and tissues in the operating space in the preoperative preparation stage;
  • Step S12 Tissue modeling and image calibration; establish a tissue model through a visual processing algorithm to obtain spatial position information of organs and tissues, and further complete image calibration through specific calibration points;
  • Step S13 Mark the feature points of key tissues; identify the important key tissue parts that need special attention through a machine learning algorithm or by the doctor, so as to determine the outline and feature points of the target object for which collision needs to be detected, and mark them for intraoperative use.
  • the position of the feature point of the target object can be updated in real time.
  • special attention needs to be paid to vulnerable target parts, such as arteries, heart valves, etc.
  • tissues that do not require special attention can be excluded from the target object.
  • the images P1 and P2 presented by two image acquisition units (the first image acquisition unit 701 and the second image acquisition unit 702 in Fig. 9) of the same object P exhibit parallax, and the size of the parallax corresponds to the distance between the object and the two image acquisition units.
  • the three-dimensional coordinate information of the point P on the measured object in the coordinate system of the image acquisition device can be obtained.
  • the three-dimensional coordinate information of any feature point on the measured object in the coordinate system of the binocular image acquisition device can be obtained, and then the three-dimensional model of the measured object can be constructed.
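The disparity-to-depth computation described above can be sketched as follows. This is an illustrative Python sketch assuming an ideal, rectified pinhole stereo rig; the function name, focal length, and baseline values are hypothetical and not taken from the patent.

```python
def triangulate(f, baseline, xl, xr, yl):
    """Recover the 3D coordinates of a matched point pair from a rectified
    stereo rig (ideal pinhole model): disparity d = xl - xr, depth Z = f*B/d.
    Coordinates are expressed in the left-camera frame; the depth unit
    follows the baseline unit."""
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * baseline / d             # depth
    X = xl * Z / f                   # lateral offset
    Y = yl * Z / f                   # vertical offset
    return X, Y, Z

# Illustrative numbers: f = 800 px, baseline = 4 mm, disparity = 16 px
X, Y, Z = triangulate(800.0, 4.0, 40.0, 24.0, 8.0)
```

Repeating this for every matched feature point yields the point set from which the three-dimensional model of the measured object can be built.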
  • the real-time 3D model of the target object can be registered with the preoperative tissue model according to a registration method common in the field, so as to obtain the real-time position information of the feature points in the coordinate system of the image acquisition device, and thus the spatial position of the target object.
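The patent does not name a specific registration method; when point correspondences between the preoperative model and the intraoperative reconstruction are known, one common choice is least-squares rigid registration via SVD (the Kabsch algorithm). A hedged sketch with toy data:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid registration: find rotation R and translation t
    mapping the preoperative feature points `src` onto their intraoperative
    counterparts `dst` (both N x 3, correspondences known)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction to guarantee a proper rotation (no reflection)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: rotate points 90 degrees about Z and shift, then recover the motion.
src = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
dst = src @ Rz.T + np.array([5.0, 0, 0])
R, t = kabsch(src, dst)
```

The recovered (R, t) then maps any preoperative feature point into the image acquisition device's coordinate system.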
  • the image acquisition units are disposed on the endoscope 222, that is, the endoscope 222 is a 3D endoscope with at least two cameras.
  • the coordinate system of the image acquisition device is the endoscope coordinate system.
  • two 2D endoscopes with monocular cameras can be used as image acquisition units respectively.
  • the image acquisition unit may also be independent from the endoscope 222, which is not limited in this embodiment.
  • the operator controls the position and posture of the end of the instrument through master-slave teleoperation under the guidance of the endoscopic image.
  • the position of the end of the instrument includes the translational movement of the instrument along the X, Y, and Z directions, and the attitude includes the pitch, yaw, and rotation of the end of the instrument.
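The tip state described here (translation along X/Y/Z plus pitch, yaw, and rotation) can be packed into one homogeneous pose matrix. The rotation order below (yaw about Z, then pitch about Y, then roll about X) is one common convention, assumed for illustration only:

```python
import numpy as np

def tip_pose(x, y, z, pitch, yaw, roll):
    """Build the 4x4 homogeneous pose of the instrument tip from its
    translation and its pitch/yaw/roll angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Tip translated to (1, 2, 3) with a 90-degree yaw
T = tip_pose(1.0, 2.0, 3.0, 0.0, np.pi / 2, 0.0)
```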
  • the present application also provides a collision detection method, which is illustrated in FIG. 12 by taking the method applied to the processor in FIG. 1 as an example, including the following steps:
  • a medical device refers to a surgical operation device
  • its target site may refer to an end of the surgical operation device.
  • the position information of the target part of the medical device may be obtained through kinematics calculation; for example, during master-slave surgery, the commanded Cartesian position and commanded velocity of the slave-end instrument in the robot coordinate system are calculated based on the Cartesian end position and velocity of the master end.
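As a rough illustration of such a master-to-slave mapping: the scaling factor and the master-to-robot frame transform below are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def master_to_slave(master_pos, master_vel, scale=0.4, T_rm=None):
    """Map the master-side Cartesian tip position/velocity to the commanded
    slave-side position/velocity in the robot coordinate system. `scale` is
    a motion-scaling factor; T_rm is the homogeneous master-to-robot frame
    transform."""
    if T_rm is None:
        T_rm = np.eye(4)             # frames coincide in this toy setup
    p = np.append(scale * np.asarray(master_pos), 1.0)   # homogeneous point
    cmd_pos = (T_rm @ p)[:3]
    cmd_vel = scale * (T_rm[:3, :3] @ np.asarray(master_vel))
    return cmd_pos, cmd_vel

cmd_pos, cmd_vel = master_to_slave([10.0, 0.0, 0.0], [1.0, 0.0, 0.0], scale=0.5)
```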
  • S706 Convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, and according to the second matching relationship between the current medical image and the position information of the target part of the medical device The position information of the target part of the medical device is converted into the target medical image.
  • S707 Perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  • the first matching relationship is the matching relationship between the current medical image and the standard medical image, which mainly completes the matching between the current medical image and the standard medical image by adapting the positions of key tissue labeling points in the current medical image.
  • the second matching relationship is the matching between the current medical image and the position information of the target part of the medical device, which is mainly established through the derivation of robot kinematics.
  • based on the motion position information of the instrument arm and the endoscope arm, including the position and velocity information of each joint, the second matching relationship between the current medical image and the position information of the target part of the medical device is calculated through the robot kinematics and the kinematic mapping relationship.
  • before converting the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, the method further includes: identifying an object to be processed in the current medical image; and matching and fusing the target object in the standard medical image with the object to be processed to obtain the first matching relationship, which is used to convert the position information of the target object in the standard medical image into the target medical image.
  • before converting the position information of the target part of the medical device into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical device, the method further includes:
  • obtaining the second matching relationship by kinematic mapping of the motion information of the medical mirror that captures the medical image and the motion information of the target part of the medical device; the second matching relationship is used to convert the position information of the target part of the medical device into the target medical image.
  • a three-dimensional intra-abdominal model is first established from preoperative medical images, and the spatial position information of key tissues is marked; the positions of the key tissue marking points under the 3D endoscopic image are then adapted to complete the registration of the two coordinate systems, realizing the fusion of the preoperative medical image and the intraoperative 3D endoscopic image. Next, from the motion position information of the instrument arm and the endoscope arm, including the position and velocity information of each joint, the position of the instrument in the endoscope image coordinate system is calculated through the robot kinematics and the kinematic mapping relationship, realizing the fusion of the intraoperative 3D endoscopic image with the robot coordinate system. Based on these two steps, the fusion of the preoperative medical image, the intraoperative 3D endoscopic image, and the robot coordinate system is realized, and the three-dimensional spatial position of the surgical instrument in the surgical area is calculated in real time.
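The two-step fusion amounts to composing homogeneous transforms so that tissue landmarks (from the preoperative model) and the instrument tip (from robot kinematics) become comparable in a single frame. A minimal sketch; the transforms and point values below are made up for illustration:

```python
import numpy as np

def to_frame(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    return (T @ np.append(p, 1.0))[:3]

# Hypothetical transforms: T_endo_pre maps preoperative-image coordinates into
# the endoscope frame (obtained by registration); T_endo_robot maps robot-base
# coordinates into the endoscope frame (obtained from kinematics).
T_endo_pre = np.eye(4)
T_endo_pre[:3, 3] = [1.0, 2.0, 3.0]      # translation-only, for illustration
T_endo_robot = np.eye(4)
T_endo_robot[:3, 3] = [-1.0, 0.0, 0.0]

# A tissue landmark and the instrument tip, now comparable in one frame:
tissue_in_endo = to_frame(T_endo_pre, np.array([0.0, 0.0, 0.0]))
tip_in_endo = to_frame(T_endo_robot, np.array([2.0, 2.0, 3.0]))
gap = float(np.linalg.norm(tissue_in_endo - tip_in_endo))
```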
  • S707 Perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  • the distance between the position information of the target object and the position information of the target part of the medical device is calculated to determine whether a collision occurs; if not, detection continues in the next cycle.
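A minimal sketch of one such detection cycle; the threshold value and the point data are illustrative, not from the patent.

```python
import numpy as np

def check_collision(target_points, tip, threshold=5.0):
    """One detection cycle: compute the minimum distance between the target
    object's feature points (N x 3, e.g. in mm) and the instrument-tip
    position. Returns (collided, min_distance); a distance below the
    threshold would trigger a warning, and the closest point can be fed
    back as the collision location."""
    d = np.linalg.norm(np.asarray(target_points) - np.asarray(tip), axis=1)
    return bool(d.min() < threshold), float(d.min())

collided, dist = check_collision([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]],
                                 [12.0, 0.0, 0.0])
# The tip is 2 mm from the nearest feature point, inside the 5 mm threshold
```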
  • Figure 15 is a schematic diagram of a collision between a medical instrument and a target object in an embodiment
  • Figure 16 is a schematic diagram of the spatial position of a medical instrument in a target operating area in an embodiment, in which the doctor sees the real-time scene of the operation area through the endoscopic image.
  • the doctor can see the surgical lesion tissue (the prostate, as an example) and some sensitive tissues and vascular tissues, but there are also other sensitive tissues, vascular tissues, and nerve tissues that are not within the field of view of the endoscope image; these sensitive tissues are easily injured if the instrument moves beyond the field of view of the endoscope.
  • collisions between surgical instruments moving beyond the field of view of the endoscope and invisible tissues and blood vessels can be effectively identified.
  • the collision between the surgical instrument and sensitive tissues, blood vessels and nerves, etc. is detected in real time.
  • a collision warning is issued and the position of the collision point in the abdominal cavity is fed back.
  • the pre-processed standard medical image carries the position information of the target object, so the first matching relationship between the current medical image and the standard medical image is calculated, as is the second matching relationship between the current medical image and the position information of the target part of the medical device;
  • the position information of the target object in the standard medical image is converted into the target medical image according to the first matching relationship, and the position information of the target part of the medical device is converted into the target medical image according to the second matching relationship; performing collision detection in the target medical image in this way improves detection accuracy and is more intelligent.
  • the collision detection is performed according to the position information of the target object in the target medical image and the position information of the target part of the medical device, including at least one of the following, each performed according to that position information: collision detection based on a ray collision detection method; collision detection based on a convex polygon collision detection method; and collision detection based on a linear projection collision detection method.
  • the target object includes four feature points O1, O2, O3, and O4.
  • the slave device 200 includes three instrument arms 210 (instrument arm 210-1, instrument arm 210-2, and instrument arm 210-3), two surgical instruments 221 (surgical instrument 221a and surgical instrument 221b), and an endoscope 222 with two image acquisition units.
  • the surgical instrument 221a has two instrument marking points T1 and T2
  • the surgical instrument 221b has two instrument marking points T3 and T4.
  • the image information of the tissue in the operation space can be obtained in real time by the endoscope 222, and updated in real time by the image processing unit, and the position information of each feature point in the endoscope coordinate system ⁇ Oe ⁇ can be obtained;
  • the endoscope 222 and the surgical instruments 221a and 221b are installed on the instrument arms 210; the base coordinate system of the base 201 of the slave device 200 is set to {Ob}, the instrument arm base coordinate system of the instrument arm 210-1 is {Ob1}, the instrument arm base coordinate system of the instrument arm 210-2 is {Ob2}, and the instrument arm base coordinate system of the instrument arm 210-3 (i.e., the scope arm) is {Ob3}.
  • the position information of the feature point O1 in the base coordinate system ⁇ Ob ⁇ can be obtained from the kinematic relationship
  • the motion information of the marker points of the instrument includes velocity information, acceleration information, direction information, and the like.
  • the dotted line uses speed information as an example for illustration.
  • Velocity information of the instrument marking points can be obtained by differential kinematics: v_e = J(q)·q̇, where J = [J_v; J_ω] is the Jacobian matrix of the instrument marker point relative to the base coordinate system of the instrument arm; J_{v,i} denotes the influence matrix of joint i on the linear velocity of the instrument marker point; J_{ω,i} denotes the influence matrix of joint i on the angular velocity of the instrument marker point; q̇ denotes the joint velocity of each joint in the instrument arm; and v_e denotes the velocity of the instrument marker point.
  • the possible expected position of the instrument marker point over a subsequent period of time can be obtained by integration: p(t_n) = p_0 + ∫₀^{t_n} v_e dt, where p_0 represents the current position of the instrument marking point and p(t_n) represents the expected position of the instrument marker after time t_n elapses.
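The two steps above can be sketched numerically: map joint velocities to marker velocity through a linear-velocity Jacobian, then integrate to predict the marker position after time t_n. The Jacobian and joint velocities below are hypothetical values for a 2-joint arm; with constant velocity the integral reduces to p_0 + v·t_n, but a discrete sum is used to mirror the general time-varying case:

```python
def marker_velocity(jacobian, joint_velocities):
    """v_e = J(q) * q_dot: map joint velocities to marker linear velocity."""
    return [sum(row[i] * joint_velocities[i] for i in range(len(joint_velocities)))
            for row in jacobian]

def expected_position(p0, velocity, t_n, steps=100):
    """p(t_n) = p0 + integral of v dt, approximated as a discrete sum."""
    dt = t_n / steps
    p = list(p0)
    for _ in range(steps):
        for k in range(3):
            p[k] += velocity[k] * dt
    return tuple(p)

# Hypothetical 3x2 linear-velocity Jacobian for a 2-joint instrument arm
J_v = [[1.0, 0.0],
       [0.0, 1.0],
       [0.5, 0.5]]
q_dot = [2.0, 4.0]                       # joint velocities
v_e = marker_velocity(J_v, q_dot)        # -> [2.0, 4.0, 3.0]
p_t = expected_position((0.0, 0.0, 0.0), v_e, t_n=0.5)
print(v_e, p_t)                          # p_t is approximately (1.0, 2.0, 1.5)
```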
  • the expected position information of the surgical instrument can be obtained, so as to determine the collision situation.
  • the collision situation includes the current collision situation and the expected collision situation.
  • the step of determining the collision situation in step S4 includes:
  • Step S41: Take the feature point O_o as the center of the sphere, with position (x_o0, y_o0, z_o0), and establish a sphere Co with Ro as the radius: (x − x_o0)² + (y − y_o0)² + (z − z_o0)² = Ro².
  • Step S42: Take the instrument marking point T_o on the surgical instrument as the center of the sphere, with position (x_t0, y_t0, z_t0), and establish a sphere Ct with Rt as the radius: (x − x_t0)² + (y − y_t0)² + (z − z_t0)² = Rt².
  • the collision rule for the target object can be formulated according to the actual setting of the threshold P.
  • the target object includes M feature points, of which N feature points are in contact with the instrument marker point, where N is a natural number and M is a natural number not less than N; if the ratio of N to M is greater than the threshold P, P ∈ (0,1], then it is determined that the surgical instrument 221 collides with the target object.
  • the values of M, N, and P can be set according to the actual situation; the smaller the value of P, the greater the emphasis placed on collision detection for the target object.
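A minimal sketch of the sphere-based contact test of steps S41 and S42 combined with the N/M > P decision rule; the feature-point coordinates, radii, and threshold are hypothetical:

```python
import math

def spheres_collide(center_o, r_o, center_t, r_t):
    """Feature-point sphere Co (radius Ro) and instrument-marker sphere Ct
    (radius Rt) are in contact when the center distance is <= Ro + Rt."""
    return math.dist(center_o, center_t) <= r_o + r_t

def target_object_collides(feature_points, r_o, marker, r_t, threshold_p):
    """M feature points; if N of them contact the instrument marker and
    N/M exceeds threshold P in (0, 1], declare a collision."""
    m = len(feature_points)
    n = sum(1 for c in feature_points if spheres_collide(c, r_o, marker, r_t))
    return n / m > threshold_p

features = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (10.0, 0.0, 0.0), (11.0, 0.0, 0.0)]
marker = (0.5, 0.0, 0.0)
# Two of the four feature points are within contact range, so N/M = 0.5
print(target_object_collides(features, 0.4, marker, 0.2, 0.4))  # True
print(target_object_collides(features, 0.4, marker, 0.2, 0.6))  # False
```

Lowering P makes the rule trigger on fewer contacting feature points, matching the remark that a smaller P places more emphasis on that object.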
  • in step S4, the step of starting the safety protection mechanism includes:
  • Step S44 setting a virtual boundary for the movement of the surgical instrument 221 according to the collision situation, and restricting the surgical instrument 221 from entering the range of the virtual boundary.
  • a virtual boundary limit is set according to the pre-collision information to prevent the surgical instrument 221 from moving to collide with the target object.
  • in step S4, the step of alarming or prompting includes:
  • Step S45: In the imaging device 102 and/or the display device 302, add a text prompt of the collision information and emphatically display the collision part between the surgical instrument 221 and the target object (for example, with red highlighting), so that an alarm or reminder is given to doctors and assistants by means of the image display, as shown in Figure 19.
  • Step S46 flashing the warning light and/or prompting by sound.
  • a warning light is provided at the instrument outer end of the tool arm of the instrument arm 210; if the surgical instrument 221 mounted on or connected to the instrument arm 210 collides (that is, the current collision situation is that the surgical instrument 221 collides with the target object), the light flashes at a higher frequency, such as a 2 Hz yellow flash; if the surgical instrument 221 mounted on or connected to the instrument arm 210 is about to collide (that is, the expected collision situation is that the surgical instrument 221 collides with the target object), the light flashes at a lower frequency, such as a 1 Hz yellow flash. Further, the instrument arm 210 can also be provided with an alarm sound prompting device to provide different levels of sound prompts, such as a 2 Hz sound prompt if a collision occurs and a 1 Hz sound prompt if a collision is about to occur.
  • the collision detection is performed based on the ray collision detection method, including: generating an origin according to the position information of the target part of the medical device in the target medical image; using the origin as the starting point, emitting a ray in the direction of movement of the medical device; judging, according to the ray and the position information of the target object in the target medical image, whether the ray intersects the target object; and when the ray intersects the target object and the distance between the position of the intersection point and the position of the origin satisfies a preset condition, determining that the target object collides with the medical device.
  • Figure 21 is a schematic diagram of a ray collision detection method in an embodiment
  • Figure 22 is a flowchart of a ray collision detection method in an embodiment; in this embodiment, as shown in Figure 21, a position is selected as the origin, a ray is emitted from the origin in a certain direction, it is calculated whether the ray path intersects the surface of the object, and if there is an intersection point, the distance between the intersection point and the origin is calculated.
  • the end of the instrument is taken as the origin, a ray is emitted toward the movement direction of the end of the instrument, and then the intersection point of the ray and the tissue surface is calculated and the distance between the intersection point and the origin is given.
  • no intersection point means that there is no collision between the instrument and the tissue.
  • the distance between two points in three-dimensional space is calculated by the formula d = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²), where (x₁, y₁, z₁) are the coordinates of point 1 and (x₂, y₂, z₂) are the coordinates of point 2.
  • the intersection point of the ray and the object can be found by establishing the parametric equations of the straight line and the geometric body and solving the system of equations formed by them simultaneously; if the system has no solution, there is no intersection point, and if it has solutions, the solutions correspond to the coordinates of all the intersection points.
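For a spherical geometric body, the simultaneous parametric/implicit equations reduce to a quadratic in the ray parameter. A minimal sketch, with the tissue modelled as a sphere (a simplifying assumption for illustration):

```python
import math

def ray_sphere_intersection(origin, direction, center, radius):
    """Solve the ray p = o + s*d (s >= 0) against the sphere |p - c|^2 = r^2.
    Returns the distance from the origin to the nearest intersection along
    the ray, or None if the system of equations has no solution."""
    norm = math.sqrt(sum(x * x for x in direction))
    d = [x / norm for x in direction]            # unit direction, so a = 1
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                              # no solution: ray misses
    s = (-b - math.sqrt(disc)) / 2.0             # nearer root first
    if s < 0:
        s = (-b + math.sqrt(disc)) / 2.0
    return s if s >= 0 else None

# Instrument end at the origin, moving along +x toward hypothetical tissue
dist = ray_sphere_intersection((0, 0, 0), (1, 0, 0), (5, 0, 0), 1.0)
print(dist)  # 4.0
```

Comparing the returned distance against a preset threshold gives the collision decision described above.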
  • the collision detection is performed based on a convex polygon collision detection method, including: generating a first geometric body according to the position information of the target part of the medical device in the target medical image; generating a second geometric body according to the position information of the target object in the target medical image; calculating the Minkowski difference between the first geometric body and the second geometric body; and judging whether the target object collides with the medical device according to the Minkowski difference.
  • FIG. 23 is a schematic diagram of a convex polygon collision detection method in an embodiment
  • FIG. 24 is a flow chart of a convex polygon collision detection method in an embodiment.
  • the Minkowski difference between the two geometries to be collision-detected is calculated, and whether the two geometries collide is judged according to whether the difference contains the origin (0,0).
  • S1 and S2 have collision overlapping parts, so the S3 geometry generated by their Minkowski difference includes the origin (0,0).
  • the Minkowski difference is calculated by taking the difference between the coordinates of each point in one geometry and the coordinates of all points in the other geometry.
  • the cylinder at the end of the instrument is taken as one geometry and the sensitive tissue as another geometry; the Minkowski difference of the two geometries is then calculated: if it contains the origin, there is a collision, and if it does not, there is no collision.
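A minimal instance of the origin test: for axis-aligned bounding boxes the Minkowski difference is itself a box, so origin containment reduces to a per-axis interval check. Modelling the instrument tip and the tissue as boxes is a simplifying assumption for illustration:

```python
def minkowski_difference_aabb(box_a, box_b):
    """Minkowski difference A - B of two axis-aligned boxes, each given as
    (min_corner, max_corner); the difference of two AABBs is itself an AABB."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    lo = tuple(a_min[i] - b_max[i] for i in range(3))
    hi = tuple(a_max[i] - b_min[i] for i in range(3))
    return lo, hi

def contains_origin(box):
    """The two bodies overlap exactly when the difference contains (0,0,0)."""
    lo, hi = box
    return all(lo[i] <= 0.0 <= hi[i] for i in range(3))

# Overlapping instrument-tip and tissue boxes -> difference contains origin
tip = ((0.0, 0.0, 0.0), (2.0, 2.0, 2.0))
tissue = ((1.0, 1.0, 1.0), (3.0, 3.0, 3.0))
print(contains_origin(minkowski_difference_aabb(tip, tissue)))  # True

far_tissue = ((5.0, 5.0, 5.0), (6.0, 6.0, 6.0))
print(contains_origin(minkowski_difference_aabb(tip, far_tissue)))  # False
```

For general convex polyhedra the same origin test applies to the convex hull of all pairwise vertex differences, which is what algorithms such as GJK evaluate without enumerating every pair.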
  • the collision detection is performed based on the linear projection collision detection method, including: generating a first geometric body according to the position information of the target part of the medical device in the target medical image; generating a second geometric body according to the position information of the target object in the target medical image; taking the moving direction of the medical device as the projection direction and calculating whether the projections of the first geometric body and the second geometric body overlap; and when they overlap, determining that the target object collides with the medical device.
  • FIG. 25 is a schematic diagram of a linear projection collision detection method in an embodiment.
  • whether the geometric bodies collide in the direction pointed by the projection line is judged by calculating whether the projections, on that line, of the two geometric bodies to be collision-detected overlap.
  • the end of the instrument can be regarded as one geometric body and the sensitive tissue as another geometric body; the commanded velocity direction of the instrument is then used as the direction of the projection line, and it is calculated whether the projected parts of the two geometric bodies overlap; if so, the instrument will collide with the tissue according to the current motion trend.
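A minimal sketch of the projection test: project the vertices of both (convex) geometric bodies onto the commanded motion direction and check whether the resulting intervals overlap. The shapes and direction below are hypothetical:

```python
def project(vertices, axis):
    """Project a set of 3D vertices onto an axis; return the (min, max) interval."""
    dots = [sum(v[i] * axis[i] for i in range(3)) for v in vertices]
    return min(dots), max(dots)

def projections_overlap(shape_a, shape_b, direction):
    """Project both convex shapes onto the instrument's commanded motion
    direction and test whether the two intervals overlap."""
    a_min, a_max = project(shape_a, direction)
    b_min, b_max = project(shape_b, direction)
    return a_max >= b_min and b_max >= a_min

tip = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]    # instrument end (hypothetical)
tissue = [(0.5, 2, 0), (1.5, 2, 0), (1.0, 3, 0)]      # sensitive tissue (hypothetical)

motion = (1.0, 0.0, 0.0)       # commanded velocity direction of the instrument
print(projections_overlap(tip, tissue, motion))        # True: [0,1] vs [0.5,1.5]

motion_away = (0.0, 1.0, 0.0)
print(projections_overlap(tip, tissue, motion_away))   # False: [0,1] vs [2,3]
```

Note that overlap along a single axis indicates possible collision along the motion trend only; a full separating-axis test would check several axes.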
  • after the collision detection is performed according to the position information of the target object in the target medical image and the position information of the target part of the medical device, the method includes: when the target object collides with the medical device, outputting a perceivable collision alarm.
  • the collision warning includes at least one of a display reminder and a sound reminder.
  • the above collision detection method further includes: displaying the current medical image and/or the target medical image through a display device and/or an augmented reality device.
  • the display device may refer to a display device on a doctor's console, through which a current medical image and/or a target medical image may be displayed.
  • the system may also introduce an augmented reality device, through which the current medical image and/or the target medical image are displayed.
  • a three-dimensional digital scene inside the abdominal cavity is reconstructed from preoperative CT/MR images, and the tissues in the surgical area, together with hidden nerves, blood vessels, and sensitive tissues, are marked; the preoperative medical images, the endoscope vision image, and the position of the instrument in the robot coordinate system are then fused to determine the spatial position and posture of the surgical instrument in the 3D surgical digital scene, so as to obtain the 3D spatial position relationship of the instrument, the 3D endoscope, and the human tissue and intuitively display it in the 3D surgical scene.
  • when the surgical instrument collides with sensitive tissue in the operation area, there is a risk of poking the sensitive tissue, especially for sensitive nerves, blood vessels, and tissues hidden in the invisible area of the endoscopic visual image; the risk of collision is prompted through the virtual surgical visual image (including AR/VR), the collision between the instrument and the tissue and its depth are indicated through a gradual color change of the visual image and through sound, and the collision force between the instrument and the tissue is fed back to the main operating terminal to ensure the safety of the surgical operation.
  • the virtual surgical visual image includes AR/VR.
  • the master operator forms a master-slave control relationship with the robotic arm 201 and the surgical instruments, thereby forming a master-slave mapping; the three-dimensional model of the abdominal cavity and the matching relationship between the preoperative medical images and the endoscopic images are then calculated, so as to fuse the three and determine whether the distance between the current position of the end of the instrument and the sensitive tissue is less than the threshold; if so, the instrument collides with the tissue, the processing is performed according to the above process, and an alarm is issued; if the distance between the current position of the end of the instrument and the sensitive tissue is not less than the threshold, the instrument has not collided with the tissue, and the next cycle of detection continues.
  • the method further includes: acquiring the visual space coordinate system of the augmented reality device; calculating a third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image; according to the third The matching relationship displays the current medical image in the visual space of the augmented reality device; and/or displays the target medical image in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship and the third matching relationship.
  • the visual space coordinate system is the coordinate system of the display space of the augmented reality device.
  • the third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image is first calculated; the current medical image is then displayed in the visual space of the augmented reality device according to the third matching relationship, and/or the target medical image is displayed in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship, and the third matching relationship.
  • when the target object collides with the medical device, the target medical image is displayed, so that the doctor can be shown where the collision occurred via the fused image; when the target object and the medical device do not collide, the current medical image is displayed.
  • although the steps in the flow charts involved in the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless otherwise specified herein, there is no strict restriction on the execution order of these steps, and they can be executed in other orders. Moreover, at least some of the steps in these flow charts may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • an embodiment of the present application further provides a collision detection device for implementing the above-mentioned collision detection method.
  • the solution provided by the device is similar to the implementation described in the above method; therefore, for the specific limitations in one or more embodiments of the collision detection device provided below, refer to the definition of the collision detection method above, which will not be repeated here.
  • a collision detection device including: a current medical image acquisition module 1901, a location information acquisition module 1902, a standard medical image acquisition module 1903, a conversion module 1904, and a collision detection module 1905, in which:
  • a current medical image acquisition module 1901 configured to acquire a current medical image
  • a location information acquisition module 1902 configured to acquire the location information of the target part of the medical device
  • a standard medical image acquisition module 1903 configured to acquire a pre-processed standard medical image, where the standard medical image carries the position information of the target object;
  • the conversion module 1904 is configured to convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, and to convert the position information of the target part of the medical device into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical device;
  • the collision detection module 1905 is configured to perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  • the collision detection module 1905 is configured to perform collision detection according to at least one of the following, in each case according to the position information of the target object in the target medical image and the position information of the target part of the medical device: collision detection based on the ray collision detection method; collision detection based on the convex polygon collision detection method; and collision detection based on the linear projection collision detection method.
  • the above-mentioned collision detection module 1905 includes: a ray generation unit, configured to generate an origin according to the position information of the target part of the medical device in the target medical image, and use the origin as a starting point to send a ray toward the moving direction of the medical device;
  • the intersection judging unit is used to judge whether the ray intersects the target object according to the ray and the position information of the target object in the target medical image;
  • the first collision result output unit is used to determine that the target object collides with the medical device when the ray intersects the target object and the distance between the position of the intersection point and the position of the origin satisfies the preset condition.
  • the collision detection module 1905 includes: a first geometry generation unit, configured to generate a first geometric body according to the position information of the target part of the medical device in the target medical image; a second geometry generation unit, configured to generate a second geometric body according to the position information of the target object in the target medical image; a Minkowski difference calculation unit, configured to calculate the Minkowski difference between the first geometric body and the second geometric body; and a second collision result output unit, configured to determine whether the target object collides with the medical device according to the Minkowski difference.
  • the collision detection module 1905 includes: a third geometry generation unit, configured to generate the first geometric body according to the position information of the target part of the medical device in the target medical image; a fourth geometry generation unit, configured to generate the second geometric body according to the position information of the target object in the target medical image; and an overlap judging unit, configured to use the moving direction of the medical device as the projection direction and calculate whether the projections of the first geometric body and the second geometric body overlap;
  • the third collision result output unit is configured to determine that the target object collides with the medical instrument when they overlap.
  • the above device further includes: an alarm module, configured to output a perceivable collision alarm when the target object collides with the medical device.
  • the above-mentioned device further includes: an initial medical image acquisition module, configured to acquire an initial medical image through a medical image scanning device; a standard medical image generation module, configured to identify the target object in the initial medical image, and mark the target The position of the object is used to obtain the position information of the target object, and the standard medical image is obtained according to the position information of the target object and the initial medical image.
  • the above device further includes: an identification unit, configured to identify the object to be processed in the current medical image; a first matching relationship generating unit, configured to match the target object in the standard medical image with the object to be processed Fusion obtains the first matching relationship.
  • the above-mentioned device includes: a second matching relationship generating unit, configured to perform kinematic mapping calculation to obtain the second matching relationship.
  • the above-mentioned device further includes: a first display module, configured to display the current medical image and/or the target medical image through a display device and/or an augmented reality device.
  • the above device further includes: a first coordinate system acquisition module, configured to acquire the visual space coordinate system of the augmented reality device; a third matching relationship generation module, configured to calculate the visual space coordinate system of the augmented reality device and The third matching relationship between the image space coordinate systems of the current medical image; the second display module is used to display the current medical image in the visual space of the augmented reality device according to the third matching relationship; and/or according to the first matching relationship, The second matching relationship and the third matching relationship display the target medical image in the visual space of the augmented reality device.
  • the above device further includes: a third display module, configured to display the target medical image when the target object collides with the medical instrument; and display the current medical image when the target object does not collide with the medical instrument.
  • Each module in the above-mentioned collision detection device can be fully or partially realized by software, hardware and a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure may be as shown in FIG. 28 .
  • the computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected through a system bus. The processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • when the computer program is executed by the processor, a collision detection method is realized.
  • the display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen
  • the input device of the computer device may be a touch layer covered on the display screen, or a button, a trackball or a touch pad provided on the casing of the computer device , and can also be an external keyboard, touchpad, or mouse.
  • Figure 28 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution of this application is applied.
  • the specific computer device may include more or fewer components than shown in the figure, or combine some components, or have a different arrangement of components.
  • a computer device including a memory and a processor, where a computer program is stored in the memory, and the processor implements the steps in the above method embodiments when executing the computer program.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
  • Non-volatile memory can include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive variable memory (ReRAM), magnetic variable memory (Magnetoresistive Random Access Memory, MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (Phase Change Memory, PCM), graphene memory, etc.
  • the volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory, etc.
  • RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • the non-relational database may include a blockchain-based distributed database, etc., but is not limited thereto.
  • the processors involved in the various embodiments provided by this application can be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, etc., and are not limited to this.


Abstract

A collision detection method and apparatus, a device, and a readable storage medium. The method comprises: acquiring a current medical image (S703); acquiring position information of a target part of a medical instrument (S704); acquiring a standard medical image obtained by means of pre-processing, the standard medical image carrying position information of a target object (S705); converting the position information of the target object in the standard medical image into a target medical image according to a first matching relationship between the current medical image and the standard medical image, and converting the position information of the target part of the medical instrument into the target medical image according to a second matching relationship between the current medical image and the position information of the target part of the medical instrument (S706); and performing collision detection according to the position information of the target object and the position information of the target part of the medical instrument in the target medical image (S707). The method can improve detection accuracy and is more intelligent.

Description

Collision detection method, apparatus, device, and readable storage medium

Technical Field
The present application relates to the field of intelligent medical technology, and in particular to a collision detection method, apparatus, device, and readable storage medium.
Background Art
With the continuous development of robotics, more and more robots are being applied in the surgical field. Although traditional rigid-mechanism robots have been widely used in various medical operations, their adaptability, safety, and flexibility are relatively poor, and they can easily damage internal tissues of the human body during surgery. Collision detection technology is widely used in robotics; existing collision detection techniques mainly fall into two categories:
1) Collision detection between robot arms and between manipulator joints using OBB (oriented bounding box) bounding boxes. This method builds bounding boxes around the manipulator and detects collisions between the bounding boxes.
2) External-force collision detection using a six-dimensional torque sensor at the robot end effector or joint torque sensors. Based on the torque information fed back by the sensors, the external force on the robot end effector is computed, so that the change in external force produced when a collision occurs can be detected.
However, OBB bounding-box collision detection is mainly aimed at collisions between rigid bodies with regular convex polyhedral shapes and is not suitable for detecting collisions between instrument tips and soft tissue. Six-dimensional torque sensors and joint torque sensors are bulky and generally difficult to install at the instrument tip; they are common in collision detection between collaborative robots and the external environment and are likewise unsuitable for collision detection in narrow lumens inside the body.
In summary, existing collision detection techniques cannot be effectively used for collision detection between miniature surgical instruments and soft tissue, especially for invisible blood vessels, nerves, and sensitive tissue behind the endoscopic image.
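As an illustration of the bounding-box approach discussed above, the overlap test at its core can be sketched as follows (a minimal sketch using axis-aligned boxes for brevity, rather than the oriented boxes of a full OBB test; all names are illustrative):

```python
def aabb_overlap(box_a, box_b):
    """Return True if two axis-aligned bounding boxes overlap.

    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)); two boxes
    collide only if their extents overlap on every axis.
    """
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Hypothetical boxes around two neighboring manipulator links and one distant link.
link1 = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
link2 = ((0.5, 0.5, 0.5), (2.0, 2.0, 2.0))
link3 = ((3.0, 3.0, 3.0), (4.0, 4.0, 4.0))
```

This per-axis interval test is what makes box-based detection cheap for rigid, regularly shaped links, and also why it fits soft tissue poorly: deformable anatomy has no stable box.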
Summary of the Invention
Based on this, it is necessary to address the above technical problems by providing a collision detection method, apparatus, device, and readable storage medium that can broaden the applicable scope of collision detection and detect collisions between miniature surgical instruments and soft tissue.
In a first aspect, the present application provides a collision detection method, comprising: acquiring at least two pieces of image information of a target object from different viewing angles; obtaining the spatial position of the target object according to the at least two pieces of image information; acquiring position information of a medical instrument connected to the end of an instrument arm of a surgical robot; and determining a collision between the medical instrument and the target object according to the spatial position of the target object and the position information of the medical instrument.
In a second aspect, the present application provides another collision detection method, comprising: acquiring a current medical image; acquiring position information of a target part of a medical instrument; acquiring a pre-processed standard medical image, the standard medical image carrying position information of a target object; converting the position information of the target object in the standard medical image into a target medical image according to a first matching relationship between the current medical image and the standard medical image, and converting the position information of the target part of the medical instrument into the target medical image according to a second matching relationship between the current medical image and the position information of the target part of the medical instrument; and performing collision detection according to the position information of the target object and the position information of the target part of the medical instrument in the target medical image.
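The second method can be sketched as a sequence of coordinate-frame conversions followed by a proximity check (an illustrative sketch only: the two matching relationships are modeled here as 4x4 rigid transforms, and the 0.5 safety threshold is an assumed value, not one specified by the application):

```python
import numpy as np

def to_target_frame(points, transform):
    """Map Nx3 points into the target-image frame via a 4x4 homogeneous transform."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (transform @ homo.T).T[:, :3]

def min_distance(tip, object_points):
    """Smallest distance from the instrument's target part to the target object."""
    return float(np.min(np.linalg.norm(object_points - tip, axis=1)))

# Hypothetical matching relationships expressed as rigid transforms.
T_first = np.eye(4)                        # standard image -> target image
T_second = np.eye(4)                       # instrument frame -> target image
T_second[:3, 3] = [0.0, 0.0, 1.0]          # instrument tip offset 1 unit along z

object_pts = to_target_frame(np.array([[0.0, 0.0, 0.0]]), T_first)
tip = to_target_frame(np.array([[0.0, 0.0, 0.0]]), T_second)[0]

SAFETY_THRESHOLD = 0.5                     # assumed value for illustration
colliding = min_distance(tip, object_pts) < SAFETY_THRESHOLD
```

Once both positions live in the same target-image frame, collision detection reduces to a geometric test between them, which is the point of the two conversions.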
In a third aspect, the present application provides a collision detection system comprising a processor, a medical scope, and a medical instrument. A sensor is arranged on the medical instrument and is used to collect position information of the target part of the medical instrument and send the collected position information of the target part of the medical instrument to the processor; the medical scope is used to collect the current medical image and send the current medical image to the processor; and the processor is used to execute the collision detection method described above.
In a fourth aspect, the present application provides a collision detection apparatus, comprising: a current medical image acquisition module, used to acquire a current medical image; a position information acquisition module, used to acquire position information of a target part of a medical instrument; a standard medical image acquisition module, used to acquire a pre-processed standard medical image, the standard medical image carrying position information of a target object; a matching relationship calculation module, used to calculate a first matching relationship between the current medical image and the standard medical image and a second matching relationship between the current medical image and the position information of the target part of the medical instrument; a conversion module, used to convert the position information of the target object in the standard medical image into a target medical image according to the first matching relationship and to convert the position information of the target part of the medical instrument into the target medical image according to the second matching relationship; and a collision detection module, used to perform collision detection according to the position information of the target object and the position information of the target part of the medical instrument in the target medical image.
In a fifth aspect, the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the method involved in any one of the above embodiments.
In a sixth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method involved in any one of the above embodiments.
With the above collision detection method, apparatus, device, and readable storage medium, the pre-processed standard medical image carries the position information of the target object. The position information of the target object in the standard medical image is converted into the target medical image according to the first matching relationship between the current medical image and the standard medical image, and the position information of the target part of the medical instrument is converted into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical instrument. Collision detection is then performed in the target medical image, which broadens the applicable scope of collision detection, is more intelligent, and enables detection of collisions between miniature surgical instruments and soft tissue.
Brief Description of the Drawings
Fig. 1 is a system diagram of a collision detection system in an embodiment;

Fig. 2 is a schematic diagram of an application scenario of the surgical robot system involved in the present application;

Fig. 3 is a schematic diagram of a master device according to an embodiment of the present application;

Fig. 4 is a schematic diagram of a slave device according to an embodiment of the present application;

Fig. 5a is a schematic diagram of a surgical instrument according to an embodiment of the present application;

Fig. 5b is an enlarged view of part A of Fig. 5a;

Fig. 6 is a schematic diagram of a surgical field image according to an embodiment of the present application;

Fig. 7 is a flowchart of a collision detection method for a surgical robot according to an embodiment of the present application;

Fig. 8 is a flowchart of the feature point marking process according to an embodiment of the present application;

Fig. 9 is a schematic diagram of binocular vision according to an embodiment of the present application;

Fig. 10 is a diagram of the calculation principle of binocular vision according to an embodiment of the present application;

Fig. 11 is a schematic diagram of master-slave operation control of a robot of the present application;

Fig. 12 is a schematic flowchart of a collision detection method in an embodiment;

Fig. 13 is a schematic diagram of a scene of standard medical image acquisition in an embodiment;

Fig. 14 is a flowchart of fusing an endoscopic image with a preoperative medical image in an embodiment, where the standard medical image is a preoperative medical image;

Fig. 15 is a schematic diagram of a collision between a medical instrument and a target object in an embodiment;

Fig. 16 is a schematic diagram of the spatial position of a medical instrument in a target surgical area in an embodiment;

Fig. 17a is a schematic diagram of the spatial relationship between a surgical instrument and tissue according to an embodiment of the present application;

Fig. 17b is an enlarged view of part B of Fig. 17a;

Fig. 18 is a flowchart of collision detection according to an embodiment of the present application;

Fig. 19 is a schematic diagram of a collision safety visual warning according to an embodiment of the present application;

Fig. 20 is a schematic diagram of a collision safety audible and visual warning according to an embodiment of the present application;

Fig. 21 is a schematic diagram of a ray collision detection method in an embodiment;

Fig. 22 is a flowchart of a ray collision detection method in an embodiment;

Fig. 23 is a schematic diagram of a convex polygon collision detection method in an embodiment;

Fig. 24 is a flowchart of a convex polygon collision detection method in an embodiment;

Fig. 25 is a schematic diagram of a line projection collision detection method in an embodiment;

Fig. 26 is a schematic diagram of a collision safety audible warning in an embodiment;

Fig. 27 is a structural block diagram of a collision detection apparatus in an embodiment;

Fig. 28 is a diagram of the internal structure of a computer device in an embodiment.
Detailed Description of the Embodiments
In order to make the purpose, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
As used in this application, the singular forms "a", "an", and "the" include plural referents; the term "or" is generally used in a sense including "and/or"; the term "several" is generally used in a sense including "at least one"; and the term "at least two" is generally used in a sense including "two or more". In addition, the terms "first", "second", and "third" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first", "second", or "third" may explicitly or implicitly include one or at least two of that feature. The term "proximal end" generally refers to the end close to the operator, and the term "distal end" generally refers to the end close to the patient, i.e., close to the lesion; "one end" and "the other end", as well as "proximal end" and "distal end", generally refer to two corresponding parts, which include not only the end points. The terms "mounted", "connected", and "coupled" should be understood broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. In addition, as used in this application, the arrangement of one element on another generally only means that there is a connection, coupling, fit, or transmission relationship between the two elements, which may be direct or indirect through an intermediate element, and should not be understood as indicating or implying a positional relationship between the two elements; that is, one element may be in any orientation with respect to another, such as inside, outside, above, below, or on one side, unless the content clearly indicates otherwise. Those of ordinary skill in the art can understand the specific meanings of the above terms in this application according to the specific circumstances.
The collision detection method provided in the embodiments of the present application can be applied to the collision detection system shown in Fig. 1. The collision detection system includes a processor, a medical scope, and a medical instrument. A sensor is arranged on the medical instrument; the sensor is used to collect position information of the target part of the medical instrument and send the collected position information of the target part of the medical instrument to the processor. The medical scope is used to collect the current medical image and send the current medical image to the processor. The processor is used to execute the collision detection method described below.
Fig. 2 shows an application scenario of a surgical robot system. The surgical robot system includes a master-slave teleoperated surgical robot; that is, the surgical robot system includes a master device 100 (i.e., a doctor-side control device), a slave device 200 (i.e., a patient-side control device), a master controller, and a supporting device 400 (e.g., an operating table) for supporting the surgical subject during surgery. It should be noted that, in some embodiments, the supporting device 400 may also be replaced by another surgical operation platform, which is not limited by the present application.
Referring to Fig. 3, the master device 100 is the operation end of the teleoperated surgical robot and includes a master manipulator 101 mounted on it. The master manipulator 101 is used to receive the operator's hand motion information as the motion control signal input for the whole system. Optionally, the master controller is also arranged on the master device 100. Preferably, the master device 100 further includes an imaging device 102, which can provide the operator with a stereoscopic image and provide a surgical field image for the operator to perform surgical operations. The surgical field image includes the type, number, and intra-abdominal poses of the surgical instruments, as well as the morphology and arrangement of the patient's organs and tissues and of the surrounding organs, tissues, and blood vessels. Optionally, the master device 100 further includes a foot-operated surgical control device 103, through which the operator can input related operation instructions such as electrosurgical cutting and electrocoagulation.
Referring to Fig. 4, the slave device 200 is the specific execution platform of the teleoperated surgical robot and includes a base 201 and a surgical execution assembly mounted on it. The surgical execution assembly includes an instrument arm 210 and instruments, which are mounted on or connected to the end of the instrument arm 210. Further, the instruments include surgical instruments 221 (such as a high-frequency electric knife) for performing specific surgical operations and an endoscope 222 for auxiliary observation; correspondingly, the instrument arm 210 used to connect or mount the endoscope 222 may be called the scope-holding arm.
In one embodiment, the instrument arm 210 includes an adjustment arm and a working arm. The working arm is a mechanical fixed-point mechanism used to drive the instrument to move around a mechanical fixed point, so as to perform minimally invasive surgical treatment or imaging operations on the patient 410 on the supporting device 400. The adjustment arm is used to adjust the pose of the mechanical fixed point in the workspace. In another embodiment, the instrument arm 210 is a spatially configured mechanism with at least six degrees of freedom, used to drive the surgical instrument 221 to move around an active fixed point under program control. The surgical instrument 221 is used to perform specific surgical operations, such as clamping, cutting, and shearing. Referring to Fig. 5a and Fig. 5b, in one example, the surgical instrument 221 includes a transmission mechanism 240, an instrument shaft 241, and an operating mechanism 242. The surgical instrument 221 can move telescopically along the axial direction of the instrument shaft 241 and can rotate about the circumference of the instrument shaft 241, forming a rotation joint 251; the operating mechanism 242 can perform pitch, yaw, and opening/closing motions, forming a pitch joint 252, a yaw joint 253, and an opening/closing joint 254, respectively, so as to realize various applications in surgical operations. It should be noted that, since the surgical instrument 221 and the endoscope 222 have a certain volume in practice, the above-mentioned "fixed point" should be understood as a fixed region. Of course, those skilled in the art can understand the "fixed point" according to the prior art.
The master controller is communicatively connected to the master device 100 and the slave device 200, respectively, and is used to control the motion of the surgical execution assembly according to the motion of the master manipulator 101. Specifically, the master controller includes a master-slave mapping module, which is used to obtain the end pose of the master manipulator 101 and, according to a predetermined master-slave mapping relationship, obtain the desired end pose of the surgical execution assembly, and then control the instrument arm 210 to drive the instrument to the desired end pose. Further, the master-slave mapping module is also used to receive instrument function operation instructions (such as instructions related to electrosurgical cutting and electrocoagulation) and control the energy driver of the surgical instrument 221 to release energy for surgical operations such as cutting and coagulation. In some embodiments, the master controller also receives information on the forces exerted on the surgical execution assembly (for example, forces exerted by human tissues and organs on the surgical instrument) and feeds this information back to the master manipulator 101, so that the operator can more intuitively feel the feedback force of the surgical operation.
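A common element of such a master-slave mapping is motion scaling, which maps a displacement of the master manipulator to a smaller displacement of the instrument tip so that coarse hand motions produce fine instrument motions. A minimal sketch (the 0.3 scale factor and the function name are purely illustrative, not the application's specific mapping):

```python
def map_master_to_slave(master_delta, scale=0.3):
    """Scale a master-hand displacement into a slave instrument displacement.

    master_delta is an (dx, dy, dz) tuple of the master manipulator's end
    motion; the returned tuple is the commanded instrument-tip motion.
    """
    return tuple(scale * d for d in master_delta)
```

A full mapping would also transform orientation and apply workspace clutching, but the position-scaling step shown here is the core of the relationship between master and slave end poses.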
Further, the medical robot system also includes an image trolley 300. The image trolley 300 includes an image processing unit (not shown) communicatively connected to the endoscope 222. The endoscope 222 is used to acquire surgical field images inside the body cavity of the patient. The image processing unit is used to process the surgical field images acquired by the endoscope 222 and transmit them to the imaging device 102, so that the operator can observe the surgical field image. Optionally, the image trolley 300 further includes a display device 302. The display device 302 is communicatively connected to the image processing unit and is used to provide an auxiliary operator (such as a nurse) with a real-time display of the surgical field image or other auxiliary display information.
Optionally, in some surgical application scenarios, the surgical robot system also includes auxiliary components such as a ventilator and anesthesia machine 500 and an instrument table 600 for use during surgery. Those skilled in the art can select and configure these auxiliary components according to the prior art, and no further description is given here.
Referring to Fig. 6, which shows a surgical operation space scene: in one example, three to four surgical holes may be made in the body surface of the patient 410, and trocars with through-holes are installed and fixed; the surgical instrument 221 and the endoscope 222 respectively enter the surgical operation space inside the body through the trocar holes.
During normal surgical operation, the operator (e.g., the chief surgeon) controls the end pose of the surgical instrument 221 through master-slave teleoperation under the guidance of the surgical field image. During surgery, the operator sits in front of the master device 100, located outside the sterile field, observes the returned surgical field image through the imaging device 102, and controls the motion of the surgical execution assembly by operating the master manipulator 101 to complete various surgical operations. During the operation, the pose of the surgical hole remains stationary (i.e., forms a fixed point) to avoid crushing and damaging the surrounding tissue. Under the guidance of the surgical field image captured by the endoscope 222, the operator uses the surgical instrument 221 to cut, electrosurgically cut and coagulate, and suture the lesion tissue to achieve the intended surgical goal. In the surgical space, as shown in Fig. 6, multiple surgical instruments 221 and the endoscope 222 penetrate through the trocar holes into the narrow space inside the human body; the endoscope 222 can feed back image information of the surgical instruments 221 and tissue in the surgical field in real time. During surgery, the surgical instrument 221 can easily collide with vulnerable key tissue parts (i.e., target objects), such as arteries and heart valves.
To solve the problem of collisions between the surgical instrument 221 and the target object, referring to Fig. 7, this embodiment provides a collision detection method for a surgical robot, which includes:
Step S1: acquiring at least two pieces of image information of a target object in the surgical environment from different viewing angles;

Step S2: obtaining the spatial position of the target object according to the at least two pieces of image information;

Step S3: acquiring position information of the surgical instrument connected to the end of the instrument arm of the surgical robot;

Step S4: determining a collision between the surgical instrument and the target object according to the spatial position of the target object and the position information of the surgical instrument.
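Steps S1 and S2 amount to recovering a 3-D position from two views. For a rectified binocular pair this reduces to the standard disparity relation Z = f·b/d, which can be sketched as follows (an illustrative sketch assuming rectified cameras with known focal length and baseline; not the application's specific implementation):

```python
def triangulate(u_left, u_right, v, focal, baseline):
    """Recover a 3-D point from a rectified binocular pair.

    u_left / u_right: horizontal pixel coordinates of the same feature in
    the two views (measured from each image centre); v: vertical coordinate;
    focal: focal length in pixels; baseline: distance between the cameras.
    Depth follows from the disparity: Z = focal * baseline / (u_left - u_right).
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    z = focal * baseline / disparity
    x = u_left * z / focal
    y = v * z / focal
    return (x, y, z)
```

Running this over each marked feature point of the target object yields the spatial position used in step S4's collision determination.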
Optionally, the surgical robot system includes at least two image acquisition units and a collision processing unit. The at least two image acquisition units are used to acquire at least two pieces of image information from different viewing angles; the instrument arm 210 and the at least two image acquisition units are each communicatively connected to the collision processing unit, and the collision processing unit is used to execute the collision detection method for a surgical robot described above. The collision processing unit may be arranged on the slave device 200, on the master device 100, or independently; the present application does not specifically limit the location of the collision processing unit.
With this configuration, based on at least two pieces of image information from different viewing angles and the acquired position information of the surgical instrument 221, collision detection between the surgical instrument 221 and the target object can be realized, which can effectively improve the safety of the surgical operation, avoid accidental injury to surrounding normal tissues, blood vessels, and nerves, and ensure the safety of the operation. Further, because visual processing technology is used, the demand for sensing devices is reduced and the structure of the system is simplified.
Optionally, the target object has feature points, and the feature points are determined by modeling based on medical images. Referring to FIG. 13, in an alternative example, the medical images may be acquired preoperatively by scanning the patient with a medical imaging device (such as CT or MRI). In an optional embodiment, after the medical images are obtained, image processing algorithms complete the tissue modeling within the operating space and the three-dimensional reconstruction of the surgical scene. Before the operation, the operator may determine, according to the conditions in the abdominal cavity, the contours of the key tissues that require special attention, and may determine and mark the feature points.
Further, referring to FIG. 8, in step S1 the process of marking the feature points may include:
Step S11: Acquire medical images of the organs and tissues in the operating space. During the preoperative preparation stage, a medical imaging device such as an endoscope, CT, or MRI is used to scan and obtain these images;
Step S12: Tissue modeling and image calibration. A tissue model is established by a visual processing algorithm, yielding the spatial position information of the organs and tissues; image calibration may further be completed using specific calibration points;
Step S13: Mark the feature points of the key tissues. A machine learning algorithm, or the physician, identifies the important key tissue regions that require special attention, thereby determining the contours and feature points of the target objects for which collision detection is needed, and marks them so that the positions of the feature points of the target objects can be updated in real time during the operation. In one application scenario, among the organs and tissues in the operating space, special attention must be paid to easily injured target regions such as arteries and heart valves; general tissue that would not be harmed even by contact with a surgical instrument can be excluded from the target objects and given no special attention.
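The marking step above can be sketched as a small data structure with a criticality filter. The class, field names, and sample tissues below are illustrative assumptions, not part of the described system:

```python
from dataclasses import dataclass, field

@dataclass
class TargetObject:
    """A tissue region marked for collision detection (hypothetical model)."""
    name: str
    critical: bool                                       # e.g. artery, heart valve -> True
    feature_points: list = field(default_factory=list)   # (x, y, z) marker positions

def select_targets(tissues):
    # Step S13: keep only tissues needing special attention; general
    # tissue is excluded from the target objects.
    return [t for t in tissues if t.critical]

tissues = [
    TargetObject("artery", True, [(10.0, 2.0, 5.0), (11.0, 2.5, 5.2)]),
    TargetObject("fat pad", False, [(0.0, 0.0, 0.0)]),
]
targets = select_targets(tissues)
assert [t.name for t in targets] == ["artery"]
```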
In step S2, obtaining the spatial position of the target object from the at least two sets of image information includes: establishing a real-time three-dimensional model of the target object from the at least two sets of image information and, after registration with the tissue model, obtaining in real time the position information of the feature points in the coordinate system of the image acquisition device, thereby obtaining the spatial position of the target object.
Specifically, the real-time three-dimensional model of the target object may be built from the at least two sets of image information based on the principle of binocular vision, which is explained below with reference to FIGS. 9 and 10.
As shown in FIG. 9, the images P1 and P2 of the same object P formed by the two image acquisition units (the first image acquisition unit 701 and the second image acquisition unit 702 in FIG. 9, producing the first image 711 and the second image 712 respectively) differ from each other; this difference is called "parallax". The greater the parallax, the smaller the depth that can be detected; conversely, the smaller the parallax, the greater the depth that can be detected. The magnitude of the parallax thus corresponds to the distance between the object and the two image acquisition units.
Further, as shown in FIG. 10, the distance between the optical centers of the two image acquisition units, i.e., the baseline, is denoted b, and both units have focal length f. At the same moment, the two units observe the same point P(x, y, z) on the measured object. In the first image 711 and the second image 712 captured by the first image acquisition unit 701 and the second image acquisition unit 702, the images of point P are P_1 and P_2, with coordinates (x_l, y_l) and (x_r + b, y_r) respectively. From the principle of similar triangles, the following relation is obtained:
x_l = f·x/z,  x_r = f·(x − b)/z,  y_l = y_r = f·y/z   (1)

From formula (1), the following relations can be obtained:

z = b·f/(x_l − x_r)   (2)

x = b·x_l/(x_l − x_r) = z·x_l/f   (3)

y = b·y_l/(x_l − x_r) = z·y_l/f   (4)
According to formulas (2)-(4), the three-dimensional coordinates of point P on the measured object in the coordinate system of the image acquisition device can be obtained. Likewise, formulas (2)-(4) yield the three-dimensional coordinates of any feature point on the measured object in the coordinate system of the binocular image acquisition device, from which a three-dimensional model of the measured object can be constructed. After the real-time three-dimensional model of the target object is obtained, it can be registered with the preoperatively established tissue model using registration methods common in the art, yielding the real-time position information of the feature points in the coordinate system of the image acquisition device and hence the spatial position of the target object.
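The binocular triangulation relations described above can be sketched directly in code. The function name and numeric values are illustrative; rectified cameras with a common focal length are assumed:

```python
def triangulate(xl, yl, xr, b, f):
    """Recover (x, y, z) of a point from its left/right image coordinates
    using the binocular relations z = b*f/(xl - xr), x = b*xl/(xl - xr),
    y = b*yl/(xl - xr). Assumes rectified cameras, baseline b, focal length f."""
    d = xl - xr                      # disparity
    if d <= 0:
        raise ValueError("disparity must be positive")
    z = b * f / d
    x = b * xl / d
    y = b * yl / d
    return x, y, z

# A point at depth z = 100 with baseline b = 10 and focal length f = 50
# projects with disparity d = f*b/z = 5:
x, y, z = triangulate(xl=15.0, yl=10.0, xr=10.0, b=10.0, f=50.0)
assert (x, y, z) == (30.0, 20.0, 100.0)
```

Note the inverse relation between disparity and depth: halving the disparity doubles the recovered depth, which is why a larger parallax corresponds to a smaller detectable depth.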
Optionally, in some embodiments, at least two of the image acquisition units are arranged on the endoscope 222; that is, the endoscope 222 is a 3D endoscope having at least two cameras. In this case, the coordinate system of the image acquisition device is the endoscope coordinate system. In other embodiments, two 2D endoscopes, each with a monocular camera, may be configured as the image acquisition units. Of course, in still other embodiments, the image acquisition units may be independent of the endoscope 222; this embodiment is not limited in this respect.
In another embodiment, as shown in FIG. 11, during normal surgical operation the operator controls the position and attitude of the instrument tip through master-slave teleoperation under the guidance of the endoscopic image. The tip position includes the translational motion of the instrument along the X, Y, and Z directions, and the attitude includes the pitch, yaw, and rotation of the instrument tip.
The present application further provides a collision detection method, illustrated in FIG. 12 with the method applied to the processor in FIG. 1 as an example, including the following steps:
S703: Acquire the current medical image.
Specifically, the current medical image is a real-time medical image collected by the medical scope; it is captured by the image sensor of the medical scope after the scope has been inserted to the target surgical position.
S704: Acquire the position information of the target part of the medical instrument.
Specifically, the medical instrument refers to a surgical operating instrument, and its target part may be the tip of that instrument. The position information of the target part may be computed from kinematics. For example, during master-slave operation, the commanded Cartesian position and velocity of the slave-side instrument in the robot coordinate system are calculated from the Cartesian tip position and velocity of the master side.
S705: Acquire a preprocessed standard medical image, which carries the position information of the target object.
S706: Convert the position information of the target object in the standard medical image into the target medical image according to a first matching relationship between the current medical image and the standard medical image, and convert the position information of the target part of the medical instrument into the target medical image according to a second matching relationship between the current medical image and the position information of the target part of the medical instrument.
S707: Perform collision detection according to the position information of the target object and the position information of the target part of the medical instrument in the target medical image.
Specifically, the first matching relationship is the matching between the current medical image and the standard medical image; it is established mainly by fitting the positions of the key-tissue annotation points in the current medical image. The second matching relationship is the matching between the current medical image and the position information of the target part of the medical instrument; it is established mainly through robot kinematic derivation. Specifically, from the motion position information of the instrument arm and the endoscope arm, including the position and velocity of each joint, the second matching relationship between the current medical image and the position information of the target part of the medical instrument is computed through robot kinematics and the kinematic mapping relationship.
In one embodiment, before the position information of the target object in the standard medical image is converted into the target medical image according to the first matching relationship, the method further includes: identifying the object to be processed in the current medical image; and matching and fusing the target object in the standard medical image with the object to be processed to obtain the first matching relationship, which is used to convert the position information of the target object in the standard medical image into the target medical image.
In one embodiment, before the position information of the target part of the medical instrument is converted into the target medical image according to the second matching relationship, the method further includes: performing a kinematic mapping calculation based on the motion information of the medical scope that acquires the current medical image and the motion information of the target part of the medical instrument, to obtain the second matching relationship, which is used to convert the position information of the target part of the medical instrument into the target medical image.
Specifically, as shown in FIG. 14, in practical applications the current medical image is an endoscopic image, the standard medical image is a preoperative medical image, and the position information of the target part of the medical instrument is the position of the instrument tip. The first matching relationship is obtained by first acquiring preoperative medical images of the tissues in the operating space, identifying the tissue position information, reconstructing an intra-abdominal three-dimensional model from that information, and fusing the intra-abdominal three-dimensional model with the intraoperative 3D endoscopic image.
The second matching relationship is obtained as follows: the motion position information of the instrument arm and the endoscope arm is first acquired, and the instrument tip position is mapped through the forward kinematics of the robot to obtain the position of the instrument in the endoscopic image coordinate system, thereby realizing the fusion of the 3D endoscopic image and the robot coordinate system.
Specifically, the target medical image can be regarded as a fused image. It may be obtained by fusing the target object of the standard medical image and the medical instrument into the current medical image, or by fusing them into a new space; the space in which the target medical image resides is not limited here.
Specifically, after the first and second matching relationships are obtained, the position information of the target object in the standard medical image and the position information of the target part of the medical instrument are both converted into the same space according to those relationships, for example the space of the target medical image, so that the two sets of position information become comparable.
In the above embodiment, an intra-abdominal three-dimensional model is first established from preoperative medical images and the spatial positions of the key tissues are marked; the key-tissue annotation points are then fitted to their positions in the 3D endoscopic image, completing the registration and fusion of the two coordinate systems and realizing the fusion of preoperative medical images with intraoperative 3D endoscopic images. Next, from the motion position information of the instrument arm and the endoscope arm, including the position and velocity of each joint, the position of the instrument in the endoscopic image coordinate system is computed through robot kinematics and the kinematic mapping relationship, realizing the fusion of the intraoperative 3D endoscopic image with the robot coordinate system. Based on these two steps, the preoperative medical images, the intraoperative 3D endoscopic images, and the robot coordinate system are fused, and the three-dimensional spatial position of the surgical instrument within the operating region is calculated in real time.
S707: Perform collision detection according to the position information of the target object and the position information of the target part of the medical instrument in the target medical image.
Specifically, in the target medical image, the distance between the position of the target object and the position of the target part of the medical instrument is computed to judge whether a collision occurs; if not, detection continues in the next cycle.
Specifically, as shown in FIGS. 15 and 16, FIG. 15 is a schematic diagram of a collision between a medical instrument and a target object in one embodiment, and FIG. 16 is a schematic diagram of the spatial position of a medical instrument in the target operating region in one embodiment. In this embodiment, during the operation the physician sees the real-time scene of the operating region through the endoscopic image. Within the endoscopic field of view, the physician can see the lesion tissue (the prostate, for example) as well as some sensitive tissues and vascular tissue; however, other sensitive tissues and neurovascular tissue lie outside the endoscopic field of view, and if the instrument moves beyond that field of view it can easily injure them. By fusing the intra-abdominal three-dimensional model established from preoperative medical images, the endoscopic visual image, and the instrument position, it is possible to determine not only the relative spatial relationship of the surgical instrument to the lesion and sensitive tissues within the field of view, but also its positional relationship to the sensitive tissues and surrounding neurovascular structures that the physician cannot see through the endoscope.
With reference to FIG. 15, this embodiment can in particular effectively identify collisions that occur when a surgical instrument moves beyond the endoscopic field of view and strikes unseen tissues or blood vessels. Based on the spatial position of the surgical instrument within the surgical scene, collisions between the surgical instrument and sensitive tissues, blood vessels, nerves, and the like are detected in real time. When the spatial position of the instrument overlaps that of a tissue, a collision alarm is issued and the position of the collision point within the abdominal cavity is fed back.
In the above collision detection method, the preprocessed standard medical image carries the position information of the target object. The first matching relationship between the current medical image and the standard medical image, and the second matching relationship between the current medical image and the position information of the target part of the medical instrument, are computed; the position information of the target object in the standard medical image is converted into the target medical image according to the first matching relationship, and the position information of the target part of the medical instrument is converted into the target medical image according to the second matching relationship. Performing collision detection in the target medical image in this way improves the accuracy of detection and makes it more intelligent.
In one embodiment, performing collision detection according to the position information of the target object and the position information of the target part of the medical instrument in the target medical image includes at least one of the following: performing collision detection based on a ray collision detection method; performing collision detection based on a convex polygon collision detection method; and performing collision detection based on a line-projection collision detection method, in each case according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
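As one hedged illustration of the ray collision detection variant, the instrument tip's direction of motion can be cast as a ray and tested against a spherical bound around a target point. The test below is a generic ray-sphere intersection, not necessarily the exact method of the embodiment:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t >= 0) intersects the
    sphere of the given center and radius. Standard quadratic ray-casting test."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return False                         # the line misses the sphere entirely
    t1 = (-b - math.sqrt(disc)) / (2*a)      # nearer intersection parameter
    t2 = (-b + math.sqrt(disc)) / (2*a)      # farther intersection parameter
    return t1 >= 0 or t2 >= 0                # hit only if in front of the origin

# Instrument tip at the origin moving along +x toward a tissue bound at (5, 0, 0):
assert ray_hits_sphere((0, 0, 0), (1, 0, 0), (5, 0, 0), 1.0)
assert not ray_hits_sphere((0, 0, 0), (0, 1, 0), (5, 0, 0), 1.0)
```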
The steps of acquiring the spatial position of the target object and the position information of the surgical instrument 221 are described below through an example with reference to FIGS. 17a and 17b. In the example shown in FIG. 17b, the target object includes four feature points O1, O2, O3, and O4. In this surgical scenario, the slave device 200 includes three instrument arms 210 (instrument arms 210-1, 210-2, and 210-3), two surgical instruments 221 (surgical instruments 221a and 221b), and one endoscope 222 having two image acquisition units. Surgical instrument 221a carries two instrument marker points T1 and T2, and surgical instrument 221b carries two instrument marker points T3 and T4. Surgical instrument 221a is mounted on instrument arm 210-1, surgical instrument 221b is mounted on instrument arm 210-2, and the endoscope 222 is mounted on instrument arm 210-3, which is therefore also called the scope-holding arm.
During the operation, the endoscope 222 acquires image information of the tissues in the operating space in real time, and the image processing unit updates it in real time, yielding the position information of each feature point in the endoscope coordinate system {Oe};
The endoscope 222 and the surgical instruments 221a and 221b are all mounted on instrument arms 210. The base coordinate system of the base 201 of the slave device 200 is denoted {Ob}; the instrument-arm base coordinate systems of instrument arms 210-1 and 210-2 are {Ob1} and {Ob2} respectively; and the scope-holding-arm base coordinate system of instrument arm 210-3 (the scope-holding arm) is {Ob3}.
The pose of the endoscope 222 relative to the scope-holding-arm base coordinate system {Ob3} is computed, i.e., the pose of {Oe} in {Ob3}, denoted T_Oe^Ob3 (in the notation used here, T_B^A denotes the pose of frame {B} relative to frame {A}, and p_X^A denotes the position of point X in frame {A}). From the kinematic relationships, the position of feature point O1 in the base coordinate system {Ob} is then obtained:

p_O1^Ob = T_Ob3^Ob · T_Oe^Ob3 · p_O1^Oe   (5)

where T_Ob3^Ob is the pose of the scope-holding-arm base coordinate system {Ob3} relative to the base coordinate system {Ob}, and p_O1^Oe is the pose of feature point O1 in the endoscope coordinate system {Oe}. From this, the position of each instrument marker point of the surgical instruments 221 relative to its own base coordinate system is computed, i.e., the positions p_T1^Ob1 and p_T2^Ob1 of marker points T1 and T2 in the instrument-arm base coordinate system {Ob1}, and the positions p_T3^Ob2 and p_T4^Ob2 of marker points T3 and T4 in the instrument-arm base coordinate system {Ob2}. The current position of each marker point in the base coordinate system {Ob} then follows from the kinematic relationships:
p_T1^Ob = T_Ob1^Ob · p_T1^Ob1
p_T2^Ob = T_Ob1^Ob · p_T2^Ob1
p_T3^Ob = T_Ob2^Ob · p_T3^Ob2
p_T4^Ob = T_Ob2^Ob · p_T4^Ob2   (6)
From these, the current pose information of each surgical instrument 221 in the base coordinate system {Ob} is obtained.
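The frame chaining described above amounts to multiplying homogeneous transforms. A minimal sketch with made-up numeric transforms follows; the real transforms come from the robot's forward kinematics:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical pose of arm base {Ob1} in robot base {Ob}, and marker T1 in {Ob1}:
T_Ob_Ob1 = transform(np.eye(3), [0.5, 0.0, 0.2])   # pure translation for the sketch
p_T1_Ob1 = np.array([0.1, 0.0, 0.3, 1.0])          # homogeneous point

# p_T1^Ob = T_Ob1^Ob . p_T1^Ob1
p_T1_Ob = T_Ob_Ob1 @ p_T1_Ob1
assert np.allclose(p_T1_Ob[:3], [0.6, 0.0, 0.5])
```

The same multiplication, applied with the scope-holding-arm and endoscope transforms, maps feature points from the endoscope frame {Oe} into the base frame {Ob}.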
The motion information of the instrument marker points includes velocity information, acceleration information, direction information, and the like. Velocity information is taken as an example below. The velocity of an instrument marker point can be obtained from differential kinematics:
v_e = J(q) · q̇   (7)

J = [J_v1, …, J_vn; J_ω1, …, J_ωn]   (8)
where J is the Jacobian of the instrument marker point relative to the instrument-arm base coordinate system; J_vi is the matrix describing the influence of joint i on the linear velocity of the marker point; J_ωi is the matrix describing the influence of joint i on the angular velocity of the marker point; q̇ denotes the joint velocities of the joints of the instrument arm; and v_e is the velocity of the instrument marker point.
By integrating formulas (7) and (8) above, the possible expected position of the instrument marker point over a subsequent period of time can be obtained:
p(t_n) = p_0 + ∫_0^{t_n} v_e dt   (9)
where p_0 is the current position of the instrument marker point, and p(t_n) is the expected position of the marker point after time t_n has elapsed.
The expected position information of the surgical instrument is thus obtained, facilitating determination of the collision situation.
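For a short prediction horizon with the marker velocity held constant, the integral above reduces to p(t) = p_0 + v·t. A minimal sketch, with the Cartesian velocity assumed already computed from the Jacobian and all values illustrative:

```python
import numpy as np

def predict_position(p0, v, t):
    """Expected marker position after time t under a constant Cartesian
    velocity v: p(t) = p0 + v * t (one-step integration of the velocity)."""
    return np.asarray(p0, dtype=float) + np.asarray(v, dtype=float) * t

p0 = [0.10, 0.05, 0.20]      # current marker position (m)
v = [0.02, 0.00, -0.01]      # linear velocity of the marker (m/s), assumed given
p_expected = predict_position(p0, v, t=0.5)
assert np.allclose(p_expected, [0.11, 0.05, 0.195])
```

A time-varying velocity would instead be integrated numerically over the horizon, but the constant-velocity step already yields a usable short-term expected position.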
Since the position information of the surgical instrument 221 includes the current position information and the expected position information, the collision situation correspondingly includes the current collision situation and the expected collision situation. By judging whether the position of a feature point lies within the envelope region or the expected envelope region of the instrument marker points, the current and expected collision situations between the target object and the surgical instrument 221 are obtained.
Referring to FIG. 18, in one example, the step of determining the collision situation in step S4 includes:
Step S41: With the feature point O_o, located at (x_o0, y_o0, z_o0), as the center and R_o as the radius, establish a sphere C_o:
C_o: (x − x_o0)^2 + (y − y_o0)^2 + (z − z_o0)^2 = R_o^2   (10)
Step S42: With the instrument marker point T_o on the surgical instrument, located at (x_t0, y_t0, z_t0), as the center and R_t as the radius, establish a sphere C_t:
C_t: (x − x_t0)^2 + (y − y_t0)^2 + (z − z_t0)^2 = R_t^2   (11)
Step S43: If the distance D between the spheres C_o and C_t satisfies D < R_o + R_t, mark the feature point as in contact with the instrument marker point, indicating that the surgical instrument 221 collides with the target object; otherwise, mark the feature point as not in contact with the instrument marker point. See formulas (12) and (13):
D = √((x_t0 − x_o0)^2 + (y_t0 − y_o0)^2 + (z_t0 − z_o0)^2)   (12)
D < (R_o + R_t)   (13)
Further, a given target object may have multiple feature points. In this case, a collision rule for the target object can be formulated by setting a threshold P as appropriate. In one example, the target object includes M feature points, of which N are in contact with the instrument marker points, where N is a natural number and M is a natural number not less than N. If the ratio of N to M is greater than the threshold P, with P ∈ (0, 1], it is determined that the surgical instrument 221 is about to collide with the target object. The values of M, N, and P can all be set as appropriate, and a smaller P indicates that collision detection for that target object is more important.
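Steps S41-S43 and the threshold rule can be sketched as follows; the radii, coordinates, and threshold value are illustrative assumptions:

```python
import math

def spheres_touch(p_feature, p_marker, r_o, r_t):
    """Formulas (12)-(13): contact when the center distance D < R_o + R_t."""
    D = math.dist(p_feature, p_marker)
    return D < r_o + r_t

def collision_predicted(feature_points, marker, r_o, r_t, P=0.5):
    """Threshold rule: with N of M feature points in contact with the
    marker, a collision is declared when N / M > P, P in (0, 1]."""
    M = len(feature_points)
    N = sum(spheres_touch(fp, marker, r_o, r_t) for fp in feature_points)
    return N / M > P

features = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 0.0, 0.0)]
marker = (0.05, 0.0, 0.0)
# The first two feature points lie within R_o + R_t = 0.2 of the marker,
# the third does not, so N/M = 2/3 > P = 0.5 and a collision is declared:
assert collision_predicted(features, marker, r_o=0.1, r_t=0.1)
```

Lowering P makes the rule fire on fewer contacting points, matching the remark that a smaller P corresponds to a more important target object.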
Optionally, in step S4, the step of activating the safety protection mechanism includes:
Step S44: Set a virtual boundary for the motion of the surgical instrument 221 according to the collision situation, and restrict the surgical instrument 221 from entering the range of the virtual boundary. In one example, after the expected motion information of the slave device 200 is obtained from the motion information of the master device 100 via master-slave mapping, a virtual boundary limit is set according to the pre-collision information to prevent the surgical instrument 221 from moving to a position where it would collide with the target object; at the same time, according to the collision situation and the expected motion command of the slave device 200, the instrument arm 210 and the surgical instrument 221 of the slave device 200 are moved away from the collision position.
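A minimal sketch of the virtual boundary limit of step S44, assuming the boundary is modeled as a sphere around the predicted collision point (the function and its arguments are illustrative, not the embodiment's implementation):

```python
import math

def limit_motion(desired_tip, boundary_center, boundary_radius):
    """Virtual-boundary sketch: if the desired tip position obtained from
    master-slave mapping falls inside the virtual boundary around the predicted
    collision point, project it back onto the boundary surface so the
    instrument cannot enter the protected region."""
    v = [d - c for d, c in zip(desired_tip, boundary_center)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist >= boundary_radius:
        return desired_tip                    # outside the boundary: allowed
    if dist == 0.0:                           # degenerate case: push out along x
        return (boundary_center[0] + boundary_radius,
                boundary_center[1], boundary_center[2])
    scale = boundary_radius / dist
    return tuple(c + x * scale for c, x in zip(boundary_center, v))

print(limit_motion((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 2.0))  # clamped to (2.0, 0.0, 0.0)
```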
Optionally, if the collision situation includes a collision between the surgical instrument and the target object, at least one of alarming, prompting, and activating the safety protection mechanism is performed. Preferably, in step S4, the step of alarming or prompting includes:
Step S45: In the imaging device 102 and/or the display device 302, add a text prompt of the collision information, and highlight the collision site between the surgical instrument 221 and the target object, for example with red highlighting, so as to alarm or prompt the doctor and assistants by means of image display, as shown in Fig. 19.
Optionally, in step S4, the step of alarming or prompting includes:
Step S46: Alarm by a flashing warning light and/or by a sound prompt. Referring to Fig. 20, in one example, a warning light is arranged at the outer instrument end of the tool arm of the instrument arm 210. If the surgical instrument 221 mounted on or connected to the instrument arm 210 has collided (i.e., the current collision situation is that the surgical instrument 221 collides with the target object), the light flashes at a higher frequency, for example a yellow light flashing at 2 Hz. If the surgical instrument 221 mounted on or connected to the instrument arm 210 is about to collide (i.e., the expected collision situation is that the surgical instrument 221 will collide with the target object), the light flashes at a lower frequency, for example a yellow light flashing at 1 Hz. Furthermore, an alarm sound device may also be arranged on the instrument arm 210 to provide different levels of sound prompts, for example a 2 Hz sound prompt if a collision occurs and a 1 Hz sound prompt if a collision is about to occur.
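The two-level warning of step S46 can be sketched as a simple state-to-frequency mapping (the state names are illustrative; the embodiment drives a physical light and buzzer):

```python
def warning_signal(collision_state):
    """Map a collision state to the warning-light blink frequency of step S46:
    2 Hz for an actual collision, 1 Hz for an imminent (predicted) collision,
    light off otherwise."""
    if collision_state == "collided":
        return 2.0   # Hz, yellow light
    if collision_state == "imminent":
        return 1.0   # Hz, yellow light
    return 0.0       # no warning

print(warning_signal("imminent"))  # → 1.0
```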
In order to describe the above collision detection in detail, the three methods mentioned above are described below:
Performing collision detection based on a ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument includes: generating an origin according to the position information of the target part of the medical instrument in the target medical image, and casting a ray from the origin in the direction of motion of the medical instrument; judging, according to the ray and the position information of the target object in the target medical image, whether the ray intersects the target object; and when the ray intersects the target object and the distance between the position of the intersection point and the position of the origin satisfies a preset condition, determining that the target object and the medical instrument collide.
As shown in Figs. 21 and 22 (Fig. 21 is a schematic diagram of the ray collision detection method in one embodiment, and Fig. 22 is a flowchart of the ray collision detection method in one embodiment), a position is selected as the origin, a ray is cast from the origin in a certain direction, and whether the ray path intersects the surface of an object is computed; if there is an intersection point, the distance between the intersection point and the origin is calculated. Specifically, in this embodiment, the instrument tip is taken as the origin, a ray is cast along the direction of motion of the instrument tip, the intersection point of the ray with the tissue surface is then computed, and the distance between the intersection point and the origin is given; if the distance is positive or there is no intersection point, the instrument and the tissue do not collide. The distance between two points in three-dimensional space is calculated by the formula:

D = √((x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2)

where (x_1, y_1, z_1) are the coordinates of point 1 and (x_2, y_2, z_2) are the coordinates of point 2.

Preferably, the intersection point of the ray and the object can be computed by establishing the parametric equations of the line and of the geometric body and solving the system of equations they form simultaneously: if the system has no solution, there is no intersection point; if it has solutions, the solutions correspond to the coordinates of all the intersection points. The parametric equation of a sphere in three-dimensional space is f(X) = ||X - C||^2 = R^2, and the parametric equation of a ray in three-dimensional space is:

X(t) = P + t·d̂

where X is the coordinate of a point on the sphere, C is the coordinate of the center of the sphere, and R is the radius of the sphere; P is the coordinate of the starting point of the ray, t is the coefficient, and d̂ is a unit vector along the ray direction.
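With the sphere and ray parametric equations above, solving the combined system reduces to a quadratic in t. A minimal sketch (function name and values are illustrative, not from the embodiment):

```python
import math

def ray_sphere_distance(p, d, c, r):
    """Solve the system of the ray X = P + t*d_hat (t >= 0) and the sphere
    ||X - C||^2 = R^2.  Returns the distance from the ray origin to the nearest
    intersection point, or None if the ray misses the sphere."""
    # normalize the direction so t is a true distance
    norm = math.sqrt(sum(x * x for x in d))
    d_hat = [x / norm for x in d]
    m = [pi - ci for pi, ci in zip(p, c)]          # P - C
    b = sum(mi * di for mi, di in zip(m, d_hat))   # d_hat · (P - C)
    cc = sum(mi * mi for mi in m) - r * r          # |P - C|^2 - R^2
    disc = b * b - cc
    if disc < 0:
        return None                                # no real solution: no intersection
    t = -b - math.sqrt(disc)                       # nearest root
    if t < 0:
        t = -b + math.sqrt(disc)                   # origin inside the sphere
    return t if t >= 0 else None

# Ray from the instrument tip along +x toward a sphere of radius 1 centered at (5, 0, 0)
print(ray_sphere_distance((0, 0, 0), (1, 0, 0), (5, 0, 0), 1.0))  # → 4.0
```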
In one embodiment, performing collision detection based on a convex polygon collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument includes: generating a first geometric body according to the position information of the target part of the medical instrument in the target medical image; generating a second geometric body according to the position information of the target object in the target medical image; calculating the Minkowski difference of the first geometric body and the second geometric body; and judging according to the Minkowski difference whether the target object and the medical instrument collide.
As shown in Figs. 23 and 24 (Fig. 23 is a schematic diagram of the convex polygon collision detection method in one embodiment, and Fig. 24 is a flowchart of the convex polygon collision detection method in one embodiment), in this embodiment the Minkowski difference of the two geometric bodies to be tested for collision is computed, and whether the two geometric bodies collide is judged according to whether the difference contains the origin (0, 0). In the figure, S1 and S2 have an overlapping (colliding) portion, so the geometry S3 generated by their Minkowski difference includes the origin (0, 0). The Minkowski difference is computed by taking the differences between the coordinates of the points of one geometric body and all the points of the other geometric body.
Specifically, the cylinder at the instrument tip is taken as one geometric body and the sensitive tissue as the other, and the Minkowski difference of the two geometric bodies is then computed: if it contains the origin, a collision occurs; if not, there is no collision. The Minkowski difference of the geometries S1 and S2 in the figure is computed as: S1 = {a1, a2, a3, a4}; S2 = {b1, b2, b3, b4}; S3 = {(a1-b1), (a1-b2), (a1-b3), (a1-b4), (a2-b1), (a2-b2), (a2-b3), (a2-b4), (a3-b1), (a3-b2), (a3-b3), (a3-b4), (a4-b1), (a4-b2), (a4-b3), (a4-b4)}, where a1, a2, a3, a4 are the vertices of geometry S1 and b1, b2, b3, b4 are the vertices of geometry S2.
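The origin-containment test above applies to general convex bodies; a minimal sketch for the special case of axis-aligned boxes, where the Minkowski difference has a closed form, is given below (names and coordinates are illustrative, and the general convex-polygon case would require a convex hull or GJK-style test instead):

```python
def aabb_minkowski_collides(box_a, box_b):
    """Simplified instance of the Minkowski-difference test for two
    axis-aligned boxes, each given as (min_corner, max_corner).
    The Minkowski difference of two such boxes is itself a box
    [minA - maxB, maxA - minB]; the bodies overlap iff that box
    contains the origin on every axis."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    for lo_a, hi_a, lo_b, hi_b in zip(amin, amax, bmin, bmax):
        diff_lo = lo_a - hi_b
        diff_hi = hi_a - lo_b
        if not (diff_lo <= 0.0 <= diff_hi):   # origin outside on this axis
            return False
    return True

# Overlapping unit cubes: the Minkowski difference contains the origin
s1 = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
s2 = ((0.5, 0.5, 0.5), (1.5, 1.5, 1.5))
print(aabb_minkowski_collides(s1, s2))  # → True
```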
In one embodiment, performing collision detection based on a straight-line projection collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument includes: generating a first geometric body according to the position information of the target part of the medical instrument in the target medical image; generating a second geometric body according to the position information of the target object in the target medical image; taking the direction of motion of the medical instrument as the projection direction, calculating whether the projected portions of the first geometric body and the second geometric body overlap; and when they overlap, determining that the target object and the medical instrument collide.
As shown in Fig. 25, which is a schematic diagram of the straight-line projection collision detection method in one embodiment, in this embodiment whether two geometric bodies collide in the direction indicated by a projection line is judged by calculating whether their projections onto that line overlap. For the surgical instrument and the tissue, the instrument tip can be taken as one geometric body and the sensitive tissue as the other; the direction of the instrument's commanded velocity is then taken as the direction of the projection line, and the two geometric bodies are projected to calculate whether their projected portions overlap. If they overlap, the instrument will collide with the tissue under its current motion trend.
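The projection test can be sketched as follows, with illustrative vertex sets standing in for the instrument-tip and tissue geometries:

```python
def projection_overlap(verts_a, verts_b, direction):
    """Project the vertices of two geometries onto the instrument's commanded
    velocity direction and check whether the two projected intervals overlap;
    overlap on that axis indicates a possible collision along the motion
    direction."""
    proj = lambda verts: [sum(v * d for v, d in zip(vtx, direction)) for vtx in verts]
    pa, pb = proj(verts_a), proj(verts_b)
    return max(min(pa), min(pb)) <= min(max(pa), max(pb))

tip = [(0, 0, 0), (1, 0, 0)]                  # instrument-tip segment
tissue = [(0.5, 2, 0), (2, 2, 0), (2, 3, 0)]  # sensitive-tissue triangle
print(projection_overlap(tip, tissue, (1, 0, 0)))  # intervals [0,1] and [0.5,2] overlap → True
```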
In one embodiment, performing collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument includes: when the target object and the medical instrument collide, outputting a perceivable collision alarm, the collision alarm including at least one of a display reminder and a sound reminder.
As shown in Fig. 26, this embodiment provides a collision-safety sound alarm method: whether a collision has occurred is detected at the surgical instrument end, and if a collision has occurred, the master end of the robot emits a buzz to remind the doctor that the surgical instrument has collided with sensitive tissue; if there is no collision, no sound prompt is given.
In one embodiment, before obtaining the pre-processed standard medical image, the method further includes: obtaining an initial medical image through a medical image scanning device; identifying the target object in the initial medical image and marking the position of the target object to obtain the position information of the target object; and obtaining the standard medical image according to the position information of the target object and the initial medical image.
Continuing with Fig. 11, image information obtained by scanning with tomographic imaging techniques such as CT and MRI is used, and tissue modeling of the surgical space is completed by image processing algorithms. Before the operation, the key tissues requiring special attention can be determined according to the tissue modeling information of the abdominal cavity, and marker points can be established for three-dimensional reconstruction of the surgical operation scene; that is, the tissues to be marked are determined in the preoperative image and marked in the three-dimensional model, so that a standard medical image with the target objects, including sensitive tissues, marked can be obtained.
In one embodiment, the above collision detection method further includes: displaying the current medical image and/or the target medical image through a display device and/or an augmented reality device.
Specifically, the display device may be a display device on the doctor's console, through which the current medical image and/or the target medical image can be displayed. In an optional embodiment, the system may also introduce an augmented reality device, through which the current medical image and/or the target medical image are displayed.
In this embodiment, a three-dimensional digital scene of the interior of the abdominal cavity is reconstructed from preoperative CT/MR images, and the tissues of the surgical region as well as hidden nerves, blood vessels, and sensitive tissues are marked. By fusing the preoperative medical images, the endoscopic visual images, and the instrument positions in the robot coordinate system, the spatial position and posture of the surgical instrument in the three-dimensional digital surgical scene are determined, so that the three-dimensional spatial positional relationship among the instrument, the 3D endoscope, and the human tissue is obtained and intuitively displayed in the three-dimensional surgical scene. When the surgical instrument collides with sensitive tissue in the surgical region and there is a risk of stabbing the sensitive tissue, especially sensitive nerves, blood vessels, and tissues hidden in regions invisible in the endoscopic visual image, the collision risk is indicated through the virtual surgical visual image (including AR/VR), the occurrence and depth of the collision between instrument and tissue are indicated through gradual color changes of the visual image and through sound, and the collision force between the instrument and the tissue is fed back to the master operating end, thereby ensuring the safety of the surgical operation.
Specifically, as shown in Fig. 26, the master manipulator forms a master-slave control relationship with the mechanical arm 201 and the surgical instrument, thereby constituting the master-slave mapping. The three-dimensional model of the abdominal cavity and the matching relationships of the preoperative medical image and the endoscopic image are then computed so that the three can be fused, in order to judge whether the distance between the current position of the instrument tip and the sensitive tissue is less than a threshold. If so, the instrument collides with the tissue and is processed according to the above procedure to issue an alarm. If the distance between the current position of the instrument tip and the sensitive tissue is not less than the threshold, the instrument has not collided with the tissue, and detection continues in the next cycle.
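One detection cycle of the threshold comparison described above can be sketched as follows; the threshold and positions are illustrative, whereas in the embodiment they come from the fused images and robot kinematics:

```python
import math

def detection_cycle(tip_position, tissue_points, threshold):
    """One detection cycle: compute the distance from the fused instrument-tip
    position to each marked sensitive-tissue point; if any distance falls below
    the threshold, report a collision so an alarm can be raised, otherwise the
    caller continues to the next cycle."""
    nearest = min(math.dist(tip_position, pt) for pt in tissue_points)
    return nearest < threshold, nearest

collided, d = detection_cycle((0, 0, 0), [(3, 4, 0), (10, 0, 0)], 6.0)
print(collided, d)  # → True 5.0
```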
In one embodiment, the method further includes: acquiring the visual space coordinate system of the augmented reality device; calculating a third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image; displaying the current medical image in the visual space of the augmented reality device according to the third matching relationship; and/or displaying the target medical image in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship, and the third matching relationship.
The visual space coordinate system is the coordinate system of the display space of the augmented reality device. In order to display the current medical image and/or the target medical image in the visual space, in this embodiment the third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image is calculated first; the current medical image is then displayed in the visual space of the augmented reality device according to the third matching relationship, and/or the target medical image is displayed in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship, and the third matching relationship.
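If the matching relationships are represented as homogeneous transforms, displaying a point marked in one coordinate system in the AR visual space amounts to composing the corresponding transforms. A minimal sketch with assumed translation-only transforms (T_std2cur and T_cur2ar are illustrative names for the first and third matching relationships, not identifiers from the embodiment):

```python
def matmul(a, b):
    """4x4 homogeneous-transform multiply."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def apply(t, p):
    """Apply a 4x4 homogeneous transform to a 3-D point."""
    x, y, z = p
    v = [x, y, z, 1.0]
    out = [sum(t[i][k] * v[k] for k in range(4)) for i in range(4)]
    return tuple(out[:3])

def translation(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

# Assumed matching relationships: standard image → current image (first),
# current image → AR visual space (third); composing them maps a point marked
# in the standard image into the AR visual space.
T_std2cur = translation(1.0, 0.0, 0.0)
T_cur2ar = translation(0.0, 2.0, 0.0)
T_ar = matmul(T_cur2ar, T_std2cur)
print(apply(T_ar, (0.0, 0.0, 0.0)))  # → (1.0, 2.0, 0.0)
```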
Optionally, when the target object and the medical instrument collide, the target medical image is displayed, so that the doctor can be shown where the collision occurred through the fused image; when the target object and the medical instrument do not collide, the current medical image is displayed.
It should be understood that although the steps in the flowcharts involved in the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts involved in the above embodiments may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a collision detection apparatus for implementing the collision detection method described above. The solution provided by the apparatus is similar to that described in the above method, so for the specific limitations in the one or more apparatus embodiments provided below, reference may be made to the limitations of the collision detection method above, which are not repeated here.
In one embodiment, as shown in Fig. 27, a collision detection apparatus is provided, including: a current medical image acquisition module 1901, a position information acquisition module 1902, a standard medical image acquisition module 1903, a conversion module 1904, and a collision detection module 1905, wherein:
the current medical image acquisition module 1901 is configured to acquire a current medical image;
the position information acquisition module 1902 is configured to acquire the position information of the target part of the medical instrument;
the standard medical image acquisition module 1903 is configured to acquire a pre-processed standard medical image, the standard medical image carrying the position information of the target object;
the conversion module 1904 is configured to convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, and to convert the position information of the target part of the medical instrument into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical instrument; and
the collision detection module 1905 is configured to perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
In one embodiment, the collision detection module 1905 is configured to perform collision detection according to at least one of the following: performing collision detection based on the ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument; performing collision detection based on the convex polygon collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument; and performing collision detection based on the straight-line projection collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
In one embodiment, the collision detection module 1905 includes: a ray generation unit configured to generate an origin according to the position information of the target part of the medical instrument in the target medical image and to cast a ray from the origin in the direction of motion of the medical instrument; an intersection judging unit configured to judge, according to the ray and the position information of the target object in the target medical image, whether the ray intersects the target object; and a first collision result output unit configured to determine that the target object and the medical instrument collide when the ray intersects the target object and the distance between the position of the intersection point and the position of the origin satisfies a preset condition.
In one embodiment, the collision detection module 1905 includes: a first geometry generation unit configured to generate a first geometric body according to the position information of the target part of the medical instrument in the target medical image; a second geometry generation unit configured to generate a second geometric body according to the position information of the target object in the target medical image; a Minkowski difference calculation unit configured to calculate the Minkowski difference of the first geometric body and the second geometric body; and a second collision result output unit configured to judge according to the Minkowski difference whether the target object and the medical instrument collide.
In one embodiment, the collision detection module 1905 includes: a third geometry generation unit configured to generate a first geometric body according to the position information of the target part of the medical instrument in the target medical image; a fourth geometry generation unit configured to generate a second geometric body according to the position information of the target object in the target medical image; an overlap judging unit configured to take the direction of motion of the medical instrument as the projection direction and calculate whether the projected portions of the first geometric body and the second geometric body overlap; and a third collision result output unit configured to determine that the target object and the medical instrument collide when the projected portions overlap.
In one embodiment, the apparatus further includes: an alarm module configured to output a perceivable collision alarm when the target object and the medical instrument collide.
In one embodiment, the apparatus further includes: an initial medical image acquisition module configured to acquire an initial medical image through a medical image scanning device; and a standard medical image generation module configured to identify the target object in the initial medical image, mark the position of the target object to obtain the position information of the target object, and obtain the standard medical image according to the position information of the target object and the initial medical image.
In one embodiment, the apparatus further includes: an identification unit configured to identify the object to be processed in the current medical image; and a first matching relationship generation unit configured to obtain the first matching relationship by matching and fusing the target object in the standard medical image with the object to be processed.
In one embodiment, the apparatus includes: a second matching relationship generation unit configured to obtain the second matching relationship by kinematic mapping calculation according to the motion information of the medical scope that captures the current medical image and the motion information of the target part of the medical instrument.
In one embodiment, the apparatus further includes: a first display module configured to display the current medical image and/or the target medical image through a display device and/or an augmented reality device.
In one embodiment, the apparatus further includes: a first coordinate system acquisition module configured to acquire the visual space coordinate system of the augmented reality device; a third matching relationship generation module configured to calculate the third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image; and a second display module configured to display the current medical image in the visual space of the augmented reality device according to the third matching relationship, and/or to display the target medical image in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship, and the third matching relationship.
In one embodiment, the apparatus further includes: a third display module configured to display the target medical image when the target object and the medical instrument collide, and to display the current medical image when the target object and the medical instrument do not collide.
Each module in the above collision detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in Fig. 28. The computer device includes a processor, a memory, a communication interface, a display screen, and an input apparatus connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner; the wireless manner may be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. When the computer program is executed by the processor, a collision detection method is implemented. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input apparatus of the computer device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the casing of the computer device, or an external keyboard, touchpad, or mouse.
本领域技术人员可以理解，图28中示出的结构，仅仅是与本申请方案相关的部分结构的框图，并不构成对本申请方案所应用于其上的计算机设备的限定，具体的计算机设备可以包括比图中所示更多或更少的部件，或者组合某些部件，或者具有不同的部件布置。Those skilled in the art will understand that the structure shown in FIG. 28 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution of this application is applied. A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
在一个实施例中,还提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述各方法实施例中的步骤。In one embodiment, there is also provided a computer device, including a memory and a processor, where a computer program is stored in the memory, and the processor implements the steps in the above method embodiments when executing the computer program.
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程，是可以通过计算机程序来指令相关的硬件来完成，所述的计算机程序可存储于一非易失性计算机可读取存储介质中，该计算机程序在执行时，可包括如上述各方法的实施例的流程。其中，本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用，均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory，ROM)、磁带、软盘、闪存、光存储器、高密度嵌入式非易失性存储器、阻变存储器(ReRAM)、磁变存储器(Magnetoresistive Random Access Memory，MRAM)、铁电存储器(Ferroelectric Random Access Memory，FRAM)、相变存储器(Phase Change Memory，PCM)、石墨烯存储器等。易失性存储器可包括随机存取存储器(Random Access Memory，RAM)或外部高速缓冲存储器等。作为说明而非局限，RAM可以是多种形式，比如静态随机存取存储器(Static Random Access Memory，SRAM)或动态随机存取存储器(Dynamic Random Access Memory，DRAM)等。本申请所提供的各实施例中所涉及的数据库可包括关系型数据库和非关系型数据库中至少一种。非关系型数据库可包括基于区块链的分布式数据库等，不限于此。本申请所提供的各实施例中所涉及的处理器可为通用处理器、中央处理器、图形处理器、数字信号处理器、可编程逻辑器、基于量子计算的数据处理逻辑器等，不限于此。Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, database, or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random-access memory (ReRAM), magnetoresistive random-access memory (MRAM), ferroelectric random-access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random-access memory (RAM) or an external cache. By way of illustration and not limitation, RAM can take many forms, such as static random-access memory (SRAM) or dynamic random-access memory (DRAM). The databases involved in the embodiments provided in this application may include at least one of a relational database and a non-relational database; non-relational databases may include, without limitation, blockchain-based distributed databases. The processors involved in the embodiments provided in this application may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, or quantum-computing-based data processing logic devices.
以上实施例的各技术特征可以进行任意的组合，为使描述简洁，未对上述实施例中的各个技术特征所有可能的组合都进行描述，然而，只要这些技术特征的组合不存在矛盾，都应当认为是本说明书记载的范围。The technical features of the above embodiments can be combined arbitrarily. To keep the description concise, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope described in this specification.
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请的保护范围应以所附权利要求为准。The above-mentioned embodiments only express several implementation modes of the present application, and the description thereof is relatively specific and detailed, but should not be construed as limiting the patent scope of the present application. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present application, and these all belong to the protection scope of the present application. Therefore, the protection scope of the present application should be determined by the appended claims.

Claims (27)

  1. 一种碰撞检测方法,其特征在于,包括:A collision detection method, characterized in that, comprising:
    获取目标对象于不同视角下的至少两个图像信息;Acquiring at least two image information of the target object under different viewing angles;
    根据所述至少两个图像信息,获得所述目标对象的空间位置;Obtaining the spatial position of the target object according to the at least two image information;
    获取手术机器人的器械臂的末端所连接的医疗器械的位置信息;Obtain the position information of the medical instrument connected to the end of the instrument arm of the surgical robot;
    根据所述目标对象的空间位置和所述医疗器械的位置信息确定所述医疗器械与所述目标对象的碰撞情况。A collision situation between the medical instrument and the target object is determined according to the spatial position of the target object and the position information of the medical instrument.
  2. 根据权利要求1所述的方法,其特征在于,所述目标对象具有特征点,所述特征点基于医学影像通过建立组织模型来确定。The method according to claim 1, wherein the target object has feature points, and the feature points are determined by establishing a tissue model based on medical images.
  3. 根据权利要求2所述的方法,其特征在于,根据所述至少两个图像信息,获得所述目标对象的空间位置的步骤包括:The method according to claim 2, wherein the step of obtaining the spatial position of the target object according to the at least two image information comprises:
    根据所述至少两个图像信息，建立所述目标对象的实时的三维模型，经与所述组织模型配准后，实时得到所述特征点在图像采集装置坐标系中的实时的位置信息，从而得到所述目标对象的空间位置。Establishing a real-time three-dimensional model of the target object according to the at least two pieces of image information, and, after registration with the tissue model, obtaining in real time the position information of the feature points in the coordinate system of the image acquisition device, thereby obtaining the spatial position of the target object.
  5. 根据权利要求3所述的方法，其特征在于，利用至少两个图像采集单元获取所述至少两个图像信息，至少两个所述图像采集单元设置于内窥镜上，所述内窥镜连接于所述手术机器人的持镜臂的末端。The method according to claim 3, characterized in that the at least two pieces of image information are acquired by at least two image acquisition units, the at least two image acquisition units are arranged on an endoscope, and the endoscope is connected to the end of the endoscope-holding arm of the surgical robot.
  5. 根据权利要求4所述的方法,其特征在于,获得所述目标对象的空间位置的步骤包括:The method according to claim 4, wherein the step of obtaining the spatial position of the target object comprises:
    获取所述图像信息中所述特征点在内窥镜坐标系中的位置信息;Obtaining position information of the feature points in the image information in the endoscope coordinate system;
    获取所述内窥镜在持镜臂基坐标系中的位姿信息；Obtaining the pose information of the endoscope in the base coordinate system of the endoscope-holding arm;
    根据所述特征点在内窥镜坐标系中的位置信息以及所述内窥镜在持镜臂基坐标系中的位姿信息，得到所述特征点在持镜臂基坐标系下的位置信息，进而得到所述特征点在基坐标系下的位置信息，从而得到所述目标对象的空间位置。Obtaining, according to the position information of the feature points in the endoscope coordinate system and the pose information of the endoscope in the base coordinate system of the endoscope-holding arm, the position information of the feature points in the base coordinate system of the endoscope-holding arm, then obtaining the position information of the feature points in the base coordinate system, and thereby obtaining the spatial position of the target object.
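The chain of transforms in the step above can be illustrated with a minimal sketch (illustrative only; the 4x4 homogeneous transforms `T_holder_scope` and `T_base_holder` are hypothetical names for the endoscope pose in the scope-holding-arm base frame and that arm's pose in the robot base frame):

```python
def apply_transform(T, p):
    # Apply a 4x4 homogeneous transform T (nested lists) to a 3D point p.
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

def feature_in_base(p_scope, T_holder_scope, T_base_holder):
    # Chain the two transforms named in the claim: endoscope frame ->
    # scope-holding-arm base frame -> robot base frame.
    p_holder = apply_transform(T_holder_scope, p_scope)
    return apply_transform(T_base_holder, p_holder)
```

With the endoscope pose from the image side and the arm pose from the robot kinematics, a feature point measured in the endoscope frame is mapped into the base frame by the two successive transforms.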
  6. 根据权利要求1所述的方法,其特征在于,所述医疗器械的位置信息包括当前位置信息和预期位置信息;所述碰撞情况包括当前碰撞情况和预期碰撞情况。The method according to claim 1, wherein the location information of the medical device includes current location information and expected location information; the collision situation includes the current collision situation and the expected collision situation.
  7. 根据权利要求6所述的方法,其特征在于,获取所述医疗器械的预期位置信息的步骤包括:The method according to claim 6, wherein the step of obtaining the expected location information of the medical device comprises:
    获取所述医疗器械上的至少两个器械标记点在器械臂基坐标系中的位置信息,进而得到所述器械标记点在基坐标系下的位置信息;Obtaining position information of at least two instrument marking points on the medical instrument in the base coordinate system of the instrument arm, and then obtaining position information of the instrument marking points in the base coordinate system;
    获取所述器械标记点的运动信息;Acquiring motion information of the marker points of the instrument;
    根据所述器械标记点的位置信息和运动信息,得到所述器械标记点的预期空间位置;Obtaining the expected spatial position of the instrument marking point according to the position information and motion information of the instrument marking point;
    根据至少两个所述器械标记点的预期空间位置，得到所述医疗器械的预期位置信息。Obtaining the expected position information of the medical instrument according to the expected spatial positions of at least two of the instrument marking points.
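The prediction step in claim 7 can be sketched as follows (a minimal illustration assuming a constant-velocity motion model over a short horizon `dt`; the claim itself does not fix a particular motion model):

```python
def predicted_positions(points, velocities, dt):
    # Extrapolate each instrument marking point from its current position
    # and motion information: p_expected = p + v * dt per coordinate.
    return [
        tuple(p + v * dt for p, v in zip(point, vel))
        for point, vel in zip(points, velocities)
    ]
```

The list of predicted marker positions then stands in for the expected position information of the medical instrument in the subsequent collision check.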
  8. 根据权利要求2所述的方法,其特征在于,确定碰撞情况的步骤包括:The method according to claim 2, wherein the step of determining the collision situation comprises:
    以所述特征点为球心,以Ro为半径建立球体Co;Establishing a sphere Co with the feature point as the center and Ro as the radius;
    以所述医疗器械上的器械标记点为球心,以Rt为半径建立球体Ct;Establishing a sphere Ct with the instrument marking point on the medical instrument as the center and Rt as the radius;
    若所述球体Co和所述球体Ct间的距离D<Ro+Rt，则标记所述特征点与所述器械标记点接触。If the distance D between the sphere Co and the sphere Ct satisfies D<Ro+Rt, marking the feature point as being in contact with the instrument marking point.
  9. 根据权利要求8所述的方法，其特征在于，所述目标对象包括M个特征点，其中有N个特征点与所述器械标记点发生接触，N为自然数，M为不小于N的自然数，若N与M的比值大于阈值P，P∈(0,1]，则确定所述医疗器械将要与所述目标对象碰撞。The method according to claim 8, characterized in that the target object includes M feature points, of which N feature points are in contact with the instrument marking points, N being a natural number and M being a natural number not less than N; if the ratio of N to M is greater than a threshold P, P∈(0,1], it is determined that the medical instrument is about to collide with the target object.
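The sphere test of claims 8 and 9 can be sketched as follows (an illustrative fragment; the radii `ro`, `rt` and the threshold `p` are free parameters of the method, and the default `p=0.3` is only an example value in (0,1]):

```python
import math

def spheres_touch(feature, marker, ro, rt):
    # Claim 8: the feature point and the instrument marking point are in
    # contact when the distance between the sphere centres is below Ro + Rt.
    return math.dist(feature, marker) < ro + rt

def predicted_collision(features, markers, ro, rt, p=0.3):
    # Claim 9: a collision is predicted when the fraction N/M of feature
    # points touching any marking point exceeds the threshold P.
    n = sum(
        1 for f in features
        if any(spheres_touch(f, m, ro, rt) for m in markers)
    )
    return n / len(features) > p
```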
  10. 根据权利要求1所述的方法，其特征在于，还包括，当所述目标对象和所述医疗器械发生碰撞时，则进行报警、提示和启动安全保护机制中的至少一种。The method according to claim 1, characterized by further comprising: when the target object collides with the medical instrument, performing at least one of alarming, prompting, and activating a safety protection mechanism.
  11. 根据权利要求10所述的方法,其特征在于,启动安全保护机制的步骤包括:The method according to claim 10, wherein the step of starting a security protection mechanism comprises:
    根据所述碰撞情况为所述医疗器械的运动设置虚拟边界,并限制所述医疗器械进入所述虚拟边界的范围内。A virtual boundary is set for the movement of the medical device according to the collision situation, and the medical device is restricted from entering the range of the virtual boundary.
  12. 一种碰撞检测方法,其特征在于,包括:A collision detection method, characterized in that, comprising:
    获取当前医学图像;Get the current medical image;
    获取医疗器械的目标部位的位置信息;Obtain the location information of the target part of the medical device;
    获取预先处理得到的标准医学图像,所述标准医学图像携带有目标对象的位置信息;Acquiring a pre-processed standard medical image, the standard medical image carrying the position information of the target object;
    根据所述当前医学图像和所述标准医学图像的第一匹配关系将所述标准医学图像中所述目标对象的位置信息转换至目标医学图像中，根据所述当前医学图像与所述医疗器械的目标部位的位置信息的第二匹配关系将所述医疗器械的目标部位的位置信息转换至所述目标医学图像中；Converting the position information of the target object in the standard medical image into a target medical image according to a first matching relationship between the current medical image and the standard medical image, and converting the position information of the target part of the medical device into the target medical image according to a second matching relationship between the current medical image and the position information of the target part of the medical device;
    根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息进行碰撞检测。Collision detection is performed according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  13. 根据权利要求12所述的方法,其特征在于,所述根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息进行碰撞检测,包括以下至少一种:The method according to claim 12, wherein the collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device comprises at least one of the following:
    根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息,基于射线碰撞检测方法进行碰撞检测;Performing collision detection based on a ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical device;
    根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息,基于凸多边形碰撞检测方法进行碰撞检测;Performing collision detection based on a convex polygon collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical device;
    根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息,基于直线投影碰撞检测方法进行碰撞检测。According to the position information of the target object in the target medical image and the position information of the target part of the medical device, the collision detection is performed based on a linear projection collision detection method.
  14. 根据权利要求13所述的方法，其特征在于，所述根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息，基于射线碰撞检测方法进行碰撞检测，包括:The method according to claim 13, wherein the performing collision detection based on the ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical device comprises:
    根据所述目标医学图像中所述医疗器械的目标部位的位置信息生成原点,并以所述原点为起点,向所述医疗器械运动方向发出射线;generating an origin according to the position information of the target part of the medical device in the target medical image, and using the origin as a starting point to emit rays in the direction of movement of the medical device;
    根据所述射线以及所述目标医学图像中所述目标对象的位置信息,判断所述射线与所述目标对象是否相交;judging whether the ray intersects the target object according to the ray and the position information of the target object in the target medical image;
    当所述射线与所述目标对象相交,且交点的位置与所述原点的位置之间的距离满足预设条件时,判定所述目标对象和所述医疗器械发生碰撞。When the ray intersects the target object and the distance between the position of the intersection point and the position of the origin satisfies a preset condition, it is determined that the target object collides with the medical instrument.
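The ray method of claim 14 can be sketched as follows (illustrative only; the target object is approximated here by a bounding sphere, and `max_dist` plays the role of the preset distance condition in the claim):

```python
import math

def ray_hits(origin, direction, center, radius, max_dist):
    # Cast a ray from the target part of the instrument along its motion
    # direction; flag a collision only when the first intersection point
    # lies within max_dist of the origin.
    norm = math.hypot(*direction)
    u = [d / norm for d in direction]             # unit ray direction
    oc = [c - o for c, o in zip(center, origin)]  # origin -> sphere centre
    t = sum(a * b for a, b in zip(oc, u))         # projection onto the ray
    if t < 0:
        return False                              # target lies behind the ray
    perp_sq = sum(v * v for v in oc) - t * t      # squared ray-to-centre distance
    if perp_sq > radius * radius:
        return False                              # the ray misses the target
    t_hit = t - math.sqrt(radius * radius - perp_sq)  # first intersection
    return t_hit <= max_dist
```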
  15. 根据权利要求13所述的方法，其特征在于，所述根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息，基于凸多边形碰撞检测方法进行碰撞检测，包括:The method according to claim 13, wherein the performing collision detection based on the convex polygon collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical device comprises:
    根据所述目标医学图像中所述医疗器械的目标部位的位置信息生成第一几何体;generating a first geometric body according to position information of a target part of the medical device in the target medical image;
    根据所述目标医学图像中所述目标对象的位置信息生成第二几何体;generating a second geometric body according to the position information of the target object in the target medical image;
    计算所述第一几何体和所述第二几何体的闵可夫斯基差;calculating the Minkowski difference of the first geometry and the second geometry;
    根据所述闵可夫斯基差判断所述目标对象和所述医疗器械是否发生碰撞。Whether the target object collides with the medical instrument is judged according to the Minkowski difference.
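The Minkowski-difference test of claim 15 can be sketched for the simplest case of axis-aligned bounding boxes (illustrative; general convex geometries would use an algorithm such as GJK, which likewise checks whether the Minkowski difference contains the origin):

```python
def minkowski_difference(a_min, a_max, b_min, b_max):
    # The Minkowski difference A - B of two axis-aligned boxes is again a
    # box whose corners are the elementwise corner differences.
    lo = tuple(al - bh for al, bh in zip(a_min, b_max))
    hi = tuple(ah - bl for ah, bl in zip(a_max, b_min))
    return lo, hi

def boxes_collide(a_min, a_max, b_min, b_max):
    # Two convex shapes intersect iff their Minkowski difference contains
    # the origin; for boxes this reduces to an interval test per axis.
    lo, hi = minkowski_difference(a_min, a_max, b_min, b_max)
    return all(l <= 0 <= h for l, h in zip(lo, hi))
```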
  16. 根据权利要求13所述的方法，其特征在于，所述根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息，基于直线投影碰撞检测方法进行碰撞检测，包括:The method according to claim 13, wherein the performing collision detection based on the linear projection collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical device comprises:
    根据所述目标医学图像中所述医疗器械的目标部位的位置信息生成第一几何体;generating a first geometric body according to position information of a target part of the medical device in the target medical image;
    根据所述目标医学图像中所述目标对象的位置信息生成第二几何体;generating a second geometric body according to the position information of the target object in the target medical image;
    以所述医疗器械移动方向作为投影方向,计算所述第一几何体和所述第二几何体的投影部分是否重叠;Using the moving direction of the medical device as the projection direction, calculate whether the projection parts of the first geometric body and the second geometric body overlap;
    当重叠时,则判定所述目标对象和所述医疗器械发生碰撞。When overlapping, it is determined that the target object collides with the medical instrument.
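The projection test of claim 16 can be sketched as follows (illustrative; each geometric body is reduced here to a bounding sphere, so its projection along the moving direction is a disc):

```python
import math

def projected_paths_overlap(center_a, radius_a, center_b, radius_b, direction):
    # Project both bounding spheres onto the plane perpendicular to the
    # instrument's moving direction; the projected discs overlap iff the
    # component of the centre offset orthogonal to that direction is
    # shorter than the sum of the radii.
    norm = math.hypot(*direction)
    u = [d / norm for d in direction]              # unit motion direction
    off = [b - a for a, b in zip(center_a, center_b)]
    along = sum(o * c for o, c in zip(off, u))     # offset along the motion
    perp = [o - along * c for o, c in zip(off, u)] # offset orthogonal to it
    return math.hypot(*perp) < radius_a + radius_b
```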
  17. 根据权利要求12至16任意一项所述的方法，其特征在于，所述获取预先处理得到的标准医学图像之前，还包括:The method according to any one of claims 12 to 16, characterized in that, before the acquiring of the pre-processed standard medical image, the method further comprises:
    通过医学图像扫描设备获取初始医学图像;Obtain an initial medical image through a medical image scanning device;
    识别所述初始医学图像中的目标对象,并标记所述目标对象的位置以得到目标对象的位置信息;identifying a target object in the initial medical image, and marking the location of the target object to obtain location information of the target object;
    根据所述目标对象的位置信息和所述初始医学图像得到标准医学图像。A standard medical image is obtained according to the position information of the target object and the initial medical image.
  18. 根据权利要求12至16任意一项所述的方法，其特征在于，所述根据所述当前医学图像和所述标准医学图像的第一匹配关系将所述标准医学图像中所述目标对象的位置信息转换至目标医学图像中之前，还包括:The method according to any one of claims 12 to 16, characterized in that, before the converting of the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, the method further comprises:
    识别所述当前医学图像中的待处理对象;identifying an object to be processed in the current medical image;
    根据所述标准医学图像中的目标对象与所述待处理对象进行匹配融合得到第一匹配关系，所述第一匹配关系用于将所述标准医学图像中所述目标对象的位置信息转换至目标医学图像。Matching and fusing the target object in the standard medical image with the object to be processed to obtain the first matching relationship, the first matching relationship being used to convert the position information of the target object in the standard medical image into the target medical image.
  19. 根据权利要求12至16任意一项所述的方法，其特征在于，所述根据所述当前医学图像与所述医疗器械的目标部位的位置信息的第二匹配关系将所述医疗器械的目标部位的位置信息转换至所述目标医学图像中之前，还包括:The method according to any one of claims 12 to 16, characterized in that, before the converting of the position information of the target part of the medical device into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical device, the method further comprises:
    根据采集所述当前医学图像的医疗用镜的运动信息以及所述医疗器械的目标部位的运动信息，进行运动学映射计算得到第二匹配关系，所述第二匹配关系用于将所述医疗器械的目标部位的位置信息转换至所述目标医学图像。Performing kinematic mapping calculation according to the motion information of the medical mirror that collects the current medical image and the motion information of the target part of the medical device to obtain the second matching relationship, the second matching relationship being used to convert the position information of the target part of the medical device into the target medical image.
  20. 根据权利要求12至16任意一项所述的方法,其特征在于,所述方法还包括:The method according to any one of claims 12 to 16, further comprising:
    将所述当前医学图像和/或所述目标医学图像通过显示装置和/或增强现实设备进行显示。Displaying the current medical image and/or the target medical image through a display device and/or an augmented reality device.
  21. 根据权利要求20所述的方法,其特征在于,所述方法还包括:The method according to claim 20, further comprising:
    获取增强现实设备的视觉空间坐标系;Obtain the visual space coordinate system of the augmented reality device;
    计算所述增强现实设备的视觉空间坐标系与所述当前医学图像的图像空间坐标系之间的第三匹配关系;calculating a third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image;
    根据所述第三匹配关系将所述当前医学图像显示在所述增强现实设备的视觉空间；和/或根据所述第一匹配关系、所述第二匹配关系和所述第三匹配关系将所述目标医学图像显示在所述增强现实设备的视觉空间。Displaying the current medical image in the visual space of the augmented reality device according to the third matching relationship; and/or displaying the target medical image in the visual space of the augmented reality device according to the first matching relationship, the second matching relationship, and the third matching relationship.
  22. 根据权利要求20所述的方法,其特征在于,所述方法还包括:The method according to claim 20, further comprising:
    当所述目标对象和所述医疗器械发生碰撞时,显示所述目标医学图像;displaying the target medical image when the target object collides with the medical instrument;
    当所述目标对象和所述医疗器械未发生碰撞时,显示所述当前医学图像。When the target object and the medical instrument do not collide, the current medical image is displayed.
  23. 一种碰撞检测系统，其特征在于，所述碰撞检测系统包括处理器、医疗用镜以及医疗器械，所述医疗器械上设置有传感器，所述传感器用于采集医疗器械的目标部位的位置信息，并将所采集医疗器械的目标部位的位置信息发送至所述处理器；所述医疗用镜用于采集当前医学图像，并将所述当前医学图像发送至所述处理器；所述处理器用于执行权利要求1至22任意一项所述的碰撞检测方法。A collision detection system, characterized in that the collision detection system comprises a processor, a medical mirror, and a medical device; the medical device is provided with a sensor, the sensor being used to collect the position information of the target part of the medical device and send it to the processor; the medical mirror is used to collect a current medical image and send the current medical image to the processor; and the processor is configured to execute the collision detection method according to any one of claims 1 to 22.
  24. 根据权利要求23所述的系统，其特征在于，所述系统还包括显示装置和/或增强现实设备，所述显示装置和/或增强现实设备与所述处理器相通信；所述显示装置和/或增强现实设备用于显示所述处理器发送的所述当前医学图像和/或所述目标医学图像。The system according to claim 23, characterized in that the system further comprises a display device and/or an augmented reality device, the display device and/or the augmented reality device communicating with the processor; the display device and/or the augmented reality device is used to display the current medical image and/or the target medical image sent by the processor.
  25. 一种碰撞检测装置,其特征在于,所述装置包括:A collision detection device, characterized in that the device comprises:
    当前医学图像获取模块,用于获取当前医学图像;The current medical image acquisition module is used to acquire the current medical image;
    位置信息获取模块,用于获取医疗器械的目标部位的位置信息;A position information acquisition module, configured to acquire position information of the target part of the medical device;
    标准医学图像获取模块,用于获取预先处理得到的标准医学图像,所述标准医学图像携带有目标对象的位置信息;A standard medical image acquisition module, configured to acquire a pre-processed standard medical image, the standard medical image carrying the position information of the target object;
    转换模块,用于根据所述当前医学图像和所述标准医学图像的第一匹配关系将所述标准医学图像中所述目标对象的位置信息转换至目标医学图像中,根据所述当前医学图像与所述医疗器械的目标部位的位置信息的第二匹配关系将所述医疗器械的目标部位的位置信息转换至所述目标医学图像中;a conversion module, configured to convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image; The second matching relationship of the position information of the target part of the medical device converts the position information of the target part of the medical device into the target medical image;
    碰撞检测模块,用于根据所述目标医学图像中所述目标对象的位置信息以及所述医疗器械的目标部位的位置信息进行碰撞检测。A collision detection module, configured to perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical device.
  26. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其特征在于,所述处理器执行所述计算机程序时实现权利要求1至22中任一项所述的方法的步骤。A computer device, comprising a memory and a processor, the memory stores a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 22 when executing the computer program.
  27. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至22中任一项所述的方法的步骤。A computer-readable storage medium, on which a computer program is stored, wherein, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 22 are realized.
PCT/CN2022/121629 2021-10-21 2022-09-27 Collision detection method and apparatus, device, and readable storage medium WO2023065988A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111229335.7 2021-10-21
CN202111229335.7A CN115998439A (en) 2021-10-21 2021-10-21 Collision detection method for surgical robot, readable storage medium, and surgical robot
CN202111667608.6A CN114224512B (en) 2021-12-30 2021-12-30 Collision detection method, apparatus, device, readable storage medium, and program product
CN202111667608.6 2021-12-30

Publications (1)

Publication Number Publication Date
WO2023065988A1 true WO2023065988A1 (en) 2023-04-27

Family

ID=86057909

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/121629 WO2023065988A1 (en) 2021-10-21 2022-09-27 Collision detection method and apparatus, device, and readable storage medium

Country Status (1)

Country Link
WO (1) WO2023065988A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117179680A (en) * 2023-09-11 2023-12-08 首都医科大学宣武医院 Endoscope navigation system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102448680A (en) * 2009-03-31 2012-05-09 直观外科手术操作公司 Synthetic representation of a surgical robot
WO2013101273A1 (en) * 2011-12-30 2013-07-04 St. Jude Medical, Atrial Fibrillation Division, Inc. System and method for detection and avoidance of collisions of robotically-controlled medical devices
US20140243596A1 (en) * 2013-02-28 2014-08-28 Samsung Electronics Co., Ltd. Endoscope system and control method thereof
CN106426186A (en) * 2016-12-14 2017-02-22 国网江苏省电力公司常州供电公司 Electrified operation robot autonomous operation method based on multi-sensor information fusion
US20180357825A1 (en) * 2017-06-09 2018-12-13 Siemens Healthcare Gmbh Output of position information of a medical instrument
WO2020190832A1 (en) * 2019-03-20 2020-09-24 Covidien Lp Robotic surgical collision detection systems
CN112704564A (en) * 2020-12-22 2021-04-27 上海微创医疗机器人(集团)股份有限公司 Surgical robot system, collision detection method, system, and readable storage medium
CN112773506A (en) * 2021-01-27 2021-05-11 哈尔滨思哲睿智能医疗设备有限公司 Collision detection method and device for laparoscopic minimally invasive surgery robot
US20220071716A1 (en) * 2020-09-08 2022-03-10 Verb Surgical Inc. 3d visualization enhancement for depth perception and collision avoidance
CN114224512A (en) * 2021-12-30 2022-03-25 上海微创医疗机器人(集团)股份有限公司 Collision detection method, device, apparatus, readable storage medium, and program product
CN114494602A (en) * 2022-02-10 2022-05-13 苏州微创畅行机器人有限公司 Collision detection method, system, computer device and storage medium


Similar Documents

Publication Publication Date Title
US20150320514A1 (en) Surgical robots and control methods thereof
Hayashibe et al. Laser-scan endoscope system for intraoperative geometry acquisition and surgical robot safety management
JP5707449B2 (en) Tool position and identification indicator displayed in the border area of the computer display screen
EP3613547B1 (en) Synthetic representation of a surgical robot
JPH11309A (en) Image processor
CN111317568B (en) Chest imaging, distance measurement, surgical awareness, and notification systems and methods
JP7469120B2 (en) Robotic surgery support system, operation method of robotic surgery support system, and program
US20230172679A1 (en) Systems and methods for guided port placement selection
JP2010200894A (en) Surgery support system and surgical robot system
WO2016195919A1 (en) Accurate three-dimensional instrument positioning
US11944395B2 (en) 3D visualization enhancement for depth perception and collision avoidance
CN114224512B (en) Collision detection method, apparatus, device, readable storage medium, and program product
WO2023065988A1 (en) Collision detection method and apparatus, device, and readable storage medium
Mathur et al. A semi-autonomous robotic system for remote trauma assessment
CN114533263B (en) Mechanical arm collision prompting method, readable storage medium, surgical robot and system
US20180249953A1 (en) Systems and methods for surgical tracking and visualization of hidden anatomical features
US11771508B2 (en) Robotically-assisted surgical device, robotically-assisted surgery method, and system
CN114631886A (en) Mechanical arm positioning method, readable storage medium and surgical robot system
CN115252140A (en) Surgical instrument guiding method, surgical robot, and medium
CN115998439A (en) Collision detection method for surgical robot, readable storage medium, and surgical robot
US20210298854A1 (en) Robotically-assisted surgical device, robotically-assisted surgical method, and system
WO2023066019A1 (en) Surgical robot system, safety control method, slave device, and readable medium
US20240070875A1 (en) Systems and methods for tracking objects crossing body wallfor operations associated with a computer-assisted system
WO2023018685A1 (en) Systems and methods for a differentiated interaction environment
WO2023018684A1 (en) Systems and methods for depth-based measurement in a three-dimensional view

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882599

Country of ref document: EP

Kind code of ref document: A1