CN114224512B - Collision detection method, apparatus, device, readable storage medium, and program product - Google Patents


Info

Publication number
CN114224512B
Authority
CN
China
Prior art keywords
medical image
target
position information
medical
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111667608.6A
Other languages
Chinese (zh)
Other versions
CN114224512A (en)
Inventor
Name withheld upon request
王家寅
李自汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd
Priority to CN202111667608.6A
Publication of CN114224512A
Priority to PCT/CN2022/121629 (published as WO2023065988A1)
Application granted
Publication of CN114224512B
Legal status: Active (current)
Anticipated expiration


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/06: Measuring instruments not otherwise provided for
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30: Surgical robots
    • A61B34/35: Surgical robots for telesurgery
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70: Manipulators specially adapted for use in surgery
    • A61B34/76: Manipulators having means for providing feel, e.g. force or tactile feedback
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101: Computer-aided simulation of surgical operations
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101: Computer-aided simulation of surgical operations
    • A61B2034/105: Modelling of the patient, e.g. for ligaments or bones

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Endoscopes (AREA)

Abstract

The present application relates to a collision detection method, apparatus, device, readable storage medium, and program product. The method comprises the following steps: acquiring a current medical image; acquiring position information of a target part of a medical instrument; acquiring a standard medical image obtained by preprocessing, the standard medical image carrying the position information of a target object; converting the position information of the target object in the standard medical image into a target medical image according to a first matching relationship between the current medical image and the standard medical image, and converting the position information of the target part of the medical instrument into the target medical image according to a second matching relationship between the current medical image and the position information of the target part of the medical instrument; and performing collision detection according to the position information of the target object and of the target part of the medical instrument in the target medical image. This method improves detection accuracy and makes collision detection more intelligent.

Description

Collision detection method, apparatus, device, readable storage medium, and program product
Technical Field
The present application relates to the field of intelligent medical technology, and in particular, to a collision detection method, apparatus, device, readable storage medium, and program product.
Background
With the continuous development of robotics, more and more robots are being applied in the surgical field. Although traditional rigid-mechanism robots are widely used in various operations in the medical field, their adaptability, safety and flexibility are relatively poor, and they can easily damage internal tissues of the human body during surgery. Collision detection technology is widely applied in the robotics field, and existing collision detection techniques fall mainly into two categories:
1) Bounding-box collision detection between mechanical arms, implemented mainly by establishing bounding boxes for the mechanical arms and detecting collisions between the bounding boxes.
2) External-force collision detection using six-dimensional moment sensing or joint moment sensing at the robot end. In this mode, the external force received at the robot end is calculated from the moment information fed back by the moment sensors, so the change in external force produced when a collision occurs can be detected.
However, OBB bounding-box collision detection is mainly aimed at collisions between rigid bodies with regular convex-polyhedron shapes and is not suitable for detecting collisions between an instrument tip and soft tissue. Six-dimensional moment sensors and joint moment sensors are bulky, generally difficult to install at the tip of an instrument, commonly used to detect collisions between a collaborative robot and the external environment, and likewise unsuitable for detecting collisions in narrow channels inside the body.
In summary, existing collision detection techniques are not effective for detecting collisions between miniature surgical instruments and soft tissue, particularly blood vessels, nerves and sensitive tissue that lie hidden outside the endoscopic image.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a collision detection method, apparatus, device, readable storage medium, and program product that expand the application range of collision detection and can detect collisions between a miniature surgical instrument and soft tissue.
In a first aspect, the present application provides a collision detection method, the method comprising:
acquiring a current medical image;
acquiring position information of a target part of the medical instrument;
acquiring a standard medical image obtained by preprocessing, wherein the standard medical image carries the position information of a target object;
converting the position information of a target object in the standard medical image into a target medical image according to a first matching relationship between the current medical image and the standard medical image, and converting the position information of a target part of the medical instrument into the target medical image according to a second matching relationship between the current medical image and the position information of the target part of the medical instrument;
and performing collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
In a second aspect, the present application provides a collision detection system comprising a processor, a medical mirror, and a medical instrument. The medical instrument is provided with a sensor for acquiring position information of a target site of the medical instrument and transmitting it to the processor; the medical mirror is used to acquire a current medical image and send it to the processor; and the processor is configured to perform the collision detection method described above.
In a third aspect, the present application provides a collision detection apparatus, the apparatus comprising:
the current medical image acquisition module is used for acquiring a current medical image;
the position information acquisition module is used for acquiring the position information of the target part of the medical instrument;
the standard medical image acquisition module is used for acquiring a standard medical image obtained by preprocessing, wherein the standard medical image carries the position information of a target object;
the matching relation calculation module is used for calculating a first matching relation between the current medical image and the standard medical image and calculating a second matching relation between the current medical image and the position information of the target part of the medical instrument;
the conversion module is used for converting the position information of the target object in the standard medical image into a target medical image according to the first matching relation and converting the position information of the target part of the medical instrument into the target medical image according to the second matching relation;
and the collision detection module is used for carrying out collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
In a fourth aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method as referred to in any one of the embodiments described above when the computer program is executed by the processor.
In a fifth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method as referred to in any of the embodiments described above.
In a sixth aspect, the application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method as referred to in any of the embodiments described above.
According to the collision detection method, apparatus, device, readable storage medium, and program product, the standard medical image obtained by preprocessing carries the position information of the target object. The position information of the target object in the standard medical image is therefore converted into the target medical image according to the first matching relation between the current medical image and the standard medical image, and the position information of the target part of the medical instrument is converted into the target medical image according to the second matching relation between the current medical image and the position information of the target part of the medical instrument, so that collision detection is performed in the target medical image. This expands the application range of collision detection, makes it more intelligent, and enables detection of collisions between a miniature surgical instrument and soft tissue.
Drawings
FIG. 1 is a system diagram of a collision detection system in one embodiment;
FIG. 2 is a schematic view of a use scenario of a collision detection system according to one embodiment of the present application;
FIG. 3 is a schematic diagram of the configuration of a physician console according to one embodiment of the application;
FIG. 4 is a schematic view of an operating table cart according to an embodiment of the application;
FIG. 5 is a schematic view of the structure of a surgical instrument according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a master-slave operation control of a robot according to the present application;
FIG. 7 is a flow chart of a collision detection method in one embodiment;
FIG. 8 is a schematic view of a scene of acquisition of a standard medical image in one embodiment;
FIG. 9 is a flow chart of fusing an endoscopic image (the current medical image) with a preoperative medical image (the standard medical image) in one embodiment;
FIG. 10 is a schematic illustration of a collision of a medical instrument with a target object in one embodiment;
FIG. 11 is a schematic illustration of the spatial location of a medical instrument in a target surgical field according to one embodiment;
FIG. 12 is a schematic diagram of a method of ray-collision detection in one embodiment;
FIG. 13 is a flow chart of a method of ray crash detection in one embodiment;
FIG. 14 is a schematic diagram of a convex polygon collision detection method in one embodiment;
FIG. 15 is a flow chart of a convex polygon collision detection method in one embodiment;
FIG. 16 is a schematic diagram of a linear projection collision detection method in one embodiment;
FIG. 17 is a schematic view of a collision safety visual alert in one embodiment;
FIG. 18 is a schematic diagram of a crash-safe audible alert in one embodiment;
FIG. 19 is a block diagram showing the structure of a collision detecting device in one embodiment;
fig. 20 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The collision detection method provided by the embodiments of the application can be applied to the collision detection system shown in fig. 1. The collision detection system comprises a processor, a medical mirror and a medical instrument; a sensor arranged on the medical instrument collects the position information of the target part of the medical instrument and sends it to the processor; the medical mirror acquires the current medical image and sends it to the processor; and the processor performs the collision detection method described hereinafter.
Specifically, please refer to fig. 2-6, wherein fig. 2 is a schematic diagram illustrating a usage scenario of a collision detection system according to an embodiment of the present application; FIG. 3 is a schematic diagram of the configuration of a physician console according to one embodiment of the application;
FIG. 4 is a schematic view of an operating table cart according to an embodiment of the application; FIG. 5 is a schematic view of the structure of a surgical instrument according to one embodiment of the present application; fig. 6 is a schematic diagram of a master-slave operation control of a robot according to the present application.
In conjunction with fig. 2, the collision detection system is used in a surgical application scenario and physically comprises a plurality of trolleys, including a doctor console 100, a surgical trolley 200, an image trolley 300, and a tool trolley 400, with a master manipulator disposed on the doctor console 100. The surgical trolley 200 has at least two robot arms 201, on which medical mirrors (e.g., endoscopes) and medical instruments (e.g., surgical operation instruments) can be mounted respectively. An operator (e.g., a surgeon) performs minimally invasive surgical treatment on a patient on the hospital bed by remotely manipulating the master manipulator at the doctor console 100. The master manipulator, the mechanical arm 201 and the surgical instrument form a master-slave control relationship: the robot arm 201 and the surgical instrument move according to the movement of the master manipulator, i.e., according to the motion of the operator's hands during surgery. Further, the master manipulator receives information on the forces exerted by human tissues and organs on the surgical instrument and feeds this information back to the operator's hands, so that the operator perceives the surgical operation more intuitively.
In one embodiment, the system further comprises a display device and/or an augmented reality device in communication with the processor, used for displaying the current medical image and/or the target medical image sent by the processor. The doctor console 100 has a display device communicatively connected to the medical scope mounted on the arm of the surgical trolley 200, capable of receiving and displaying the current medical image acquired by the endoscope and/or the fused target medical image. The operator controls the movement of the manipulator and the surgical instrument through the master manipulator according to the current medical image and/or target medical image displayed on the doctor console 100. The endoscope and the surgical instrument each enter the surgical site through a wound in the patient's body.
Optionally, in other embodiments, the collision detection system further includes auxiliary components used during surgery, such as a ventilator and an anesthesia machine 500. Those skilled in the art may select and configure these auxiliary components according to the needs of the surgery; they are not described in detail here.
In one embodiment, shown in connection with fig. 3, the physician console includes an adjustment component, manipulation arms, a trolley component and an image component. The two manipulation arms 120 detect the operator's hand motion information through the control handles at their ends, serving as the motion control input of the whole system. The trolley component 130 is a base bracket for mounting the other components and has movable casters that can be moved or fixed as required; it is also provided with a foot switch for detecting switching control signals issued by the operator. The adjustment component 110 can electrically adjust the positions of the manipulation arms, the image component, the operator armrest and so on, i.e., a human-machine parameter adjustment function. In operation, the operator sits in front of the doctor console, outside the sterile field, and controls the surgical instruments and the laparoscope by manipulating the control handles at the ends of the manipulation arms. The operator observes the returned intracavity images through the image component, and the motion of both hands controls the patient-side manipulator arms and instruments to complete various operations, thereby performing the surgery on the patient; at the same time, the operator can complete related operation inputs such as electric cutting and electric coagulation through the foot switch.
In one embodiment, as shown in connection with fig. 4, the surgical trolley includes an adjustment arm 210, a tool arm 201, a surgical instrument 204 and a medical scope 205. The surgical instrument 204 and the medical scope 205 are mounted on the distal end of the tool arm and can be inserted into the body through puncture holes in the body surface. The image information inside the body is transmitted through the medical mirror to the display screen by the image trolley 300. The image trolley of the surgical robot system mainly comprises the medical mirror and a display device: the medical mirror acquires and processes images of the operation space inside the patient, and the display device displays the acquired and processed images of the medical mirror in real time. In operation, the operator can drive the surgical instrument and the medical mirror by operating the control handles at the ends of the manipulation arms, thereby completing the corresponding operations.
In one embodiment, as shown in connection with fig. 5, a surgical instrument includes a transmission mechanism, an instrument rod and an operating mechanism. The surgical instrument can perform telescopic movement along the axial direction of the instrument rod and rotational movement around the instrument rod axis (the rotation joint); the operating mechanism can perform pitching, yawing and opening-closing movements (the pitch joint, yaw joint and open-close joint), so as to realize various applications in surgery.
In one embodiment, as shown in connection with fig. 6, during normal surgical operation the operator controls the position and posture of the instrument tip by master-slave teleoperation under the guidance of the endoscopic images. The position of the instrument tip comprises translational movement along the three directions X, Y, Z, and the posture comprises pitching, yawing and rotation of the instrument tip.
The present application provides a collision detection method, described by taking the processor in fig. 1 as the executing body. As shown in fig. 7, the method comprises the following steps:
s702: a current medical image is acquired.
Specifically, the current medical image is a real-time medical image acquired by the image sensor of the medical mirror after the medical mirror has been advanced to the target surgical site.
S704: positional information of a target site of a medical instrument is acquired.
Specifically, the medical instrument refers to a surgical operation instrument, and its target site may refer to the end of the surgical operation instrument. The position information of the target site of the medical instrument can be calculated through kinematics. For example, in master-slave operation, based on the Cartesian end position and velocity of the master, the commanded Cartesian position and velocity of the slave instrument in the robot coordinate system are calculated.
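As an illustration of this kinematic mapping, the following is a minimal sketch of a motion-scaled master-slave Cartesian mapping. The rotation R_ms, the scaling factor and all names are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def slave_command(master_pos, master_vel, R_ms, scale=0.4):
    """Map master-handle Cartesian motion to a slave instrument command.

    R_ms is an assumed 3x3 rotation aligning the master frame with the
    robot base frame; scale is an assumed motion-scaling factor.
    """
    cmd_pos = scale * (R_ms @ master_pos)  # commanded Cartesian position
    cmd_vel = scale * (R_ms @ master_vel)  # commanded Cartesian velocity
    return cmd_pos, cmd_vel

# illustrative call: identity frame alignment, 5 mm master displacement
pos, vel = slave_command(np.array([0.005, 0.0, 0.0]),
                         np.array([0.01, 0.0, 0.0]), np.eye(3))
```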
S706: and acquiring a standard medical image obtained by preprocessing, wherein the standard medical image carries the position information of the target object.
Specifically, the standard medical image is obtained in advance. Referring to fig. 8, this embodiment provides a surgical-space modeling method based on tomographic scanning such as CT and MRI: the image information obtained by scanning is used to complete tissue modeling of the operation space through an image processing algorithm. The key tissues requiring special attention can be determined preoperatively from the intra-abdominal tissue modeling information, and marking points are established for three-dimensional reconstruction of the surgical scene.
S708: and converting the position information of the target part of the medical instrument into the target medical image according to the second matching relation between the current medical image and the position information of the target part of the medical instrument.
Specifically, the first matching relationship is the matching relationship between the current medical image and the standard medical image; it is completed mainly by fitting the positions of the key tissue marking points in the current medical image. The second matching relationship is the matching between the current medical image and the position information of the target part of the medical instrument, and is established mainly through robot kinematics: from the motion position information of the instrument arm and the endoscope arm, including the position and velocity of each joint, the kinematic mapping of the robot is used to calculate the second matching relationship between the position information of the target part of the medical instrument and the current medical image.
In one embodiment, before converting the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, the method further includes: identifying an object to be processed in the current medical image; and carrying out matching fusion on the target object in the standard medical image and the object to be processed to obtain a first matching relation, wherein the first matching relation is used for converting the position information of the target object in the standard medical image into the target medical image.
In one embodiment, before converting the position information of the target site of the medical instrument into the target medical image according to the second matching relationship between the current medical image and the position information of the target site of the medical instrument, the method further includes: and according to the motion information of the medical mirror for acquiring the current medical image and the motion information of the target part of the medical instrument, performing kinematic mapping calculation to obtain a second matching relationship, wherein the second matching relationship is used for converting the position information of the target part of the medical instrument into the target medical image.
Specifically, referring to fig. 9, in practical application the current medical image is an endoscope image, the standard medical image is a preoperative medical image, and the position information of the target portion of the medical instrument is the position information of the distal end of the medical instrument. The first matching relationship is obtained as follows: a preoperative medical image of the operation-space tissue is first acquired, tissue position information is then identified and used to reconstruct an intra-abdominal three-dimensional model, and this model is fused with the intraoperative 3D endoscope image to obtain the first matching relationship.
The second matching relationship is obtained as follows: first, the motion position information of the mechanical arm and the endoscope arm is acquired; the instrument tip position is then mapped through the forward kinematics of the robot to obtain the position information of the instrument in the endoscope image coordinate system, thereby realizing the fusion of the 3D endoscope image and the robot coordinate system.
In particular, the target medical image may be regarded as a fused image, obtained by fusing the target object in the standard medical image and the medical instrument into the current medical image, or by fusing the target object and the medical instrument in the standard medical image into a new space; the space in which the target medical image is located is not limited here.
Specifically, after the first matching relationship and the second matching relationship are obtained, according to the first matching relationship and the second matching relationship, both the position information of the target object in the standard medical image and the position information of the target part of the medical instrument are converted into the same space, such as a target medical image space, so that the position information of the target object in the standard medical image and the position information of the target part of the medical instrument are comparable.
In this embodiment, the intra-abdominal three-dimensional model is built from the preoperative medical image and the spatial positions of the key tissues are marked; the positions of the key tissue marking points are then fitted in the 3D endoscope image, completing the registration and fusion of the two coordinate systems, i.e., the fusion of the preoperative medical image with the intraoperative 3D endoscope image. Next, from the motion position information of the instrument arm and the endoscope arm, including the position and velocity of each joint, the kinematic mapping of the robot is used to calculate the position of the instrument in the endoscope image coordinate system, realizing the fusion of the intraoperative 3D endoscope image with the robot coordinate system. Based on these two steps, the fusion of the preoperative medical image, the intraoperative 3D endoscope image and the robot coordinate system is realized, and the three-dimensional spatial position of the surgical instrument in the operation area is calculated in real time.
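Viewed concretely, each matching relationship can be modelled as a homogeneous transform into the common target-image frame. The following is a minimal sketch under that assumption; the matrices T1 and T2 stand in for the registration and kinematics results, and the placeholder values are illustrative only.

```python
import numpy as np

def to_frame(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    return (T @ np.append(p, 1.0))[:3]

# placeholder transforms: in practice T1 comes from image registration
# (first matching relationship) and T2 from robot kinematics (second)
T1 = np.eye(4)                           # standard image -> target image
T2 = np.eye(4); T2[:3, 3] = [0., 0., 5.] # robot base -> target image

tissue_in_target = to_frame(T1, np.array([10.0, 20.0, 30.0]))
tip_in_target = to_frame(T2, np.array([12.0, 21.0, 24.0]))
```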
S710: and performing collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
Specifically, in the target medical image, the distance between the position of the target object and the position of the target part of the medical instrument is calculated to determine whether a collision occurs; if not, detection continues in the next cycle.
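A minimal sketch of this periodic distance check, assuming the marked tissue is available as a point set in the target-image frame and assuming an illustrative threshold in millimetres:

```python
import numpy as np

def check_collision(tissue_pts, tip_pos, threshold_mm=5.0):
    """Return True when the instrument tip is within threshold_mm of any
    marked tissue point (both expressed in the target-image frame)."""
    dists = np.linalg.norm(tissue_pts - tip_pos, axis=1)
    return dists.min() <= threshold_mm

# one detection cycle; in the system this would run periodically
tissue_pts = np.array([[10.0, 20.0, 30.0], [11.0, 19.0, 28.0]])
if check_collision(tissue_pts, np.array([12.0, 21.0, 29.0])):
    print("collision alarm")  # otherwise continue with the next cycle
```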
Specifically, in conjunction with fig. 10 and 11, fig. 10 is a schematic view of a collision of a medical instrument with a target object in one embodiment, and fig. 11 is a schematic view of the spatial position of a medical instrument in a target operation area in one embodiment. During surgery the doctor views the real-time scene of the surgical field through the endoscopic image. Within the endoscope field of view, the doctor can see the surgical focus tissue (the prostate, as an example) as well as some of the sensitive tissue and vascular tissue, but other sensitive tissues and vascular-nerve tissues lie outside the endoscope image field of view; if the instrument moves beyond the endoscope field of view, such sensitive tissue is easily injured by accident. By fusing the intra-abdominal three-dimensional model built from the preoperative medical image, the endoscopic visual image and the instrument position, the relative spatial relationship between the surgical instrument and the focus and sensitive tissue within the field of view can be determined, as can the positional relationship between the surgical instrument and the sensitive tissues and peripheral nerves and blood vessels that the doctor cannot see through the endoscope.
In connection with fig. 10, in this embodiment collisions that occur when the surgical instrument moves beyond the field of view of the endoscope and contacts invisible tissue and vascular nerves can be effectively recognized. Based on the spatial position of the surgical instrument in the surgical scene, collisions of the surgical instrument with sensitive tissues, vascular nerves and the like are detected in real time. When the spatial position of the instrument overlaps the spatial position of the tissue, a collision alarm is issued and the position of the collision point in the abdominal cavity is fed back.
According to the above collision detection method, the standard medical image obtained by preprocessing carries the position information of the target object, so the first matching relationship between the current medical image and the standard medical image and the second matching relationship between the current medical image and the position information of the target part of the medical instrument are calculated; the position information of the target object in the standard medical image is converted into the target medical image according to the first matching relationship, and the position information of the target part of the medical instrument is converted into the target medical image according to the second matching relationship, so that collision detection is performed in the target medical image. This improves detection accuracy and makes detection more intelligent.
In one embodiment, collision detection is performed according to position information of a target object in a target medical image and position information of a target site of a medical instrument, including at least one of the following: performing collision detection based on a ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument; performing collision detection based on a convex polygon collision detection method according to the position information of a target object in the target medical image and the position information of a target part of the medical instrument; and performing collision detection based on a linear projection collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
To explain the above collision detection in detail, the three modes are described below:
Performing collision detection based on a ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument includes: determining an origin according to the position information of the target part of the medical instrument in the target medical image, and emitting a ray from that origin in the direction of motion of the medical instrument; judging whether the ray intersects the target object according to the ray and the position information of the target object in the target medical image; and, when the ray intersects the target object and the distance between the intersection point and the origin meets a preset condition, determining that the target object collides with the medical instrument.
Referring to fig. 12 and 13, fig. 12 is a schematic diagram of a ray collision detection method in an embodiment, and fig. 13 is a flowchart of a ray collision detection method in an embodiment. As shown in fig. 12, a position is selected as the origin, a ray is emitted from the origin in a given direction, the intersection of the ray path with the object surface is calculated, and if an intersection point exists, the distance between the intersection point and the origin is calculated. Specifically, in this embodiment, with the instrument tip as the origin, a ray is emitted in the direction of movement of the instrument tip; the intersection point of the ray with the tissue surface is then calculated and the distance of the intersection point from the origin is given. If the distance is positive or there is no intersection point, the instrument and the tissue do not collide. The distance between two points in three-dimensional space is calculated by the formula
$$d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2},$$
where $(x_1, y_1, z_1)$ are the coordinates of point 1 and $(x_2, y_2, z_2)$ are the coordinates of point 2.
Preferably, the intersection of the ray with the object can be calculated by establishing parametric equations of the line and the geometric body and solving the system of simultaneous equations: if the system has no solution, there is no intersection point, and if it has solutions, each solution corresponds to the coordinates of an intersection point of the ray with the geometric body. The sphere parametric equation in three-dimensional space is
$$F(X) = \|X - C\|^2 = R^2,$$
and the ray parametric equation in three-dimensional space is
$$X(t) = P + t\,\hat{d}, \quad t \ge 0,$$
where $X$ are the coordinates of a point on the sphere, $C$ are the coordinates of the sphere center, and $R$ is the radius of the sphere; $P$ are the coordinates of the ray origin, $t$ is the parameter, and $\hat{d}$ is a unit vector along the ray direction.
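A minimal sketch of this ray-sphere intersection, solving the simultaneous parametric equations as a quadratic in t; a negative discriminant (no intersection) or no non-negative root means no collision under the criterion above. Names and values are illustrative.

```python
import numpy as np

def ray_sphere_distance(P, d, C, R):
    """Smallest non-negative t with ||P + t*d - C||^2 = R^2, else None.

    Substituting the ray equation into the sphere equation gives
    t^2 + 2*b*t + c = 0 with m = P - C, b = m.d, c = m.m - R^2.
    """
    d = d / np.linalg.norm(d)      # unit vector along the ray
    m = P - C
    b, c = np.dot(m, d), np.dot(m, m) - R * R
    disc = b * b - c               # quadratic discriminant
    if disc < 0:
        return None                # ray misses the sphere: no collision
    t = -b - np.sqrt(disc)         # nearer root
    if t < 0:
        t = -b + np.sqrt(disc)     # origin inside the sphere
    return t if t >= 0 else None   # distance from origin to hit point

print(ray_sphere_distance(np.array([0., 0., 0.]),       # instrument tip
                          np.array([1., 0., 0.]),       # motion direction
                          np.array([5., 0., 0.]), 1.0)) # prints 4.0
```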
In one embodiment, according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument, collision detection is performed based on a convex polygon collision detection method, including: generating a first geometry from the position information of the target site of the medical instrument in the target medical image; generating a second geometry according to the position information of the target object in the target medical image; calculating the Minkowski difference of the first geometry and the second geometry; and judging whether the target object collides with the medical instrument according to the Minkowski difference.
Referring to fig. 14 and 15, fig. 14 is a schematic diagram of a convex polygon collision detection method in an embodiment, and fig. 15 is a flowchart of a convex polygon collision detection method in an embodiment. In this embodiment, whether two geometric bodies collide is determined by computing the Minkowski difference of the two bodies to be collision-detected and checking whether the difference contains the origin (0, 0). In the figure, S1 and S2 have overlapping portions and thus collide, so the geometry S3 generated by their Minkowski difference includes the origin (0, 0). The Minkowski difference is computed by taking the differences between the points of one geometry and all points of the other geometry.
Specifically, the instrument tip cylinder is taken as one geometry and the sensitive tissue as the other; the Minkowski difference of the two geometries is then calculated. A collision occurs if the difference contains the origin, and no collision occurs otherwise. In the figure, the Minkowski difference of the S1 and S2 geometries is calculated as follows: S1 = {a1, a2, a3, a4}; S2 = {b1, b2, b3, b4}; S3 = {(a1-b1), (a1-b2), (a1-b3), (a1-b4), (a2-b1), (a2-b2), (a2-b3), (a2-b4), (a3-b1), (a3-b2), (a3-b3), (a3-b4), (a4-b1), (a4-b2), (a4-b3), (a4-b4)}, where a1, a2, a3, a4 are the vertices of geometry S1 and b1, b2, b3, b4 are the vertices of geometry S2.
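A minimal sketch of this test for convex shapes, assuming SciPy is available: the pairwise vertex differences are formed and the origin is tested against the convex hull of the difference via a Delaunay point-location query. The square coordinates are illustrative stand-ins for S1 and S2, not taken from the figure.

```python
import numpy as np
from scipy.spatial import Delaunay

def minkowski_difference(A, B):
    """All pairwise vertex differences a - b of two convex vertex sets."""
    return np.array([a - b for a in A for b in B], dtype=float)

def collides(A, B):
    """Convex shapes intersect iff the origin lies inside the convex hull
    of their Minkowski difference."""
    diff = minkowski_difference(A, B)
    origin = np.zeros((1, diff.shape[1]))
    return bool(Delaunay(diff).find_simplex(origin)[0] >= 0)

# two overlapping squares (illustrative stand-ins for S1 and S2)
S1 = np.array([(0, 0), (2, 0), (2, 2), (0, 2)])
S2 = np.array([(1, 1), (3, 1), (3, 3), (1, 3)])
print(collides(S1, S2))  # True: the difference contains the origin
```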
In one embodiment, collision detection is performed based on a linear projection collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument, including: generating a first geometry from the position information of the target site of the medical instrument in the target medical image; generating a second geometry according to the position information of the target object in the target medical image; calculating whether the projections of the first geometry and the second geometry overlap, taking the movement direction of the medical instrument as the projection direction; and, when the projections overlap, determining that the target object collides with the medical instrument.
Referring to fig. 16, fig. 16 is a schematic diagram of a linear projection collision detection method in an embodiment. In this embodiment, whether two geometric bodies to be detected collide along the direction of a projection line is determined by calculating whether their projections on that line overlap. For surgical instruments and tissue, the instrument tip is taken as one geometric body and the sensitive tissue as the other; the direction of the instrument's commanded velocity is then taken as the direction of the projection line, and it is calculated whether the projections of the two geometric bodies overlap. If they overlap, the instrument will collide with the tissue if it continues along the current movement trend.
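A minimal sketch of this single-axis projection test, projecting both vertex sets onto the commanded-velocity direction and checking whether the resulting one-dimensional intervals overlap; names and coordinates are illustrative.

```python
import numpy as np

def projections_overlap(instrument_pts, tissue_pts, motion_dir):
    """Project both vertex sets onto the instrument's motion direction and
    report whether the resulting 1-D intervals overlap."""
    d = np.asarray(motion_dir, float)
    d /= np.linalg.norm(d)
    a = np.asarray(instrument_pts, float) @ d
    b = np.asarray(tissue_pts, float) @ d
    return a.min() <= b.max() and b.min() <= a.max()

# intervals [0, 1] and [0.5, 2] along the x-axis overlap -> True
print(projections_overlap([(0, 0, 0), (1, 0, 0)],
                          [(0.5, 0, 0), (2, 0, 0)], (1, 0, 0)))
```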
In one embodiment, performing collision detection according to the position information of the target object in the target medical image and the position information of the target site of the medical instrument includes: outputting a perceivable collision alarm when the target object collides with the medical instrument, wherein the collision alarm comprises at least one of a display reminder and a sound reminder.
Specifically, referring to fig. 17 and 18, fig. 17 is a schematic view of a collision safety visual alert in one embodiment, and fig. 18 is a schematic view of a collision safety audible alert in one embodiment.
As shown in fig. 17, in the in-vivo image display provided to the doctor and the assistant, the doctor is prompted by a change in the background color of the image when the surgical instrument collides with sensitive tissue, and the specific collision location between the instrument and the tissue is displayed with a marker.
As shown in fig. 18, this embodiment provides a collision safety audible warning method: whether a collision occurs is detected at the surgical instrument end; if so, the master end of the robot emits a buzzing sound to alert the doctor that the surgical instrument has collided with sensitive tissue, and if not, no audible prompt is given.
In one embodiment, before acquiring the standard medical image obtained by preprocessing, the method further comprises: acquiring an initial medical image by a medical image scanning device; and identifying the target object in the initial medical image, marking the position of the target object to obtain the position information of the target object, and obtaining a standard medical image according to the position information of the target object and the initial medical image.
With continued reference to fig. 8, the image information obtained by scanning with tomographic imaging techniques such as CT and MRI is processed by an image processing algorithm to complete tissue modeling of the operation space. The key tissues requiring special attention can be determined preoperatively from the intra-abdominal tissue modeling information, and marking points are established for three-dimensional reconstruction of the surgical scene; that is, the tissues to be marked are determined in the preoperative image and marked in the three-dimensional model, so that a standard medical image marking the target object, including sensitive tissue, is obtained.
In one embodiment, the collision detection method further includes: the current medical image and/or the target medical image is displayed by a display device and/or an augmented reality apparatus.
In particular, the display device may be the display device on the doctor console, through which the current medical image and/or the target medical image can be displayed. In an alternative embodiment, the system may also incorporate an augmented reality device, through which the current medical image and/or the target medical image is displayed.
In this embodiment, an intra-abdominal three-dimensional digital scene is reconstructed from preoperative CT/MR images and the tissues in the operation area are marked. Hidden nerves, blood vessels and sensitive tissues are fused with the preoperative medical image, the endoscopic visual image and the instrument position in the robot coordinate system, and the spatial position and posture of the surgical instrument in the three-dimensional surgical digital scene are determined, so that the three-dimensional spatial relationship among the instrument, the 3D endoscope and the human tissues is obtained and intuitively displayed in the three-dimensional surgical scene. When the surgical instrument collides with sensitive tissue in the operation area and there is a risk of puncturing it, especially for sensitive nerves, blood vessels and tissues hidden in regions invisible to the endoscopic image, the collision risk is indicated through the virtual surgical visual image (including AR/VR); the collision and its depth are indicated by a progressive color change of the visual image and by sound, and the collision force between the instrument and the tissue is fed back to the master operating end, ensuring the safety of the operation.
Specifically, as shown in fig. 18, the master manipulator, the mechanical arm 201 and the surgical instrument form a master-slave control relationship, forming a master-slave mapping; the intra-abdominal three-dimensional model and the matching relationship between the preoperative medical image and the endoscopic image are then calculated, so that the three are fused to judge whether the distance between the current position of the instrument tip and the sensitive tissue is smaller than a threshold. If so, the instrument collides with the tissue, and an alarm is raised according to the flow described above. If the distance is not smaller than the threshold, the instrument and the tissue do not collide, and detection continues in the next cycle.
In one embodiment, the method further comprises: acquiring a visual space coordinate system of the augmented reality equipment; calculating a third matching relationship between a visual space coordinate system of the augmented reality device and an image space coordinate system of the current medical image; displaying the current medical image in a visual space of the augmented reality device according to the third matching relationship; and/or displaying the target medical image in a visual space of the augmented reality device according to the first, second and third matching relationships.
In order to display the current medical image and/or the target medical image in the visual space, a third matching relationship between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image is calculated; the current medical image is then displayed in the visual space of the augmented reality device according to the third matching relationship, and/or the target medical image is displayed in the visual space of the augmented reality device according to the first, second and third matching relationships.
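Treating the third matching relationship, like the first two, as a homogeneous transform gives the following minimal sketch; T3 and its composition with T1/T2 are assumptions for illustration, not the patent's method.

```python
import numpy as np

# T3: current-image space -> AR visual space (third matching relationship);
# identity used here as a placeholder for a real calibration result
T3 = np.eye(4)

def to_ar_space(p_img):
    """Map a point from the current-image frame into the AR visual space."""
    return (T3 @ np.append(p_img, 1.0))[:3]

# for target-medical-image content, first bring points into the image frame
# with T1 (tissue) or T2 (instrument tip) from the earlier sketch, then apply T3
print(to_ar_space(np.array([10.0, 20.0, 30.0])))
```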
Optionally, when the target object collides with the medical instrument, the target medical image is displayed, so that the fused image can show the doctor the position where the collision occurred; when the target object and the medical instrument do not collide, the current medical image is displayed.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; nor is their order necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a collision detection device for realizing the above related collision detection method. The implementation of the solution provided by the device is similar to that described in the above method, so the specific limitation of one or more embodiments of the collision detection device provided below may be referred to above for the limitation of the collision detection method, which is not repeated here.
In one embodiment, as shown in fig. 19, there is provided a collision detecting apparatus including: a current medical image acquisition module 1901, a location information acquisition module 1902, a standard medical image acquisition module 1903, a conversion module 1904, and a collision detection module 1905, wherein:
a current medical image acquisition module 1901 for acquiring a current medical image;
a position information acquisition module 1902, configured to acquire position information of a target site of a medical instrument;
a standard medical image acquisition module 1903, configured to acquire a standard medical image obtained by preprocessing, where the standard medical image carries position information of a target object;
a conversion module 1904, configured to convert the position information of the target object in the standard medical image into the target medical image according to the first matching relationship between the current medical image and the standard medical image, and convert the position information of the target part of the medical instrument into the target medical image according to the second matching relationship between the current medical image and the position information of the target part of the medical instrument;
a collision detection module 1905, configured to perform collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical apparatus.
In one embodiment, the collision detection module 1905 is configured to perform collision detection according to at least one of: performing collision detection based on a ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument; performing collision detection based on a convex polygon collision detection method according to the position information of a target object in the target medical image and the position information of a target part of the medical instrument; and performing collision detection based on a linear projection collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
In one embodiment, the collision detection module 1905 includes:
the ray generation unit is used for generating an origin according to the position information of the target part of the medical instrument in the target medical image, and emitting rays to the movement direction of the medical instrument by taking the origin as a starting point;
the intersection judging unit is used for judging whether the ray intersects with the target object or not according to the ray and the position information of the target object in the target medical image;
and the first collision result output unit is used for judging that the target object collides with the medical instrument when the ray intersects the target object and the distance between the intersection point and the origin meets the preset condition.
In one embodiment, the collision detection module 1905 includes:
a first geometry generating unit for generating a first geometry from position information of a target site of the medical instrument in the target medical image;
a second geometry generating unit for generating a second geometry according to the position information of the target object in the target medical image;
a minkowski difference calculating unit for calculating a minkowski difference of the first geometry and the second geometry;
and a second collision result output unit for judging whether the target object collides with the medical instrument according to the Minkowski difference.
In one embodiment, the collision detection module 1905 includes:
a third geometry generating unit for generating a first geometry from position information of a target site of the medical instrument in the target medical image;
a fourth geometry generating unit for generating a second geometry according to the position information of the target object in the target medical image;
an overlap judging unit for calculating whether the projected portions of the first geometric body and the second geometric body overlap, with the movement direction of the medical instrument as the projection direction;
and a third collision result output unit for determining that the target object collides with the medical instrument when overlapped.
In one embodiment, the apparatus further includes:
an alarm module for outputting a perceivable collision alarm when the target object collides with the medical instrument.
In one embodiment, the apparatus further includes:
the initial medical image acquisition module is used for acquiring initial medical images through the medical image scanning equipment;
and the standard medical image generation module is used for identifying the target object in the initial medical image, marking the position of the target object to obtain the position information of the target object, and obtaining the standard medical image according to the position information of the target object and the initial medical image.
In one embodiment, the apparatus further includes:
an identification unit, configured to identify the object to be processed in the current medical image;
and a first matching relation generating unit, configured to match and fuse the target object in the standard medical image with the object to be processed to obtain the first matching relation.
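One common way to realize such matching fusion is rigid point-set registration between corresponding landmarks of the target object and the identified object; the Kabsch/SVD sketch below, including its synthetic landmark correspondences, is an assumption of this illustration rather than the method fixed by the embodiment.

# Hedged sketch: rigid registration (Kabsch/SVD) as a first matching relation.
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of landmarks
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

standard_pts = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
rot_z90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
current_pts = standard_pts @ rot_z90.T + np.array([5., 2., 0.])  # synthetic match
R, t = rigid_registration(standard_pts, current_pts)
# R and t form the first matching relation: they map the target object's
# marked position from the standard image into the target medical image
print(np.round(R @ standard_pts[1] + t, 6))      # matches current_pts[1]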
In one embodiment, the apparatus includes:
and a second matching relation generating unit, configured to perform a kinematic mapping calculation according to the motion information of the medical mirror that acquires the current medical image and the motion information of the target part of the medical instrument, to obtain the second matching relation.
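Assuming the robot's forward kinematics yields the pose of the medical mirror (camera) and of the instrument target part in a common base frame, the second matching relation reduces to their relative transform; the 4x4 poses in this sketch are illustrative stand-ins for real kinematics output.

# Hedged sketch of the kinematic mapping via homogeneous transforms.
import numpy as np

def make_pose(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

T_base_cam = make_pose(np.eye(3), np.array([0.3, 0.0, 0.5]))  # mirror pose (FK)
T_base_tip = make_pose(np.eye(3), np.array([0.3, 0.1, 0.4]))  # tip pose (FK)

# second matching relation: instrument tip expressed in the camera frame,
# i.e. in the coordinate system of the current medical image
T_cam_tip = np.linalg.inv(T_base_cam) @ T_base_tip
tip_in_image = T_cam_tip @ np.array([0., 0., 0., 1.])         # tip origin
print(tip_in_image[:3])                                       # [ 0.   0.1 -0.1]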
In one embodiment, the apparatus further includes:
a first display module, configured to display the current medical image and/or the target medical image through a display device and/or an augmented reality device.
In one embodiment, the apparatus further includes:
a first coordinate system acquisition module, configured to acquire the visual space coordinate system of the augmented reality device;
a third matching relation generation module, configured to calculate a third matching relation between the visual space coordinate system of the augmented reality device and the image space coordinate system of the current medical image;
and a second display module, configured to display the current medical image in the visual space of the augmented reality device according to the third matching relation, and/or to display the target medical image in the visual space of the augmented reality device according to the first, second and third matching relations.
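Displaying the target medical image in the visual space then amounts to chaining the matching relations as homogeneous transforms; T1, T2 and T3 below are illustrative placeholders for the first, second and third matching relations, not values prescribed by the embodiment.

# Hedged sketch: chaining the three matching relations for AR display.
import numpy as np

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T1 = translation(5., 0., 0.)    # standard image  -> target image
T2 = translation(0., 1., 0.)    # instrument tip  -> target image
T3 = translation(0., 0., 2.)    # image space     -> AR visual space

target_obj_std = np.array([1., 1., 1., 1.])   # marked position, standard image
tip_local      = np.array([0., 0., 0., 1.])   # instrument target part

# target object and instrument tip, both expressed in AR visual space
obj_visual = T3 @ (T1 @ target_obj_std)
tip_visual = T3 @ (T2 @ tip_local)
print(obj_visual[:3], tip_visual[:3])         # [6. 1. 3.] [0. 1. 2.]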
In one embodiment, the apparatus further includes:
a third display module, configured to display the target medical image when the target object collides with the medical instrument, and to display the current medical image when the target object and the medical instrument do not collide.
Each of the modules in the above collision detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in FIG. 20. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a collision detection method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 20 is merely a block diagram of part of the structure related to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having a computer program stored therein which, when executed by the processor, implements the steps of the method embodiments described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), an external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features involves no contradiction, it should be considered to be within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application and are described in relative detail, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the application shall be subject to the appended claims.

Claims (13)

1. A collision detection apparatus, characterized in that the apparatus comprises:
the current medical image acquisition module is used for acquiring a current medical image;
the position information acquisition module is used for acquiring the position information of the target part of the medical instrument;
the standard medical image acquisition module is used for acquiring a standard medical image obtained by preprocessing, wherein the standard medical image carries the position information of a target object;
a conversion module, configured to convert the position information of the target object in the standard medical image into a target medical image according to a first matching relation between the current medical image and the standard medical image, and to convert the position information of the target part of the medical instrument into the target medical image according to a second matching relation between the current medical image and the position information of the target part of the medical instrument, including: acquiring motion information of a medical mirror for acquiring the current medical image and motion information of the target part of the medical instrument; and, according to forward kinematics of the robot, kinematically mapping the target part of the medical instrument to the current medical image to obtain the second matching relation, wherein the second matching relation is used for converting the position information of the target part of the medical instrument into the target medical image;
and the collision detection module is used for carrying out collision detection according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
2. The apparatus of claim 1, wherein the collision detection module is configured to perform collision detection using at least one of the following methods: performing collision detection based on a ray collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument; performing collision detection based on a convex polygon collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument; or performing collision detection based on a linear projection collision detection method according to the position information of the target object in the target medical image and the position information of the target part of the medical instrument.
3. The apparatus of claim 2, wherein the collision detection module comprises:
a ray generation unit, configured to determine an origin according to the position information of the target part of the medical instrument in the target medical image, and to emit a ray from the origin in the motion direction of the medical instrument;
an intersection judging unit, configured to judge, according to the ray and the position information of the target object in the target medical image, whether the ray intersects the target object;
and a first collision result output unit, configured to determine that the target object collides with the medical instrument when the ray intersects the target object and the distance between the intersection point and the origin satisfies a preset condition.
4. The apparatus of claim 2, wherein the collision detection module comprises:
a first geometry generating unit for generating a first geometry according to position information of a target site of the medical instrument in the target medical image;
a second geometry generating unit for generating a second geometry according to the position information of the target object in the target medical image;
a Minkowski difference calculation unit, configured to calculate the Minkowski difference of the first geometry and the second geometry;
and a second collision result output unit, configured to judge, according to the Minkowski difference, whether the target object collides with the medical instrument.
5. The apparatus of claim 2, wherein the collision detection module comprises:
a third geometry generating unit for generating a first geometry from position information of a target site of the medical instrument in the target medical image;
a fourth geometry generating unit for generating a second geometry according to the position information of the target object in the target medical image;
an overlap judging unit, configured to take the movement direction of the medical instrument as the projection direction and calculate whether the projections of the first geometry and the second geometry overlap;
and a third collision result output unit, configured to determine that the target object collides with the medical instrument when the projections overlap.
6. The apparatus of claim 1, wherein the apparatus further comprises:
an alarm module, configured to output a perceptible collision alarm when the target object collides with the medical instrument.
7. The apparatus according to any one of claims 1 to 6, further comprising:
the initial medical image acquisition module is used for acquiring initial medical images through the medical image scanning equipment;
the standard medical image generation module is used for identifying a target object in the initial medical image and marking the position of the target object to obtain the position information of the target object; and obtaining a standard medical image according to the position information of the target object and the initial medical image.
8. The apparatus according to any one of claims 1 to 6, further comprising:
an identification unit for identifying an object to be processed in the current medical image;
a first matching relation generation unit, configured to match and fuse the target object in the standard medical image with the object to be processed to obtain the first matching relation, wherein the first matching relation is used for converting the position information of the target object in the standard medical image into the target medical image.
9. The apparatus according to any one of claims 1 to 6, further comprising:
a first display module, configured to display the current medical image and/or the target medical image through a display device and/or an augmented reality device.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the first coordinate system acquisition module is used for acquiring a visual space coordinate system of the augmented reality equipment;
a third matching relation generating module, configured to calculate a third matching relation between a visual space coordinate system of the augmented reality device and an image space coordinate system of the current medical image;
a second display module for displaying the current medical image in a visual space of the augmented reality device according to the third matching relationship; and/or displaying the target medical image in a visual space of the augmented reality device according to the first, second and third matching relationships.
11. The apparatus of claim 9, wherein the apparatus further comprises:
a third display module, configured to display the target medical image when the target object collides with the medical instrument, and to display the current medical image when the target object and the medical instrument do not collide.
12. A collision detection system, characterized in that the system comprises a processor, a medical mirror, and a medical instrument, wherein a sensor is arranged on the medical instrument, the sensor being used for acquiring the position information of the target part of the medical instrument and sending the acquired position information of the target part of the medical instrument to the processor; the medical mirror is used for acquiring a current medical image and sending the current medical image to the processor; and the processor comprises the collision detection apparatus according to any one of claims 1 to 11.
13. The system of claim 12, further comprising a display device and/or an augmented reality device, the display device and/or the augmented reality device being in communication with the processor; the display device and/or the augmented reality device are used for displaying the current medical image and/or the target medical image sent by the processor.
CN202111667608.6A 2021-10-21 2021-12-30 Collision detection method, apparatus, device, readable storage medium, and program product Active CN114224512B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111667608.6A CN114224512B (en) 2021-12-30 2021-12-30 Collision detection method, apparatus, device, readable storage medium, and program product
PCT/CN2022/121629 WO2023065988A1 (en) 2021-10-21 2022-09-27 Collision detection method and apparatus, device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111667608.6A CN114224512B (en) 2021-12-30 2021-12-30 Collision detection method, apparatus, device, readable storage medium, and program product

Publications (2)

Publication Number Publication Date
CN114224512A CN114224512A (en) 2022-03-25
CN114224512B (en) 2023-09-19

Family

ID=80745157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667608.6A Active CN114224512B (en) 2021-10-21 2021-12-30 Collision detection method, apparatus, device, readable storage medium, and program product

Country Status (1)

Country Link
CN (1) CN114224512B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023065988A1 (en) * 2021-10-21 2023-04-27 上海微创医疗机器人(集团)股份有限公司 Collision detection method and apparatus, device, and readable storage medium
CN116077182B (en) * 2022-12-23 2024-05-28 北京纳通医用机器人科技有限公司 Medical surgical robot control method, device, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109464194A (en) * 2018-12-29 2019-03-15 上海联影医疗科技有限公司 Display methods, device, medical supply and the computer storage medium of medical image
WO2019164275A1 (en) * 2018-02-20 2019-08-29 (주)휴톰 Method and device for recognizing position of surgical instrument and camera
CN110806581A (en) * 2019-10-21 2020-02-18 边缘智能研究院南京有限公司 Medical cart anti-collision detection method, device and system
CN111772792A (en) * 2020-08-05 2020-10-16 山东省肿瘤防治研究院(山东省肿瘤医院) Endoscopic surgery navigation method, system and readable storage medium based on augmented reality and deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9289267B2 (en) * 2005-06-14 2016-03-22 Siemens Medical Solutions Usa, Inc. Method and apparatus for minimally invasive surgery using endoscopes
DE102015200355B3 (en) * 2015-01-02 2016-01-28 Siemens Aktiengesellschaft A medical robotic device with collision detection and method for collision detection of a medical robotic device
EP3412242A1 (en) * 2017-06-09 2018-12-12 Siemens Healthcare GmbH Dispensing of position information relating to a medical instrument

Also Published As

Publication number Publication date
CN114224512A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
US11638999B2 (en) Synthetic representation of a surgical robot
US20230363833A1 (en) Methods And Systems For Robot-Assisted Surgery
US11896318B2 (en) Methods and systems for controlling a surgical robot
US10064682B2 (en) Collision avoidance during controlled movement of image capturing device and manipulatable device movable arms
CN114224512B (en) Collision detection method, apparatus, device, readable storage medium, and program product
JP6083103B2 (en) Image complementation system for image occlusion area, image processing apparatus and program thereof
US20200169673A1 (en) Synthesizing spatially-aware transitions between multiple camera viewpoints during minimally invasive surgery
CN114533263A (en) Mechanical arm collision prompting method, readable storage medium, surgical robot and system
WO2023065988A1 (en) Collision detection method and apparatus, device, and readable storage medium
US10849696B2 (en) Map of body cavity
US20210298854A1 (en) Robotically-assisted surgical device, robotically-assisted surgical method, and system
US20230249354A1 (en) Synthetic representation of a surgical robot
US20240070875A1 (en) Systems and methods for tracking objects crossing body wallfor operations associated with a computer-assisted system
WO2023018685A1 (en) Systems and methods for a differentiated interaction environment
WO2023018684A1 (en) Systems and methods for depth-based measurement in a three-dimensional view
WO2024068549A1 (en) Neuromapping systems and devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant