CN114041875B - Integrated operation positioning navigation system - Google Patents


Info

Publication number
CN114041875B
CN114041875B (granted from application CN202111402396.9A)
Authority
CN
China
Prior art keywords
focus
marker
coordinate system
surgical
navigation
Prior art date
Legal status
Active
Application number
CN202111402396.9A
Other languages
Chinese (zh)
Other versions
CN114041875A (en)
Inventor
王钊
翟雨轩
王盛吉
许川
李恺文
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202111402396.9A
Publication of CN114041875A
Application granted
Publication of CN114041875B
Status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30: Surgical robots
    • A61B 34/70: Manipulators specially adapted for use in surgery
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2057: Details of tracking cameras
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses an integrated surgical positioning and navigation system comprising three parts: an imaging device, a pitch-adaptive device, and markers. The pitch-adaptive device is connected to the imaging device and mounted on a surgical robot; markers are placed on the surface of the end effector and on the patient's body surface above the lesion. The precise motion direction of each axis of the multi-degree-of-freedom mechanical arm is determined by calibration, unifying the imaging-device coordinate system with the robot coordinate system. Far from the lesion, motion navigation of the mechanical arm is carried out from detection of the lesion markers and the positions measured by the imaging device; near the lesion, the relative spatial geometry of the lesion markers and the effector is computed to navigate the arm. The invention has a compact structure and a simple workflow, and raises the degree of intelligence, automation, and integration of the surgical robot; by tracking the surgical instrument and the lesion in real time, it is also robust to patient movement.

Description

Integrated operation positioning navigation system
Technical Field
The invention relates to the technical field of medical equipment, and in particular to an integrated surgical positioning and navigation system.
Background
Robot-assisted surgery uses a surgical robot to complete, in place of the doctor, tasks traditionally performed by hand. The robot may be teleoperated by a physician or trained from a physician's experience. Robot-assisted surgery combines the surgeon's experience with the advantages of high precision, stable operation free of hand tremor, simplified working steps, avoidance of radiation exposure, minimized operative trauma, and reduced patient pain. For the surgical robot to work accurately, the exact intraoperative locations of the lesion and the surgical instruments must be known, so that the robot's actions can be planned and guided.
Current medical imaging techniques can examine and observe a patient's lesions. Common modalities include CT (computed tomography), MRI (magnetic resonance imaging), and ultrasound imaging. CT and MRI offer high resolution and fine three-dimensional images, but lack real-time navigation capability and involve a complex acquisition process; CT also carries a risk of radiation damage to the patient. Ultrasound can image the lesion area in real time, but at lower resolution. These modalities image the tissue and organs near the lesion, but cannot image or navigate the robot when it is far from the lesion. Completing a robot-assisted procedure therefore requires not only medical imaging but also a navigation system that can localize the robot's end, track the lesion and the surgical instruments, and, combined with the medical images, guide the entire operation.
Current surgical navigation systems fall into two broad categories, optical and electromagnetic, according to their spatial positioning principle. Optical navigation photographs the target with several optical imaging devices, or measures the time of flight of light to and from the target, and then computes the spatial coordinates of marker points from geometric relationships.
Patent CN110025891A proposes a binocular surgical visual navigation device that measures the positions of markers on surgical instruments and the patient using the binocular parallax principle and thereby realizes tracking.
Patent CN113229937A proposes surgical navigation with a structured-light imaging device: structured-light scanning yields real-time three-dimensional point-cloud data used to localize the instrument and the lesion and complete the navigation task.
Electromagnetic navigation typically establishes a three-dimensional magnetic field using spatial coils arranged in different orientations; the spatial parameters of a probe are detected and computed through a magnetically sensitive sensor in the probe.
Patent CN113069206A proposes a calibration method for an augmented-reality surgical navigation system based on electromagnetic positioning, used for rapid calibration of the surgical field and improved accuracy of virtual-real fusion.
There is also work combining optical and electromagnetic navigation: patent CN110537983A proposes a puncture-surgery navigation platform integrating optical and magnetic tracking, which uses electromagnetic and optical surgical navigation technologies simultaneously to track the puncture-needle tip and reconstruct the needle in three dimensions.
However, the navigation devices above are mostly designed as independent units, set up separately from the surgical robot, which makes the operating workflow relatively complex and the footprint relatively large; in addition, clinically available navigation devices are expensive, which hinders their adoption.
Disclosure of Invention
The aim of the invention is an integrated surgical positioning and navigation system that, based on depth information and an integrated design of the navigation device and the mechanical arm, can track in real time during surgery the position of the patient's lesion and the position and posture of the instrument at the end of the robot arm, providing accurate real-time positioning and navigation for the surgical robot.
The surgical positioning and navigation system of the invention comprises three parts: an imaging device, a pitch-adaptive device, and markers. The imaging device is connected to the pitch-adaptive device and mounted on the multi-degree-of-freedom mechanical arm. The markers are a point set (of M points) placed on the patient's body surface over the lesion and a point set (of E points) on the end-effector housing. The imaging device captures depth and RGB image information, from which the position of the lesion and the coordinates and posture of the instrument at the end of the robot arm are obtained; the pitch-adaptive device automatically adjusts the shooting angle of the imaging device according to the lesion position, enabling lesion tracking and instrument navigation throughout the operation. The pitch-adaptive device adjusts the imaging direction so that the lesion markers stay within the field of view; the imaging device recognizes and measures the markers and, by processing their geometric relations, determines the spatial position of the lesion relative to the robot. The imaging device acquires depth information and RGB images simultaneously, with the depth data registered to the RGB image: the image is used to identify each marker and obtain its center, and the depth data gives the center's coordinates in the camera coordinate system. End-effector markers are distinguishable from lesion markers. The geometric relation between the lesion markers, the lesion itself, and the puncture path is computed from preoperative medical images such as CT.
The coordinates of the lesion-marker point set are used to compute the coordinates of the lesion and the puncture path in the robot coordinate system, or the geometry of the end effector relative to the lesion. The pitch-adaptive device automatically orients the imaging device according to the lesion coordinates, keeping the lesion markers within the imaging field of view at all times. The imaging device may specifically be a binocular stereo-vision camera, a structured-light stereo-vision camera, or a lidar plus an RGB camera. The pitch-adaptive device consists of a motor drive module, a rotary encoder, a coupling, and a camera mounting bracket. The motor drive module comprises a motor and a driver; the camera bracket is a set of fixtures connecting the motor, the imaging device, the encoder, and the mechanical arm. In operation, the driver turns the motor to rotate the imaging device; the rotation is measured and read in real time by the rotary encoder for closed-loop attitude control, and the motor adjusts the pitch of the imaging device in real time according to the spatial coordinates of the lesion markers it measures, keeping the markers continuously in the field of view.
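The closed-loop pitch adjustment described above can be sketched as a simple proportional controller that steers the lesion-marker centroid back toward the image center. This is an illustrative sketch, not the patent's implementation; the function name, the gain, and the linear field-of-view model are all assumptions.

```python
def pitch_correction(marker_y: float, image_height: int,
                     fov_deg: float, gain: float = 0.8) -> float:
    """Return a pitch adjustment (degrees) that drives the lesion-marker
    centroid's vertical image coordinate toward the image centre.

    marker_y     -- vertical pixel coordinate of the marker centroid
    image_height -- image height in pixels
    fov_deg      -- vertical field of view of the imaging device, degrees
    gain         -- proportional gain of the closed loop (illustrative)
    """
    # Pixel error from the image centre, normalised to [-0.5, 0.5]
    error = (marker_y - image_height / 2) / image_height
    # Map the normalised error to an angular correction and apply the gain
    return gain * error * fov_deg
```

In a real loop the returned angle would be sent to the motor driver and verified against the rotary encoder reading each cycle.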
In the present invention, the imaging device may consist of a lidar paired with an RGB camera. The lidar is based on time-of-flight (TOF) laser ranging: a laser transmitter and receiver measure the flight time of light from the device to the target to obtain distance, and a micro-electromechanical mirror scans the scene, acquiring depth for every point in the field of view, building a spherical coordinate system centered on the radar, and achieving high-resolution real-time three-dimensional imaging. The RGB camera's field of view corresponds to that of the lidar. The RGB camera acquires the surgical image, in which the marker point sets on the patient's body surface and on the end effector are identified; the lidar then supplies the coordinates of those positions as the spatial coordinates of the marker point sets. Processing the marker coordinate information yields the spatial positions and postures of the lesion and the surgical instrument.
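Because the lidar expresses depth in a radar-centered spherical coordinate system, marker positions must be converted to Cartesian coordinates before further geometric processing. A minimal sketch of that conversion, assuming θ is the polar angle from the z-axis and φ the azimuth (the text does not specify the angle convention):

```python
import math

def spherical_to_cartesian(r: float, theta: float, phi: float):
    """Convert a lidar range sample (r, θ, φ), in the radar-centred
    spherical coordinate system, to Cartesian coordinates.
    Angles are in radians; θ measured from the z-axis (assumption)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return (x, y, z)
```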
The pitch-adaptive device comprises a motor, an encoder, a coupling, and a camera bracket. The bracket connects the multi-degree-of-freedom mechanical arm, the motor, the imaging device, and the encoder; the imaging device is connected to the motor and the encoder through the coupling. The motor rotates the imaging device, and the encoder measures its rotation angle. During the operation, the imaging device acquires the lesion position in real time, and the pitch-adaptive device rotates the imaging device so that the lesion-marker point set stays centered in the field of view. "Integrated" surgical positioning and navigation means that the imaging device works mounted on the multi-degree-of-freedom arm rather than being set up separately; the purpose of the pitch-adaptive device is to ensure that the imaging device keeps imaging and measuring the lesion as it moves with the arm.
Markers are of two kinds, lesion markers and end-effector markers, each consisting of several pattern blocks of fixed shape. The lesion markers are a set of stickers the doctor scatters on the body surface of the patient's chest above the lesion before the operation; their number M may be 4 or another value. The end-effector markers are a set of marker-dot patterns on the end-effector housing; their number E may be 12 or another value.
The multi-degree-of-freedom mechanical arm comprises a chassis, an XYZ three-dimensional moving platform, a three-axis gimbal module, and a closed-loop arm control module. Universal wheels with horizontally adjustable center of gravity at the bottom of the chassis allow the puncture robot to be moved as a whole and anchored at a fixed point. The XYZ platform is driven by ball screws, giving high precision and strong stability, and moves the surgical end effector to a specified spatial position. The three-axis gimbal module comprises yaw, roll, and pitch adjustment units and sets the end effector to a specified posture. The aim of the multi-degree-of-freedom arm is full positional and postural adjustment of the end effector over the operating area.
The navigation workflow of the integrated surgical positioning and navigation system has three parts: calibration of the navigation coordinate system, distal navigation, and proximal navigation. During the operation, the multi-degree-of-freedom arm moves the end effector toward the lesion from far to near. Early in the operation the arm and end effector are far from the lesion, so when the lesion-marker point set is in the imaging field of view the end effector cannot be guaranteed to be in view at the same time; this stage is handled by distal navigation. Later, as the end effector approaches the lesion, proximal navigation is used once the lesion markers and the end-effector markers are both within the field of view. This design ensures that the system completes accurate positioning and navigation at every stage of the operation.
Calibrating the navigation coordinate system determines the precise motion direction of each degree of freedom of the multi-degree-of-freedom arm and unifies the imaging-system coordinate system with the robot coordinate system. First, the world coordinate system of the arm, i.e. the robot coordinate system, is established: the position of its origin is fixed, and the spatial positions of the imaging device's rotation center and origin are measured. Second, each axis of the arm's three-dimensional moving platform is driven in turn, and the motion direction of each degree of freedom is determined by measuring the apparent motion of a fixed marker in the imaging device. Finally, the end effector is rotated several times within the field of view, and the motion direction of each degree of freedom of the arm's three-axis gimbal is determined by measuring the rotation trajectories of the marker points.
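The translational-axis part of this calibration can be illustrated as follows: jog one axis, measure the same fixed markers before and after the move, and take the averaged, normalized displacement as that axis's direction. A hypothetical sketch (function name and array layout are assumptions):

```python
import numpy as np

def axis_direction(before: np.ndarray, after: np.ndarray) -> np.ndarray:
    """Estimate the unit motion direction of one translational axis.

    before, after -- (N, 3) arrays of the same fixed markers' Cartesian
    coordinates in the imaging frame, measured before and after jogging
    a single axis of the three-dimensional moving platform.
    Averaging over N markers suppresses per-point measurement noise.
    (Note: since the camera rides on the arm, the markers' apparent
    displacement is in the camera frame; the calibration relates it
    to the commanded axis.)
    """
    delta = (after - before).mean(axis=0)
    return delta / np.linalg.norm(delta)
```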
Distal navigation proceeds as follows. In the preoperative stage, with the multi-degree-of-freedom arm in its reset state, the navigation module is started and the imaging device's angle is adjusted to place the patient's lesion markers at the center of the field of view. The lesion-marker point set is then identified in the RGB image, and the imaging device measures its spatial position, giving its coordinates in the imaging-system coordinate system. These coordinates are converted to robot-coordinate-system coordinates through the calibrated coordinate transformation. The marker coordinates measured by the imaging device are then registered against the marker coordinates obtained from the three-dimensional CT reconstruction, yielding a transformation matrix; applying this matrix gives the positions of the lesion and the puncture path in the robot coordinate system, achieving distal localization of both.
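The registration of the imaging-measured marker set against the CT-reconstructed marker set is a classic rigid point-set registration problem; one standard solution is the Kabsch/SVD method sketched below. This is an illustrative stand-in, since the text does not state which registration algorithm is used.

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid registration (Kabsch/SVD) of two matched
    marker point sets: find R, t such that dst ≈ R @ src + t.

    src -- (M, 3) lesion-marker coordinates measured by the imaging device
    dst -- (M, 3) the same markers from the CT three-dimensional reconstruction
    """
    src_c = src - src.mean(axis=0)          # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: force a proper rotation (det(R) = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Once R and t are known, the lesion and puncture-path points from the CT reconstruction can be mapped into the measurement frame (and from there, via the calibration, into the robot frame).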
Proximal navigation refers to the stage when the end-effector markers and the lesion markers are simultaneously in the imaging field of view and the arm is moving the end effector to the proximal side of the lesion. The imaging device first recognizes and measures both kinds of markers, and from their positions the relative relation between the end-effector markers and the lesion markers, i.e. the relative position of the end effector and the lesion markers, is obtained. The lesion is then projected into the imaging-system coordinate system through point registration and matrix transformation, and the point-cloud data are Kalman-filtered to suppress noise, giving the position of the lesion relative to the end effector. An instrument-tip coordinate system is established with the instrument tip on the end effector as origin and with axes parallel to the robot coordinate system, achieving proximal localization of the lesion and the puncture path in the instrument-tip coordinate system.
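The proximal step, expressing the lesion relative to the instrument tip and smoothing noisy measurements, might be sketched as below. Both functions are hypothetical simplifications: the frame change is shown as a pure translation (the description states the tip frame's axes are parallel to the robot frame), and the Kalman filter is reduced to a scalar constant-position model with illustrative noise values.

```python
import numpy as np

def lesion_in_tip_frame(lesion: np.ndarray, tip: np.ndarray) -> np.ndarray:
    """Express the lesion position relative to the instrument tip.
    With the tip frame's axes parallel to the robot frame, the change
    of frame is a pure translation (illustrative simplification)."""
    return lesion - tip

def kalman_update(x_est: float, P: float, z: float,
                  R_noise: float = 1.0, Q_noise: float = 0.01):
    """One scalar Kalman-filter step (constant-position model), applied
    per coordinate to suppress marker point-cloud noise.
    Noise values are illustrative, not from the text."""
    P = P + Q_noise                    # predict: process noise grows P
    K = P / (P + R_noise)              # Kalman gain
    x_est = x_est + K * (z - x_est)    # correct with the measurement z
    P = (1 - K) * P                    # updated estimate covariance
    return x_est, P
```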
The integrated surgical positioning and navigation system of the invention has a compact structure, avoids the complexity of split navigation systems, and improves usability. It simplifies the conditions of use so that the surgical robot can work independently, broadening its application scenarios. Fusing lidar and RGB image information for target localization guides the surgical instrument accurately while greatly reducing cost. Real-time tracking of the surgical instrument and the lesion makes the system robust to patient movement. Finally, the invention is a non-contact guidance scheme, improving surgical efficiency and the level of intelligence.
Drawings
In order to make the objects, technical solutions and advantageous effects of the present invention more clear, the present invention provides the following drawings for description:
Fig. 1 is a diagram showing the overall structure of a surgical positioning navigation system according to the present invention mounted on a multi-degree-of-freedom surgical robot.
Fig. 2 is a block diagram of a multi-degree of freedom mechanical arm base and a Z-axis motion platform.
Fig. 3 is a block diagram of the XY axis moving platform of the multi-degree-of-freedom robot arm device.
Fig. 4 is a block diagram of a robotic head and surgical end effector.
Fig. 5 is a block diagram of the surgical positioning navigation system of the present invention.
Fig. 6 is a schematic diagram of the operation of the surgical positioning and navigation system of the present invention.
Fig. 7 is a schematic diagram of the remote navigation of the present invention.
Fig. 8 is a schematic view of the proximal navigation of the present invention.
Fig. 9 is a schematic diagram of a coordinate system of the present invention.
Detailed description of the preferred embodiments
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings.
An example of the present invention, an integrated surgical positioning and navigation robot, is shown in fig. 1. The surgical robot consists of a multi-degree-of-freedom mechanical arm device 1, a surgical positioning navigation system 2, and a surgical end-effector device 3. The mechanical arm device 1 comprises a chassis unit 4, a Z-axis moving platform 5, an X-axis moving platform 6, a Y-axis moving platform 7, a roll adjusting unit 8, a yaw adjusting unit 9, and a pitch adjusting unit 10, constructed by connecting these units one to the next.
The chassis unit 4 shown in fig. 2 mainly comprises a body aluminum frame 11 and a universal quick-anchoring wheel set 12. The wheel set consists of four universal wheels each fitted with a support plate: after a universal wheel 18 is moved into place, its support plate 19, threaded onto the wheel frame 20, can be rotated to lower it until it bears on the ground, holding the surgical robot in position. The quick-anchoring wheels are fixed to the base plate and the chassis to the body aluminum frame, so the chassis can move in any direction, be fixed quickly once the target position is reached, and have its level adjusted.
The sliding part of the Z-axis moving platform 5 is formed by two square guide rails fixed in parallel on two parallel aluminum tubes of the chassis aluminum frame 11. As shown in fig. 3, the sliding parts of the X-axis moving platform 6 and the Y-axis moving platform 7 are likewise each formed by two square guide rails 28 fixed in parallel on two parallel aluminum tubes of the platform aluminum frame. The slide of the Z-axis platform 5 carries the X-axis platform 6, and the slide of the X-axis platform 6 carries the slide of the Y-axis platform 7, forming the XYZ three-dimensional moving platform. Each axis is driven by an SFU-1605 ball screw (22, 23, 24) with a 4 mm lead, turned by stepper motors 21, 25, 26 (with a 57-frame stepper, a needle-insertion speed of 6 cm/s is achievable with controllable precision below 0.1 mm). Each ball screw is connected to an encoder 27 through couplings 29 and 30 for closed-loop control of the slide. The XYZ platform positions the end-effector device 3 at the target puncture position above the CT machine's patient bed.
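The drive figures quoted above can be checked with a little arithmetic: with a 4 mm screw lead, a 6 cm/s insertion speed requires 15 screw revolutions per second, and the open-loop positioning resolution is well below the stated 0.1 mm bound. The 200-full-steps-per-revolution figure is a typical value for a 57-frame stepper, an assumption not stated in the text.

```python
lead_mm = 4.0            # ball-screw travel per revolution (from the text)
steps_per_rev = 200      # typical full steps of a 57-frame stepper (assumption)

target_speed_mm_s = 60.0                     # 6 cm/s needle-insertion speed
rev_per_s = target_speed_mm_s / lead_mm      # revolutions per second needed
step_rate_hz = rev_per_s * steps_per_rev     # full-step rate the driver must supply
resolution_mm = lead_mm / steps_per_rev      # linear travel per full step
```

This gives 15 rev/s (900 rpm), a 3000 Hz full-step rate, and 0.02 mm per step, consistent with the claimed sub-0.1 mm controllable precision.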
As shown in fig. 4, the surgical end effector 3 is positioned above the CT machine, and the angle of the puncture-needle effector must be adjusted according to the lesion position and needle-insertion path obtained from the CT image. The optimal insertion angle is set by three adjustment units: the roll adjusting unit 8, the yaw adjusting unit 9, and the pitch adjusting unit 10. Each unit uses a slewing bearing 31 as its rotation center, fixed to a bearing box 39; each is driven by a worm gear and worm (e.g. 32) turned by a stepper motor (e.g. 33, 37); and each slide (34 roll, 35 yaw, 36 pitch) is connected to an encoder (e.g. 40) for closed-loop rotation control. The base of the roll unit 8 is connected to the Y-axis moving platform; the slide of the roll unit 8 carries the base of the yaw unit 9, and the slide of the yaw unit 9 carries the base of the pitch unit 10, forming the gimbal of the surgical end-effector device; the end-effector device 3 is mounted on the pitch-unit slide 36. Driving the three units in parallel achieves spatial three-dimensional posture adjustment of the surgical end-effector device 3. The end-effector marker 41 is affixed to the end-effector housing.
As shown in fig. 5, the surgical positioning navigation system 2 is connected to the middle of the underside of the Y-axis moving platform 7 and mainly comprises two parts: an imaging device that acquires depth and image information, and an attitude-adjustment module for the imaging device. The imaging-device scheme adopted in this example is a lidar camera 13 combining a lidar and an RGB camera; the attitude-adjustment module is a pitch adjusting device 14. The lidar camera 13 acquires depth information of spatial objects by TOF laser ranging and a 2D image with its RGB camera, and the depth data and 2D image are registered and fused. By processing the two, the invention determines the coordinates of a target in the field of view in the camera coordinate system. The pitch adjusting device 14 is connected to the Y-axis moving platform 7 through a camera bracket 15; the stepper motor 16 and the encoder 17 are each connected to the lidar camera 13 through a coupling (e.g. 38) to realize pitch adjustment of the camera's imaging field of view.
The imaging device may instead be a binocular stereo-vision camera. Binocular imaging is based on the parallax principle: the cameras photograph the surgical scene containing the markers from different positions, the marker patterns are segmented from the images, and the three-dimensional geometry of the markers is obtained by computing the positional offset between corresponding marker points in the left and right camera images. Processing the three-dimensional marker coordinates then yields the spatial positions and postures of the lesion and the surgical instrument, and the spatial position of the puncture path.
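The binocular parallax principle reduces to the classic relation Z = f·B/d between depth Z, focal length f (in pixels), baseline B, and disparity d. A minimal sketch (names are illustrative):

```python
def disparity_to_depth(disparity_px: float, focal_px: float,
                       baseline_mm: float) -> float:
    """Depth of a point from its disparity between rectified left/right
    images: Z = f * B / d. Returns depth in the baseline's units (mm)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

For example, with a 1000-pixel focal length and a 50 mm baseline, a 10-pixel disparity corresponds to a point 5 m away; larger disparities mean closer points.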
The imaging device may also be implemented as a structured-light stereo-vision camera. Its basic principle is to project light with known structural features onto the subject (the patient's chest and the robot's end effector) through a projection device, and then photograph the projection with a camera. A spherical coordinate system with the camera at the origin serves as the imaging-system coordinate system; the light projected onto the target returns different image phase information depending on the depth of the object's surface, and the phase change is converted by computation into depth, giving three-dimensional position information. The markers are located by segmenting the camera image, and the spatial positions of the segmented markers are obtained by phase computation. As before, processing the three-dimensional marker coordinates yields the spatial positions and postures of the lesion and the surgical instrument, and the spatial position of the puncture path.
Fig. 6 shows an example of the operating principle of the surgical positioning and navigation system, which comprises three parts: navigation coordinate system calibration, distal navigation and proximal navigation. Before the surgical positioning navigation system and the surgical robot are put into use, navigation coordinate system calibration 108 is first performed. The purpose of the calibration is to determine the directions of the surgical robot coordinate system, and the pitch rotation point coordinate and rotation directions of the lidar. When the XYZ three axes of the multi-degree-of-freedom mechanical arm device 1 are all in the initial position, i.e. the zero state, the center point of the top of the Z-axis moving platform is set as the origin O of the robot coordinate system 103. The directions of the robot motion coordinate system are X, Y, Z, α, β and γ (representing, respectively, the motion directions of the XYZ three-dimensional moving platform and of the roll, yaw and pitch axes of the three-axis gimbal), and the coordinate of the pitch rotation point of the lidar is Q. The lidar acquires depth information of objects within the field of view and projects it in the imaging system coordinate system (i.e. the lidar spherical coordinate system) 102. Calibration uses a fixed-position calibration block C carrying marker points. A block marker point C_h has coordinates (r_Ch, θ_Ch, φ_Ch) in the imaging system coordinate system, where h is the marker point number, h = 1, 2, …, G, with G = 10 as an example (other values are possible); a marker point A_f on the surgical end effector A has coordinates (r_Af, θ_Af, φ_Af), where f is the marker point number, f = 1, 2, …, E, with E = 12 as an example (other values are possible). When calibrating the XYZ three-axis movement directions, the block marker points C_h are used as the reference, and the relative movement direction of the markers under a single-axis move is computed as that axis direction.
The X axis, Y axis or Z axis is moved step by step, and for each axis the average coordinate difference (Δr_Ch, Δθ_Ch, Δφ_Ch) of the block marker points C_h is calculated (the coordinate difference of each block marker point between the moments before and after the moving platform moves). Converting these differences into the rectangular coordinate system yields the vectors X, Y and Z as the calibrated axis directions. When calibrating the roll, pitch and yaw movement directions, the marker points A_f on the surgical end effector are used as the reference: each rotation axis is rotated in turn, the plane in which the marker point trajectory lies in space is calculated, and the three planes represent the three movement directions of the gimbal. Further, the lidar pitch rotation point coordinate Q is obtained by measurement. The lung image 116 of the patient is acquired by a CT machine before the operation, and image segmentation 105 and three-dimensional reconstruction 106 of the CT image yield the lesion marker points and lesion point cloud position in the CT coordinate system 101, as well as the puncture path 107 confirmed by the doctor (the starting point of the puncture path lies on the chest surface and its end point lies at the lesion center).
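The single-axis calibration step above can be sketched as follows. The marker coordinates are assumed to have already been converted from the lidar spherical frame to rectangular coordinates; the marker positions are toy values, not measured data:

```python
import math

# Sketch of single-axis direction calibration: the mean displacement of the
# block marker points over one single-axis move, normalised to a unit vector,
# is taken as that axis's actual movement direction.

def axis_direction(markers_before, markers_after):
    """Unit vector of the average marker displacement for one axis move."""
    n = len(markers_before)
    d = [sum(a[i] - b[i] for b, a in zip(markers_before, markers_after)) / n
         for i in range(3)]                          # mean displacement
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)

before = [(0.00, 0.00, 1.00), (0.10, 0.05, 1.00)]
after  = [(0.20, 0.00, 1.00), (0.30, 0.05, 1.00)]    # pure move along +X
x_axis = axis_direction(before, after)               # ≈ (1.0, 0.0, 0.0)
```

Averaging over several marker points, as here, suppresses individual measurement noise before the direction is normalised.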
During the surgical execution, the mechanical arm first initializes 112, completing the device self-test. The multi-degree-of-freedom mechanical arm device drives the surgical end effector device to move toward the lesion from far to near. In the early phase, the multi-degree-of-freedom mechanical arm device and the surgical end effector device are far away from the lesion, and only the lesion markers can be guaranteed to be in the field of view 109 of the imaging device. At this time, distal navigation 113 is adopted: the imaging device recognizes the lesion marker point set 203, as shown in fig. 7; the lesion marker point set positions are projected into the robot coordinate system through the transformation between the imaging system coordinate system 102 and the robot coordinate system 103; point cloud registration 201 is then performed between the lesion marker point sets in the robot coordinate system and in the CT coordinate system to obtain a transformation matrix, and the lesion points 202 and the puncture path starting point 206 are converted into the robot coordinate system 103, thereby navigating the movement of the mechanical arm. During operation, the lidar moves a distance D along the X axis, the distance between the actual signal receiving end of the lidar camera and the lidar pitch rotation point coordinate Q is L, and the pitch adjustment angle is θ; the actual receiving point coordinate R of the lidar camera is then determined from Q, D, L and θ. Performing distal navigation, the lidar observes the marker point set 203 around the patient's lesion, and the returned coordinates 204 of the lesion marker point set are P_u = (r_u, θ_u, φ_u), where u is the lesion marker number, u = 1, 2, …, M, with M = 4 as an example (other values are possible); P_u is a point in the imaging system coordinate system centered at R.
The point P_u in the imaging system coordinate system is converted into the point P_u* in the rectangular coordinate system with R as the origin. The coordinates of the lesion marker point set in the robot coordinate system are therefore the sums of R and the corresponding xyz components of P_u*, denoted M_u. The coordinates 205 of the marker points acquired from the CT image are denoted N_u. Point set registration 201, 104 is performed between M_u and N_u.
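The conversion of one lidar return into the robot frame can be sketched as follows. The spherical-angle convention (θ as polar angle from the z axis, φ as azimuth) and the sample values are assumptions for illustration; the patent does not fix a convention:

```python
import math

# Minimal sketch: each lidar return P_u = (r, theta, phi), measured about the
# actual receiving point R, is converted to rectangular coordinates P_u* and
# offset by R to give the robot-frame marker coordinate M_u.

def lidar_return_to_robot(r, theta, phi, R):
    p_star = (r * math.sin(theta) * math.cos(phi),   # x component of P_u*
              r * math.sin(theta) * math.sin(phi),   # y component
              r * math.cos(theta))                   # z component
    return tuple(Ri + pi for Ri, pi in zip(R, p_star))

# A return straight ahead in the horizontal plane at 2 m, with an assumed
# receiving point R = (0.1, 0.0, 0.5):
M_u = lidar_return_to_robot(2.0, math.pi / 2, 0.0, R=(0.1, 0.0, 0.5))
# approximately (2.1, 0.0, 0.5)
```

Converting every return this way produces the point set M_u that is registered against the CT marker set N_u.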
When the surgical end effector device has moved to the proximal end of the patient's lesion, as shown in fig. 8, proximal navigation 114 is performed. The lidar camera simultaneously observes (110) the end effector and the two types of markers 304, 203 near the lesion and acquires their coordinate information at the same time, thereby obtaining the positional relationship between the end effector marker point set A_f 302 and the perilesional markers 303. The lesion marker point set 301 in the imaging system coordinate system and the CT marker point set converted into the spherical coordinate system, namely point cloud P_u 303 and point cloud N_u* 305 (N_u* being the projection of N_u in the spherical coordinate system), are matched 301, 104. Substituting into the optimal rotation matrix and translation matrix from the completed point cloud registration yields the positional relationship between the lesion and the surgical end effector device, i.e. the solution of the lesion coordinate B_0 free of pitch translation error. Considering that establishing the end effector coordinates on the surgical end effector device carries systematic errors due to manual operation, the lesion position is not restored to the robot coordinate system 103 at this stage; instead, the relative positional relationship between the end effector and the lesion is measured by the lidar, and the movement of the end effector in three-dimensional space is controlled to puncture the lesion. The coordinate system used is an instrument tip coordinate system established at a fixed point T' on the instrument tip, whose axis directions are consistent with those of the robot coordinate system.
During the puncture and needle insertion of the end effector, the lidar tracks both types of markers throughout, and the algorithm solves and tracks the lesion position coordinates and computes the puncture path starting point coordinates. Since the marker point measurements are noisy, an optimal position value is estimated by Kalman filtering. Finally, the vector relationship between the end effector and the lesion is converted into the motion amount of each mechanical arm axis, and the mechanical arm is adjusted to move to the needle insertion preparation position 115. During the whole needle insertion process, the imaging device tracks both types of markers at all times, realizing real-time confirmation and correction 111 of the lesion coordinates and the puncture path starting point coordinates.
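The Kalman-filter smoothing mentioned above can be sketched with a scalar constant-position model applied per coordinate of each marker point. The noise variances q and r_meas are illustrative values, not the system's tuning:

```python
# Hedged sketch of Kalman filtering for noisy marker measurements: a scalar
# constant-position model. x is the position estimate, p its variance,
# q the (small) process noise and r_meas the measurement noise variance.

def kalman_smooth(measurements, q=1e-4, r_meas=1e-2):
    """Estimate an optimal position value from noisy scalar measurements."""
    x, p = measurements[0], 1.0     # initialise from the first measurement
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                   # predict (position assumed static)
        k = p / (p + r_meas)        # Kalman gain
        x = x + k * (z - x)         # update with the measurement residual
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

noisy = [1.02, 0.98, 1.05, 0.97, 1.01]      # toy marker coordinate readings
smoothed = kalman_smooth(noisy)             # settles near the true value 1.0
```

In practice one such filter per coordinate per marker (or a small vector-state filter) runs at the lidar frame rate during needle insertion.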
The method by which navigation coordinate system calibration computes the actual motion direction of each degree of freedom of the multi-degree-of-freedom mechanical arm is specifically: move the end effector multiple times along a single degree of freedom, measure and track the markers with the imaging device, calculate the movement direction of the markers, and construct the actual robot coordinate system by taking the marker movement directions as the actual movement directions of the respective degrees of freedom of the mechanical arm.
The method by which distal navigation computes the coordinates of the lesion in the robot coordinate system is specifically: process the RGB image of the imaging device with a convolutional neural network (CNN) to segment the lesion marker patterns, extract the contours and compute their center points, and thereby obtain the coordinates of the lesion marker point set in the imaging system coordinate system (i.e. the depth camera spherical coordinate system); calculate the lesion coordinates in the imaging system coordinate system through the relative geometric relationship between the lesion markers and the lesion, and project the lesion coordinates into the robot coordinate system through the transformation matrix between the imaging system coordinate system and the robot coordinate system.
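The contour center-point extraction that follows segmentation can be sketched with image moments. The binary mask here is a toy stand-in for the CNN's segmentation output, and the moment computation is plain Python rather than any particular vision library:

```python
# Centre-point extraction for a segmented marker: the centroid of the binary
# mask is the ratio of first-order to zeroth-order image moments.

def contour_center(mask):
    """Centroid (x, y) of a binary mask given as a list of rows."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                m00 += 1        # zeroth moment: pixel count
                m10 += x        # first moment in x
                m01 += y        # first moment in y
    return (m10 / m00, m01 / m00)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]           # toy 2x2 marker blob
center = contour_center(mask)   # (1.5, 1.5)
```

The pixel centroid is then looked up in the matched depth image to obtain the marker's spherical coordinates.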
The method by which proximal navigation computes the relative spatial position relationship between the surgical instrument and the lesion is specifically: segment and detect the lesion markers and the end effector markers, and establish an instrument tip coordinate system with the surgical instrument tip as the origin, whose axis directions are consistent with the movement directions of the mechanical arm degrees of freedom; convert the lesion coordinates from the imaging system coordinate system into the instrument tip coordinate system, and use this coordinate system to guide the multi-degree-of-freedom mechanical arm toward the lesion.
Fig. 9 shows one embodiment of point cloud registration, taking the registration 301 of the point sets P_u and N_u* in fig. 8 as an example. The idea of registration is: from the two point set data P_u and N_u* to be registered, first construct local geometric features, then perform point cloud data repositioning 401 according to the local geometric features, and register the two point sets mainly by means of an iterative algorithm 402. The aligning registration transformation of the two point sets should minimize the objective function f = min Σ_{u=1}^{M} ||N_u* − (P_u·S + T)||², where S is the rotation matrix, T is the translation matrix, M is the number of lesion markers, and ||·|| denotes the norm calculation; that is, the rotation and translation parameters between the point cloud data to be registered and the reference point cloud data are found so that the two point sets satisfy the optimal match 403 under a given measurement criterion. With the optimally matched rotation matrix S and translation matrix T, the lesion center coordinate V_0 obtained from the CT image undergoes the rotation and translation operations to give the lesion coordinate 404 in the robot coordinate system, B_0 = V_0·S + T; the puncture path starting point coordinates are calculated in the same way. Once the coordinates of the lesion and the puncture path starting point in the instrument tip coordinate system are known, the end effector is controlled to move to the patient's lesion. In addition, the point cloud registration 201 in fig. 7 uses the same method, except that the registered point clouds are the point set M_u and the point set N_u 205.
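The closed-form rotation/translation solve inside one iteration of such a registration loop can be sketched as an orthogonal Procrustes (Kabsch) fit via SVD. A full ICP would re-estimate correspondences and repeat; the point sets here are toy data, and the row-vector convention matches the B_0 = V_0·S + T form above:

```python
import numpy as np

# Kabsch/Procrustes sketch: find rotation S and translation T minimising
# sum_u || N_u - (P_u @ S + T) ||^2 for row-vector points with known
# correspondences.

def rigid_fit(P, N):
    Pc, Nc = P.mean(axis=0), N.mean(axis=0)
    H = (P - Pc).T @ (N - Nc)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflection
    S = U @ np.diag([1.0, 1.0, d]) @ Vt
    T = Nc - Pc @ S
    return S, T

P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
S_true = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]], dtype=float)  # 90 deg about z
T_true = np.array([0.5, -0.2, 0.1])
N = P @ S_true + T_true                          # synthetic registered set
S, T = rigid_fit(P, N)

V0 = np.array([0.3, 0.3, 0.0])                   # e.g. a lesion centre from CT
B0 = V0 @ S + T                                  # lesion in the target frame
```

With exact correspondences the fit recovers S_true and T_true; with noisy matches it returns the least-squares optimum, which is why the surrounding algorithm iterates.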
The invention discloses an integrated surgical positioning and navigation device comprising three parts: an imaging device, a pitch adaptive device, and markers. The pitch adaptive device is connected to the imaging device and mounted on the surgical robot. The markers are placed on the surface of the end effector and on the patient's body surface above the lesion, respectively. Calibration determines the accurate motion direction of each degree of freedom of the mechanical arm and unifies the imaging device coordinate system and the robot coordinate system. Motion navigation of the multi-degree-of-freedom mechanical arm distal to the lesion is performed based on detection of the lesion markers and the positions measured by the imaging device. Based on detection of both the lesion markers and the actuator markers, their relative spatial geometric relationship is calculated, and motion navigation of the multi-degree-of-freedom mechanical arm proximal to the lesion is executed. The integrated positioning and navigation system achieves real-time, accurate navigation over the whole surgical course, has a compact structure and a simple workflow, and improves the intelligence, automation and integration of the surgical robot.
The embodiments described above are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention. The aim of the invention is to accomplish accurate intraoperative navigation of surgical instruments and operations with an integrated surgical positioning and navigation system.

Claims (4)

1. An integrated surgical positioning and navigation system, characterized by comprising an imaging device, a pitch adaptive device and markers, wherein the imaging device is connected with the pitch adaptive device and carried on a multi-degree-of-freedom mechanical arm; the markers comprise a patient lesion marker point set arranged on the body surface above the patient's lesion and a surgical end effector marker point set on the surgical end effector shell; the imaging device captures depth and RGB image information, from which the position of the patient's lesion and the coordinate and posture information of the instrument at the end of the multi-degree-of-freedom mechanical arm are obtained by processing; the pitch adaptive device automatically adjusts the shooting angle of the imaging device according to the lesion position information, thereby realizing lesion tracking and surgical instrument navigation over the whole surgical process; the imaging device and the pitch adaptive device form a navigation device which is fixedly connected with the multi-degree-of-freedom mechanical arm to form an integrated surgical robot, the navigation device not being erected separately from the multi-degree-of-freedom mechanical arm;
the imaging device consists of a lidar and an RGB camera; the lidar remote sensing is based on the principle of TOF laser ranging, using a laser transmitter and receiver to measure the flight time of laser light from the imaging device to a target object and thereby acquire distance; a micro-electromechanical system controls a mirror to scan the spatial scene, acquiring depth information of object points over the full field of view, constructing a spherical coordinate system centered on the radar, and realizing high-resolution real-time three-dimensional imaging; the field of view of the RGB camera corresponds to that of the lidar camera, a 2D surgical image is acquired with the RGB camera, and the spatial depth information and the 2D image are matched and fused; the positions of the patient lesion marker point set and the surgical end effector marker point set are identified in the RGB surgical image, and the lidar acquires the coordinates of these positions as the spatial coordinates of the marker point sets; by processing the coordinate information of the marker point sets, the spatial positions and postures of the lesion and the surgical instrument as well as the spatial position of the puncture path are obtained;
The pitching self-adaptive device comprises a motor, an encoder, a coupler and a camera bracket, wherein the camera bracket is connected with the multi-degree-of-freedom mechanical arm, the motor, imaging equipment and the encoder, and the imaging equipment is respectively connected with the motor and the encoder through the coupler; the motor drives the imaging equipment to rotate, and the encoder measures the rotation angle of the imaging equipment to realize a rotation control closed loop; in the operation process, the imaging device acquires focus positions in real time, and the pitching self-adaptive device controls the imaging device to rotate, so that focus marker point sets of patients always keep a centered position in a visual field; the integrated operation positioning navigation means that the imaging equipment is directly connected to a fixed position of the mechanical arm with multiple degrees of freedom to work instead of being separately placed, and the pitching self-adaptive device aims to ensure that the imaging equipment always keeps shooting and position measurement of a focus when moving along with the mechanical arm;
the markers are of two types, patient lesion markers and surgical end effector markers, with fixed shapes; the patient lesion markers are a group of stickers which the doctor scatters over the body surface above the lesion in the patient's chest before the operation, their number being M, M being an integer greater than or equal to 1; the surgical end effector markers are a group of marker point patterns on the surface of the surgical end effector shell, their number being E, E being an integer greater than or equal to 1;
the multi-degree-of-freedom mechanical arm comprises a chassis, an XYZ three-dimensional moving platform, a three-axis gimbal module and a mechanical arm closed-loop control module, the modules being connected one by one to build the multi-degree-of-freedom mechanical arm device; the chassis unit (4) consists of a vehicle body aluminum frame (11) and universal quick-anchoring wheel sets (12); a universal quick-anchoring wheel set (12) is a universal wheel with an additional support plate: after the universal wheel (18) moves into place, the support plate (19), connected to the universal wheel frame (20) by a thread, is rotated to lower its height and bear on the ground; the chassis is fixedly connected with the vehicle body aluminum frame through the universal quick-anchoring wheels, so that the chassis can move in any direction, be quickly anchored in position after reaching the target position, and have its levelness adjusted; the XYZ three-dimensional moving platform is braked by ball screws, has high precision and strong stability, and is used to position the surgical end effector (3) at the target puncture position above the patient bed of the CT machine; the sliding part of the Z-axis moving platform (5) is formed by two square guide rails connected in parallel on two parallel aluminum tubes of the chassis aluminum frame (11), and the sliding parts of the X-axis moving platform (6) and the Y-axis moving platform (7) are each likewise formed by two square guide rails (28) connected in parallel; the Z-axis moving platform (5) slide is connected with the X-axis moving platform (6), and the X-axis moving platform (6) slide with the Y-axis moving platform (7) slide, to form the XYZ three-dimensional moving platform; the braking parts of the XYZ three-dimensional moving platform are respectively provided with SFU-1605 ball screws (22), (23) and (24) with a lead of 4 mm, the ball screws being driven by stepping motors (21), (25) and (26) and connected with encoders (27) through couplings (29), (30), so that the sliding platforms move under closed-loop control;
the surgical end effector (3) is located above the CT machine; the puncture needle insertion angle of the effector needs to be adjusted according to the lesion and the puncture needle insertion path obtained from the CT images, and the optimal needle insertion angle of the surgical end effector is adjusted by the three-axis gimbal module composed of three adjusting units, namely a roll adjusting unit (8), a yaw adjusting unit (9) and a pitch adjusting unit (10); the roll adjusting unit (8), yaw adjusting unit (9) and pitch adjusting unit (10) each build their rotation center with a slewing bearing (31) fixedly connected with a bearing box (39), the braking part adopts a worm wheel and worm (32), stepping motors (33) and (37) drive the worm, and the three adjusting unit slides, namely the roll adjusting unit slide (34), the yaw adjusting unit slide (35) and the pitch adjusting unit slide (36), are connected with encoders (40) to realize closed-loop control of the rotation; the base of the roll adjusting unit (8) is connected with the Y-axis moving platform, the slide of the roll adjusting unit (8) is connected with the base of the yaw adjusting unit (9), and the slide of the yaw adjusting unit (9) is connected with the base of the pitch adjusting unit (10) to form the gimbal of the surgical end effector device; the surgical end effector (3) is connected with the pitch adjusting unit slide (34) on the gimbal, and three-dimensional spatial posture adjustment of the surgical end effector (3) is realized by calling the three adjusting units in parallel; the multi-degree-of-freedom mechanical arm aims to realize full-dimensional position and posture adjustment of the surgical end effector (3) in the surgical area;
the operating principle of the surgical positioning and navigation system comprises navigation coordinate system calibration, distal navigation and proximal navigation; the navigation coordinate system calibration is used, before the operation, to measure the relative spatial distance and transformation matrix between the rotation center of the imaging device and the origin of the surgical robot coordinate system, to measure the relative spatial distance between the surgical end effector markers and the surgical instrument tip, and to calculate the actual motion direction of each degree of freedom of the mechanical arm; the intraoperative positioning navigation adopts a staged navigation method, completing the navigation of surgical instrument movement by combining distal navigation and proximal navigation: when the end of the mechanical arm is far from the lesion, i.e. the surgical end effector is not in the field of view of the imaging device, distal navigation is enabled, only the lesion markers are identified and tracked, and the coordinates of the lesion and the puncture path in the surgical robot coordinate system are calculated; when the end of the mechanical arm is near the lesion, i.e. the surgical end effector enters the field of view of the imaging device, the proximal navigation module is started, the surgical end effector and the lesion markers are identified and tracked simultaneously, the relative spatial position relationship between the surgical instrument and the lesion is calculated, and the position coordinates of the puncture path are calculated.
2. The integrated surgical positioning and navigation system of claim 1, wherein the working principle of the surgical positioning and navigation system specifically comprises:
the surgical positioning navigation comprises three parts: navigation coordinate system calibration, distal navigation and proximal navigation; before the surgical positioning navigation system and the surgical robot are put into use, navigation coordinate system calibration (108) is first performed, the purpose of which is to determine the directions of the surgical robot coordinate system and the pitch rotation point coordinate and rotation directions of the lidar; when the XYZ three axes of the multi-degree-of-freedom mechanical arm device (1) are all in the initial position, namely the zero state, the center point of the top of the Z-axis moving platform is set as the origin O of the surgical robot coordinate system (103); the directions of the surgical robot coordinate system are X, Y, Z, α, β and γ, respectively representing the motion directions of the XYZ three-dimensional moving platform and of the roll, yaw and pitch axes of the three-axis gimbal, and the coordinate of the pitch rotation point of the lidar is Q; the lidar acquires depth information of objects in the field of view and projects it in the imaging system coordinate system (102); calibration uses a fixed-position calibration block C with markers, a block marker point C_h having coordinates (r_Ch, θ_Ch, φ_Ch) in the imaging system coordinate system, h being the marker point number, h = 1, 2, …, G, and a marker point A_f on the surgical end effector having coordinates (r_Af, θ_Af, φ_Af), f being the marker point number, f = 1, 2, …, E; when calibrating the XYZ three-axis movement directions, the block marker points C_h are used as the reference, and the relative movement direction of the markers under a single-axis move is calculated as that axis direction: the X axis, Y axis or Z axis is moved step by step, the average coordinate difference (Δr_Ch, Δθ_Ch, Δφ_Ch) of the block marker points C_h is calculated for each axis, where Δr_Ch, Δθ_Ch and Δφ_Ch respectively denote the coordinate differences of the block marker points C_h between the moments before and after the moving platform moves, and conversion into the rectangular coordinate system yields the vectors X, Y and Z as the calibrated axis directions; when calibrating the roll, pitch and yaw movement directions, the marker points A_f on the surgical end effector are used as the reference, each rotation axis is rotated in turn, the plane in which the marker point trajectory lies in space is calculated, and the three planes represent the three movement directions of the gimbal; the lidar pitch rotation point coordinate Q is obtained by measurement; a lung image (116) of the patient is acquired by a CT machine before the operation, and image segmentation (105) and three-dimensional reconstruction (106) of the CT image yield the lesion marker points and lesion point cloud position in the CT coordinate system (101) as well as the puncture path (107) confirmed by the doctor;
during the surgical execution, the mechanical arm initializes (112) to complete the device self-test, and the multi-degree-of-freedom mechanical arm device drives the surgical end effector to move toward the lesion from far to near; in the early phase, the multi-degree-of-freedom mechanical arm device and the surgical end effector are far away from the lesion and only the lesion markers can be guaranteed to be in the field of view of the imaging device (109); at this time, distal navigation (113) is adopted: the imaging device identifies the patient lesion marker point set (203), the lesion marker point set positions are projected into the surgical robot coordinate system through the transformation between the imaging system coordinate system (102) and the surgical robot coordinate system (103), point cloud registration (201) is then performed between the lesion marker point sets in the surgical robot coordinate system and the CT coordinate system to obtain a transformation matrix, and the lesion points (202) are converted into the surgical robot coordinate system (103), thereby navigating the movement of the mechanical arm; in this process, the lidar moves a distance D along the X axis, the distance between the actual signal receiving end of the lidar camera and the lidar pitch rotation point coordinate Q is L, and the pitch adjustment angle is θ, the actual receiving point coordinate R of the lidar camera then being determined from Q, D, L and θ; performing distal navigation, the lidar observes the marker point set (203) around the patient's lesion, and the returned coordinates (204) of the lesion marker point set are P_u = (r_u, θ_u, φ_u), where u is the lesion marker number, u = 1, 2, …, M, and P_u is a point in the imaging system coordinate system centered at R; the point P_u in the imaging system coordinate system is converted into the point P_u* in the rectangular coordinate system with R as the origin, that is, the coordinates of the lesion marker point set in the surgical robot coordinate system are the sums of R and the corresponding xyz components of P_u*, denoted M_u; the coordinates (205) of each marker point obtained from the CT image are denoted N_u; point set registration (201) or (104) is performed on M_u and N_u;
when the surgical end effector moves to the proximal end of the patient's lesion, proximal navigation (114) is performed: the lidar camera simultaneously observes (110) the two types of markers (304) and (203) near the surgical end effector and the lesion, and acquires their coordinate system information at the same time, thereby obtaining the positional relationship between the surgical end effector marker point set A_f (302) and the perilesional markers (303); the lesion marker point set (301) in the imaging system coordinate system and the CT marker point set converted into the imaging system coordinate system, namely point cloud P_u (303) and point cloud N_u* (305), are matched (301) or (104), where N_u* is the projection of N_u in the imaging system coordinate system; substituting into the optimal rotation matrix and translation matrix from the completed point cloud registration yields the positional relationship between the lesion and the surgical end effector, i.e. the solution of the lesion coordinate B_0 free of pitch translation error; considering that establishing the end effector coordinates on the surgical end effector carries systematic errors due to manual operation, the lesion position is no longer restored to the surgical robot coordinate system (103); instead, the relative positional relationship between the surgical end effector and the lesion is measured by the lidar, and the surgical end effector is controlled to move in three-dimensional space to puncture the lesion, the instrument tip coordinate system established at a fixed point T' on the instrument tip being consistent in direction with the surgical robot coordinate system; during the puncture and needle insertion of the surgical end effector, the lidar tracks both types of markers throughout, the method is used to solve and track the lesion position coordinates, and the puncture path starting point coordinates are calculated; the marker point set measurements are noisy, and an optimal position value is estimated by Kalman filtering; finally, the vector relationship between the surgical end effector and the lesion is converted into the motion amount of each mechanical arm axis, the mechanical arm is adjusted to move to the needle insertion preparation position (115), and during the whole needle insertion process the imaging device tracks both types of markers at all times, realizing real-time confirmation and correction (111) of the lesion coordinates and the puncture path starting point coordinates.
3. The integrated surgical positioning and navigation system of claim 2, wherein the point cloud registration (301) that registers the point sets P_u and N_u* works as follows: for the two point sets to be registered, P_u and N_u*, local geometric features are first constructed and the point cloud data are repositioned according to these local geometric features (401); an iterative algorithm (402) is then mainly used to register the two point sets, and the alignment transformation between them is chosen to minimize the objective function

f(S, T) = min Σ_i ‖ N*_u,i − (P_u,i S + T) ‖²

so that the rotation parameters and translation parameters between the point cloud to be registered and the reference point cloud are found and the two point sets satisfy the optimal match under a given measurement criterion (403), where S is the rotation matrix, T is the translation matrix, and ‖·‖ denotes the norm; after the optimal rotation matrix S and translation matrix T are found, the focus center coordinate V_0 obtained from the CT image undergoes the same rotation and translation, yielding the focus coordinate (404) in the surgical robot coordinate system, B_0 = V_0 S + T; once the focus coordinate in the surgical robot coordinate system is known, the surgical end effector is controlled to move to the focus; in addition, the point cloud registration (201) uses the same method, except that the registered point clouds are the point sets M_u and N_u (205).
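When point correspondences are known, the optimal S and T of claim 3 have a closed-form SVD solution (the Kabsch step used inside ICP-style iterative registration). The sketch below is a demonstration under that assumption, using the row-vector convention B_0 = V_0 S + T from the claim and synthetic points in place of real marker measurements; the lesion center V0 is a made-up value.

```python
import numpy as np

def best_fit_transform(P, N):
    """Rotation S and translation T minimizing sum_i ||N_i - (P_i @ S + T)||^2
    for corresponding (n, 3) point sets P and N (row-vector convention)."""
    cp, cn = P.mean(axis=0), N.mean(axis=0)
    H = (P - cp).T @ (N - cn)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against a reflection
    S = U @ np.diag([1.0, 1.0, d]) @ Vt
    T = cn - cp @ S
    return S, T

# synthetic check: transform a point set by a known S, T and recover them
rng = np.random.default_rng(0)
P_u = rng.random((6, 3))
theta = 0.3
S_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
T_true = np.array([0.5, -1.0, 2.0])
N_u = P_u @ S_true + T_true
S, T = best_fit_transform(P_u, N_u)

V0 = np.array([1.0, 2.0, 3.0])              # hypothetical CT lesion center
B0 = V0 @ S + T                             # lesion after rotation + translation
```

In the full iterative scheme of the claim, this closed-form solve is repeated after each re-estimation of correspondences until the objective stops decreasing.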
4. The integrated surgical positioning and navigation system according to claim 3, wherein the imaging device is replaced by a binocular stereoscopic vision camera or a structured light stereoscopic vision camera; imaging with the binocular stereoscopic vision camera is based on the parallax principle: surgical images containing the markers are shot by the cameras from different positions, the marker patterns are segmented from the images, the three-dimensional geometric information of the markers is obtained by calculating the positional deviation between corresponding marker points in the left and right camera images, and by processing the three-dimensional coordinate information of the marker point set, the spatial positions and attitudes of the focus and the surgical instrument and the spatial position of the puncture path are obtained;
in addition, the basic principle of the structured light stereoscopic vision camera is as follows: light with certain structural characteristics is projected onto the photographed object by a projection device and the object is photographed by a camera; a spherical coordinate system with the camera at its origin is established as the imaging system coordinate system; light projected onto the target object returns different image phase information depending on the depth of the object surface, and this phase change is converted into depth information by calculation, thereby yielding three-dimensional position information; the positions of the markers are obtained by segmenting the image shot by the camera, and the spatial position information of the segmented markers is obtained by phase calculation; likewise, by processing the three-dimensional position information of the marker point set, the spatial positions and attitudes of the focus and the surgical instrument and the spatial position of the puncture path are obtained, the photographed object here being the chest of the patient and the surgical end effector.
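For the binocular branch of claim 4, the positional deviation between corresponding left/right marker points is the disparity d, and for a rectified pinhole stereo pair the depth follows Z = f·B/d. The sketch below back-projects one matched marker pixel; all camera parameters (focal length, baseline, principal point) and the pixel coordinates are illustrative, not values from the patent.

```python
def triangulate_marker(u_left, v_left, disparity, f, B, cx, cy):
    """Back-project a matched marker pixel into left-camera coordinates (mm).

    Rectified pinhole stereo model: Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f.
    All intrinsic/extrinsic values used here are illustrative.
    """
    Z = f * B / disparity              # depth from disparity
    X = (u_left - cx) * Z / f          # lateral offset from principal point
    Y = (v_left - cy) * Z / f
    return X, Y, Z

# marker centroid at (1100, 620) in the left image with a 336 px disparity,
# focal length 1400 px, baseline 120 mm, principal point (960, 540)
X, Y, Z = triangulate_marker(1100, 620, 336, f=1400, B=120, cx=960, cy=540)
```

Repeating this for every segmented marker centroid yields the three-dimensional marker point set that the claim then processes for pose estimation.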
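The structured-light branch recovers depth from the phase change of projected fringes. One common concrete instance, not specified by the patent, is four-step phase-shifting profilometry; the sketch below synthesizes the four captures for a point of known phase and recovers that phase (the fringe offset, amplitude, and phase value are made up for the demonstration).

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Wrapped fringe phase from four captures shifted by 0, 90, 180, 270 deg:
    I_k = A + B*cos(phi + k*pi/2), hence phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

# synthesize the four captures for a surface point of known phase 0.7 rad
phi_true = 0.7
A, B = 1.0, 0.5                       # fringe offset and amplitude (made up)
I1, I2, I3, I4 = (A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4))
phi = wrapped_phase(I1, I2, I3, I4)
# in a calibrated system the deviation of phi from a flat reference plane's
# phase is mapped to depth, e.g. depth = c * (phi - phi_ref) for a constant c
```

The offset A and amplitude B cancel in the arctangent, which is why the phase, and hence the depth, can be recovered without knowing the surface reflectance.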
CN202111402396.9A 2021-11-24 2021-11-24 Integrated operation positioning navigation system Active CN114041875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111402396.9A CN114041875B (en) 2021-11-24 2021-11-24 Integrated operation positioning navigation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111402396.9A CN114041875B (en) 2021-11-24 2021-11-24 Integrated operation positioning navigation system

Publications (2)

Publication Number Publication Date
CN114041875A CN114041875A (en) 2022-02-15
CN114041875B true CN114041875B (en) 2023-07-18

Family

ID=80210611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111402396.9A Active CN114041875B (en) 2021-11-24 2021-11-24 Integrated operation positioning navigation system

Country Status (1)

Country Link
CN (1) CN114041875B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612536B (en) * 2022-03-22 2022-11-04 北京诺亦腾科技有限公司 Method, device and equipment for identifying three-dimensional model of object and readable storage medium
CN114767031B (en) * 2022-03-31 2024-03-08 常州朗合医疗器械有限公司 Endoscope apparatus, position guidance apparatus, system, method, and computer-readable storage medium for endoscope
CN114748164B (en) * 2022-05-07 2022-11-04 鑫君特(苏州)医疗科技有限公司 Operation positioning device
CN114869424B (en) * 2022-05-12 2023-11-21 首都医科大学附属北京朝阳医院 Operation puncture positioning equipment
CN114757995B (en) * 2022-06-16 2022-09-16 山东纬横数据科技有限公司 Medical instrument visualization simulation method based on data identification
CN115349953B (en) * 2022-08-03 2024-03-15 江苏省人民医院(南京医科大学第一附属医院) System for guiding instrument positioning based on external marking
CN115721417B (en) * 2022-09-09 2024-01-30 苏州铸正机器人有限公司 Device and method for measuring full visual field of tail end pose of surgical robot
CN115597821B (en) * 2022-12-15 2023-03-14 中国空气动力研究与发展中心超高速空气动力研究所 Large hypersonic high-temperature wind tunnel model feeding system
CN117338422B (en) * 2023-10-30 2024-04-05 赛诺威盛医疗科技(扬州)有限公司 Space registration and kinematics solver control method, system and device
CN117316393B (en) * 2023-11-30 2024-02-20 北京维卓致远医疗科技发展有限责任公司 Method, apparatus, device, medium and program product for precision adjustment
CN117340898B (en) * 2023-12-05 2024-02-20 真健康(广东横琴)医疗科技有限公司 Kinematic analysis method for miniaturized hybrid puncture robot
CN117474906B (en) * 2023-12-26 2024-03-26 合肥吉麦智能装备有限公司 Intraoperative X-ray machine resetting method based on spine X-ray image matching
CN117462267B (en) * 2023-12-27 2024-03-01 苏州铸正机器人有限公司 Aiming method of robot end effector under perspective guidance

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110215284A (en) * 2019-06-06 2019-09-10 上海木木聚枞机器人科技有限公司 A kind of visualization system and method
CN110461265A (en) * 2017-01-31 2019-11-15 美敦力导航股份有限公司 Method and apparatus for the navigation based on image

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3041707C (en) * 2011-11-15 2021-04-06 Manickam UMASUTHAN Method of real-time tracking of moving/flexible surfaces
CN106714681A (en) * 2014-07-23 2017-05-24 凯内蒂科尔股份有限公司 Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11911110B2 (en) * 2019-01-30 2024-02-27 Medtronic Navigation, Inc. System and method for registration between coordinate systems and navigation of selected members
WO2020190832A1 (en) * 2019-03-20 2020-09-24 Covidien Lp Robotic surgical collision detection systems
CN110025891A (en) * 2019-04-22 2019-07-19 上海大学 Transcranial magnetic stimulation operation vision guided navigation device
CN111616800B (en) * 2020-06-09 2023-06-09 电子科技大学 Ophthalmic surgery navigation system
CN113499137B (en) * 2021-07-07 2022-07-12 南开大学 Surgical robot navigation positioning system and measurement visual angle multi-target optimization method


Also Published As

Publication number Publication date
CN114041875A (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN114041875B (en) Integrated operation positioning navigation system
CN107468350B (en) Special calibrator for three-dimensional image, operation positioning system and positioning method
CN110051436B (en) Automated cooperative work assembly and application thereof in surgical instrument
US8509503B2 (en) Multi-application robotized platform for neurosurgery and resetting method
CN101474075B (en) Navigation system of minimal invasive surgery
Lathrop et al. Minimally invasive holographic surface scanning for soft-tissue image registration
CN110215284A (en) A kind of visualization system and method
JP2002186603A (en) Method for transforming coordinates to guide an object
WO2022218388A1 (en) Method and apparatus for performing positioning by means of x-ray image, and x-ray machine and readable storage medium
CN113768527B (en) Real-time three-dimensional reconstruction device based on CT and ultrasonic image fusion and storage medium
CN111227935A (en) Surgical robot navigation positioning system
JP7071078B2 (en) Robot X-ray fluoroscopic navigation
CN112043382A (en) Surgical navigation system and use method thereof
CN112932667A (en) Special positioning scale for three-dimensional image, operation navigation system and positioning method thereof
CN111603205A (en) Three-dimensional image reconstruction and positioning analysis system used in CT (computed tomography) cabin of puncture surgical robot
CN112190328A (en) Holographic perspective positioning system and positioning method
CN113940755A (en) Surgical operation planning and navigation method integrating operation and image
CN112006776A (en) Surgical navigation system and registration method thereof
CN114711969B (en) Surgical robot system and application method thereof
CN116883471B (en) Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture
CN111870343A (en) Surgical robot system
WO2023214398A1 (en) Robotic arm navigation using virtual bone mount
CN212281375U (en) C-shaped arm X-ray machine with operation positioning and navigation functions
CN209826968U (en) Surgical robot system
Tseng et al. Image‐guided robotic navigation system for neurosurgery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant