CN115120349A - Computer-readable storage medium, electronic device, and surgical robot system - Google Patents
- Publication number
- CN115120349A (application CN202110315595.XA)
- Authority
- CN
- China
- Prior art keywords: image, information, punching, acquisition device, target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B34/30 — Surgical robots
- A61B17/3403 — Trocars; puncturing needles: needle locating or guiding means
- A61B34/10 — Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20 — Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/70 — Manipulators specially adapted for use in surgery
- A61B34/77 — Manipulators with motion or force scaling
- A61B90/06 — Measuring instruments not otherwise provided for
- A61B90/37 — Surgical systems with images on a monitor during operation
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/70 — Denoising; smoothing
- G06T7/344 — Image registration using feature-based methods involving models
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- A61B2034/101 — Computer-aided simulation of surgical operations
- A61B2034/105 — Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107 — Visualisation of planned trajectories or target regions
- A61B2034/108 — Computer-aided selection or customisation of medical implants or cutting guides
- A61B2034/2046 — Tracking techniques
- A61B2034/2065 — Tracking using image or pattern recognition
- A61B2034/2068 — Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
- A61B2034/301 — Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
- A61B2090/064 — Measuring force, pressure or mechanical tension
Abstract
The invention relates to a computer-readable storage medium, an electronic device and a surgical robot system. The storage medium is applied to a surgical robot: before a hole is punched in the body surface of a surgical object, a target pose of an image acquisition device is planned and the device is driven to that pose, so that it can monitor the punching state and generate guide information during punching to guide the operation. This greatly reduces the dependence of the punching operation on the operator's experience and improves surgical safety.
Description
Technical Field
The invention belongs to the technical field of medical instruments, and particularly relates to a computer-readable storage medium, an electronic device and a surgical robot system.
Background
Surgical robots are designed to perform complex surgical procedures precisely and minimally invasively. They emerged in response to the limitations of traditional surgery: three-dimensional imaging overcomes the limits of the human eye and presents internal organs to the operator more clearly, and in narrow regions that a human hand cannot reach, a surgical robot can still steer surgical instruments to move, swing, grip and rotate through 360° while filtering out hand tremor. This improves surgical accuracy and yields smaller wounds, less bleeding, faster postoperative recovery and a much shorter postoperative hospital stay. Surgical robots are therefore popular with both doctors and patients and are widely used in clinical practice.
As in traditional surgery, before operating with a surgical robot the lesion must be located, a punching point determined according to its position, and a hole punched at that point so that the operation can proceed. The punching device is usually very sharp, and the operator often has to apply considerable force to pierce the body surface of the surgical object. The punching operation therefore depends heavily on the operator's experience: an inexperienced operator can easily apply too much force, so that the punching device stabs internal tissue after penetrating the body surface, causing unnecessary injury to the surgical object and compromising surgical safety.
Disclosure of Invention
The invention aims to provide a computer-readable storage medium, an electronic device and a surgical robot system. The storage medium is applied to a surgical robot to guide the operator during punching, so that the punching device does not stab internal tissue after penetrating the body surface. This improves surgical safety while reducing the experience required of the operator.
To achieve the above object, the present invention provides a computer-readable storage medium having stored thereon a program which, when executed, performs the steps of:
planning a target pose of an image acquisition device according to a first hole site on the body surface of a surgical object, so that when the image acquisition device is in the target pose, the first hole site is within the visual field of the image acquisition device; and
planning a motion scheme for the image acquisition device, so that after being inserted into the body of the surgical object through a second hole site on the body surface of the surgical object, the image acquisition device can move to the target pose according to the motion scheme.
Optionally, the program further performs the steps of:
establishing a three-dimensional model according to first image information acquired by the image acquisition device inside the body of the surgical object; and
acquiring punching state information and generating guide information according to the three-dimensional model and second image information, acquired by the image acquisition device while in the target pose, of the punching tip of the punching device penetrating the body surface of the surgical object at the first hole site.
Optionally, the target pose comprises a first target position and a target posture;
the motion scheme at least comprises a global motion path planned according to the second hole site and the first target position; the image acquisition device reaches the first target position when it moves along the global motion path.
Optionally, when an obstacle exists on the global motion path, the motion scheme further includes a local motion path planned according to the global motion path and the three-dimensional model; the local motion path is disposed outside a boundary of the obstacle, and a start point and an end point of the local motion path are both on the global motion path.
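The local-path rule above can be sketched in a few lines. The spherical obstacle model, the safety margin, and all function and parameter names are illustrative assumptions rather than details from the patent:

```python
import numpy as np

def plan_local_detour(a, b, center, radius, margin=0.005):
    """Plan a local detour around a spherical obstacle on segment a->b.

    Returns [a, waypoint, b]: the detour waypoint is pushed outside the
    obstacle boundary (plus a safety margin), while both endpoints stay
    on the global motion path, as the claim requires.
    """
    a, b, c = (np.asarray(x, float) for x in (a, b, center))
    ab = b - a
    # Point of the segment nearest the obstacle centre.
    t = np.clip(np.dot(c - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    away = closest - c
    d = np.linalg.norm(away)
    if d >= radius:                      # path already clears the obstacle
        return [a, b]
    away = away / d if d > 1e-12 else np.array([0.0, 0.0, 1.0])
    waypoint = c + away * (radius + margin)
    return [a, waypoint, b]
```

A real planner would insert several such waypoints and smooth the result; the single-waypoint detour only illustrates the boundary condition stated in the claim.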
Optionally, the motion scheme further includes a rotation scheme planned according to the posture of the image acquisition device when it reaches the first target position and according to the target posture, so that the image acquisition device reaches the target posture when moving according to the rotation scheme.
Optionally, during the movement of the image acquisition device according to the motion scheme, the program further performs the following steps:
acquiring the collision torque between the image acquisition device and a target tissue, and judging whether the collision torque is greater than a preset torque threshold; if so, controlling the image acquisition device to stop moving and generating prompt information to prompt an intervention operation.
Optionally, after the intervention operation has been performed, so that the image acquisition device is away from the target tissue and at a safe position, the program further performs the steps of:
acquiring the position on the global motion path closest to the safe position as a second target position, and driving the image acquisition device to move to the second target position so that it returns to the global motion path.
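Finding the position on the global motion path closest to the safe position reduces to a closest-point query on a polyline. A minimal sketch, assuming the path is given as a sequence of 3-D waypoints (function and parameter names are illustrative):

```python
import numpy as np

def closest_point_on_path(path, p):
    """Return the point on a piecewise-linear path closest to p.

    path: (N, 3) sequence of waypoints; p: (3,) current (safe) position.
    """
    path = np.asarray(path, dtype=float)
    p = np.asarray(p, dtype=float)
    best, best_d = path[0], np.inf
    for a, b in zip(path[:-1], path[1:]):
        ab = b - a
        # Parameter of the orthogonal projection, clamped to the segment.
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab
        d = np.linalg.norm(p - q)
        if d < best_d:
            best, best_d = q, d
    return best
```

The returned point would serve as the second target position at which the image acquisition device rejoins the global motion path.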
Optionally, the image acquisition device is mounted on an image arm, and a joint sensor arranged on the image arm acquires the actual joint torque of the image arm;
the program performs the following steps to acquire the collision torque:
receiving the actual joint torque;
acquiring the theoretical joint torque of the image arm; and
calculating the difference between the actual joint torque and the theoretical joint torque as the collision torque.
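The torque-difference test described above can be sketched as follows; the threshold value, units and function names are illustrative assumptions, not figures from the patent:

```python
import numpy as np

TORQUE_THRESHOLD = 2.0  # N·m, illustrative preset threshold

def collision_torque(actual, theoretical):
    """Per-joint collision torque: measured torque minus the torque
    predicted by the arm's dynamic model."""
    return np.asarray(actual, float) - np.asarray(theoretical, float)

def must_stop(actual, theoretical, threshold=TORQUE_THRESHOLD):
    """Stop the image arm if any joint's collision torque exceeds the
    preset threshold, then prompt an intervention operation."""
    return bool(np.any(np.abs(collision_torque(actual, theoretical)) > threshold))
```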
Optionally, the image acquisition device is mounted on an image arm, and the program, when executed, drives the image arm to move so as to move the image acquisition device according to the motion scheme, and further performs the steps of:
planning a motion trajectory of the image acquisition device to obtain its pose as a function of time when moving according to the motion scheme, and acquiring speed, acceleration and position information for the joints of the image arm; and
driving the image arm according to the speed, acceleration and position information so that the image acquisition device moves along the planned trajectory.
Optionally, the program selects a motion trajectory planning method according to a predetermined condition.
Optionally, the predetermined condition comprises at least one of: time-optimal, smoothest trajectory, or energy-optimal.
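A time-optimal condition of the kind listed above is commonly realised as a trapezoidal velocity profile (or, with jerk limits, the S-shaped profile referenced in the figures). This is a minimal trapezoidal sketch under assumed names and units; the patent does not give the actual planner:

```python
def trapezoidal_profile(dist, v_max, a_max, dt=0.001):
    """Sample a time-optimal trapezoidal velocity profile over `dist`.

    Returns a list of (time, position, velocity) samples. Falls back to
    a triangular profile when the distance is too short to reach v_max.
    """
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > dist:                      # triangular profile
        t_acc = (dist / a_max) ** 0.5
        v_peak, d_acc = a_max * t_acc, dist / 2
        t_flat = 0.0
    else:
        v_peak, t_flat = v_max, (dist - 2 * d_acc) / v_max
    total = 2 * t_acc + t_flat
    samples, t = [], 0.0
    while t <= total:
        if t < t_acc:                         # acceleration phase
            s, v = 0.5 * a_max * t ** 2, a_max * t
        elif t < t_acc + t_flat:              # cruise phase
            s, v = d_acc + v_peak * (t - t_acc), v_peak
        else:                                 # deceleration phase
            td = total - t
            s, v = dist - 0.5 * a_max * td ** 2, a_max * td
        samples.append((t, s, v))
        t += dt
    return samples
```

Sampling the pose along such a profile, then mapping each pose through the arm's inverse kinematics, yields the joint position, speed and acceleration information mentioned in the claim.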
Optionally, the program is configured to perform the steps of:
establishing a first body feature image model according to first body surface data and lesion data of a surgical object in a first state, so as to plan a first pre-hole site and a second pre-hole site on the first body feature image model;
establishing a second body feature image model according to second body surface data and lesion data of the surgical object in a second state;
registering the second body feature image model with the first body feature image model to obtain a first target hole site and a second target hole site on the second body feature image model;
planning a predetermined target pose of the image acquisition device according to the first target hole site, so that the first target hole site is within the visual field of the image acquisition device when it is in the predetermined target pose, the predetermined target pose comprising a predetermined target position;
planning a predetermined global motion path according to the second target hole site and the predetermined target position, so that the image acquisition device reaches the predetermined target position when inserted into the body of the surgical object at the second target hole site and moved along the predetermined global motion path; and
obtaining the first hole site, the second hole site, the target pose and the global motion path according to the mapping relation between the second body feature image model and the surgical object.
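If the registration step above is performed on paired fiducial points of the two body feature image models, it can be sketched with the classic Kabsch/SVD rigid alignment. The patent does not specify the algorithm, so this is an assumed method with illustrative names:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Classic Kabsch/SVD solution on paired fiducial points, e.g. markers
    visible in both body feature image models.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Applying (R, t) to the pre-hole sites planned on the first model transfers them onto the second model, giving the target hole sites.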
Optionally, the program performs the steps of:
establishing a first body feature image model according to first body surface data and lesion data of a surgical object in a first state, so as to plan a first pre-hole site and a second pre-hole site on the first body feature image model;
planning a predetermined target pose of the image acquisition device according to the first pre-hole site, so that the first pre-hole site is within the visual field of the image acquisition device when it is in the predetermined target pose, the predetermined target pose comprising a predetermined target position;
planning a predetermined global motion path according to the second pre-hole site and the predetermined target position, so that the image acquisition device reaches the predetermined target position when inserted into the body of the surgical object from the second pre-hole site and moved along the predetermined global motion path;
establishing a second body feature image model according to second body surface information of the surgical object in a second state; and
registering the second body feature image model with the first body feature image model, and obtaining the first hole site, the second hole site, the target pose and the global motion path according to the registration result and the mapping relation between the second body feature image model and the surgical object.
Optionally, the image acquisition device is inserted into the body of the surgical object from the second hole site and is at an initial position;
before executing the motion scheme, the program further performs the steps of:
planning a desired starting position and driving the image acquisition device to move from the initial position to the desired starting position;
driving the image acquisition device to rotate by a predetermined angle at the desired starting position to determine a desired starting posture of the image acquisition device; and
driving the image acquisition device to rotate to the desired starting posture;
the desired starting position and the desired starting posture are the starting position and starting posture of the image acquisition device when it moves along the global motion path.
Optionally, after the punching tip of the punching device penetrates the body surface of the surgical object at the first hole site, the program further performs the following step:
adjusting the pose of the image acquisition device so that the punching tip always remains within the visual field of the image acquisition device.
Optionally, the punching state information includes position information and speed information of the punching tip, and the guide information comprises at least one of punching progress information, collision reminder information and expected punching direction information;
the program performs the steps of:
acquiring the position information of the punching tip in real time according to the second image information and the three-dimensional model; and
acquiring the speed of the punching tip from the change in its position;
the program also performs at least one of the following steps:
generating the punching progress information according to the current position information of the punching tip and a preset punching depth;
acquiring a collision probability according to the position information of the punching tip, the speed information of the punching tip and the three-dimensional model, and generating the collision reminder information; and
acquiring the expected punching direction information according to the position information of the punching tip and the three-dimensional model.
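The progress and speed computations above are straightforward once the tip position is known from the second image information. A minimal sketch with illustrative names; the entry point, sampling interval and clamping behaviour are assumptions:

```python
import numpy as np

def punching_progress(entry_point, tip_position, preset_depth):
    """Fraction of the preset punching depth already traversed (0..1)."""
    depth = np.linalg.norm(np.asarray(tip_position, float) -
                           np.asarray(entry_point, float))
    return min(depth / preset_depth, 1.0)

def tip_speed(prev_pos, cur_pos, dt):
    """Tip speed estimated from two consecutive image-derived positions
    taken dt seconds apart."""
    return np.linalg.norm(np.asarray(cur_pos, float) -
                          np.asarray(prev_pos, float)) / dt
```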
Optionally, the program performs the following steps to obtain the collision reminder information:
obtaining the target tissue closest to the punching device according to the position information of the punching tip and the three-dimensional model;
calculating the distance between the punching device and the target tissue; and
calculating the time to collision from the speed information of the punching tip and the distance, and judging whether the time to collision is greater than a set time threshold; if not, the collision probability is judged to be high and the collision reminder information is generated.
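The time-to-collision test can be sketched directly from the description above; the time threshold and function names are illustrative assumptions:

```python
def collision_alert(distance, speed, time_threshold=0.5):
    """Warn when the predicted time to contact falls below the threshold.

    distance: tip-to-target-tissue distance (m); speed: tip speed (m/s).
    A near-zero speed means the tip is not approaching, so no alert.
    """
    if speed <= 1e-9:
        return False
    time_to_contact = distance / speed
    return time_to_contact <= time_threshold
```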
Optionally, the program performs the following steps to obtain the expected punching direction information:
acquiring a tangent plane of the target tissue;
acquiring the direction vector of the punching tip of the punching device; and
projecting the direction vector onto the tangent plane to obtain the expected punching direction information.
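The projection step above is a standard removal of the normal component. A minimal sketch, assuming the tangent plane is given by its normal vector (names are illustrative):

```python
import numpy as np

def expected_direction(tip_vector, plane_normal):
    """Project the punching tip's direction vector onto the tissue
    tangent plane and normalise it, giving the expected punching
    direction in the plane."""
    v = np.asarray(tip_vector, float)
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    proj = v - np.dot(v, n) * n          # remove the normal component
    norm = np.linalg.norm(proj)
    if norm < 1e-12:
        raise ValueError("tip direction is perpendicular to the tangent plane")
    return proj / norm
```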
To achieve the above object, the present invention also provides an electronic device comprising a processor and a computer-readable storage medium as described in any of the preceding, the processor being configured to execute a program stored on the computer-readable storage medium.
To achieve the above object, the present invention also provides a surgical robot system including:
a tool arm for driving the punching tip to penetrate the body surface of the surgical object at a first hole site on the body surface and enter the body of the surgical object;
an image arm for driving the image acquisition device to be inserted into the body of the surgical object at a second hole site on the body surface, acquiring first image information inside the body of the surgical object, and acquiring second image information of the punching tip after it enters the body; and
a control unit communicatively coupled to the tool arm and the image arm and configured to execute a program stored on a computer readable storage medium as in any of the preceding.
Optionally, the control unit is further configured to generate guide information and/or prompt information, the prompt information prompting an intervention operation;
the surgical robot system further comprises a prompting unit communicatively connected to the control unit and configured to receive the guide information and provide guidance, and/or to receive the prompt information and prompt an intervention operation.
Optionally, the surgical robot system further includes the punching device; a marker is disposed on the punching tip of the punching device, and the image acquisition device is configured to identify the marker and acquire the second image information of the punching tip.
Optionally, the surgical robot system further includes the image acquisition device, which is either a controllably bendable image acquisition device or a non-bendable image acquisition device.
Compared with the prior art, the computer-readable storage medium, the electronic device and the surgical robot system of the invention have the following advantages:
The computer-readable storage medium stores a program which, when executed, plans a target pose of an image acquisition device according to a first hole site on the body surface of a surgical object, so that the first hole site is within the visual field of the image acquisition device when it is in the target pose, and plans a motion scheme so that, after being inserted into the body through a second hole site, the image acquisition device can move to the target pose and then monitor and/or guide the punching process. When the storage medium is applied in a surgical robot system that performs a punching operation, the target pose and motion scheme are planned before punching, the image acquisition device is moved to the target pose, and only then does punching begin, with the image acquisition device monitoring and guiding the operation. This prevents the punching tip of the punching device from stabbing tissue, improves punching safety, and reduces the experience required of the operator.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting it. In the drawings:
FIG. 1 is a schematic view of an application scenario of a surgical robotic system provided in accordance with an embodiment of the present invention;
FIG. 2 is a schematic view of a surgical robotic system for perforating a body surface of a surgical object according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a punch device of a surgical robotic system according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of a global motion path planned by a control unit of a surgical robotic system according to an embodiment of the present invention, the global motion path shown without an obstacle;
FIG. 5 is a schematic diagram of a global motion path and a local motion path planned by a control unit of a surgical robotic system provided in accordance with an embodiment of the present invention;
FIG. 6 is a flowchart of a method for guiding a punching process with a surgical robotic system according to an embodiment of the present invention;
FIG. 7 is a flow chart of a surgical robotic system in planning a target pose and a global motion path according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a first imaging device acquiring first body surface data and lesion data of a surgical object according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a second imaging device acquiring second body surface data of a surgical object according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating the mapping relation, established by the control unit, between the image model and the body surface of the surgical object according to an embodiment of the present invention;
FIG. 11 is a flow chart of a control unit of a surgical robotic system planning target pose and global motion scenarios provided by the present invention according to an alternative embodiment;
FIG. 12 is a graph of the position, velocity, acceleration and jerk of the endoscope over time when the control unit of the surgical robotic system plans the motion trajectory of the endoscope using an S-shaped trajectory planning method, according to an embodiment of the present invention;
FIG. 13 is a schematic structural view of an endoscope of a surgical robot system according to an embodiment of the present invention, the endoscope being a straight-tube endoscope;
FIG. 14 is a schematic structural view of an endoscope of a surgical robotic system according to an embodiment of the present invention, the endoscope being a bendable endoscope;
FIG. 15 is a schematic view of binocular camera imaging principles of an endoscope of a surgical robotic system provided in accordance with an embodiment of the present invention;
FIG. 16 is a flow chart of a control unit of a surgical robotic system for building a three-dimensional model from first image information according to an embodiment of the present invention;
FIG. 17 is a schematic diagram of a surgical robotic system configured to acquire a collision torque of an endoscope in accordance with an embodiment of the present invention;
FIG. 18 is a flow chart of a surgical robotic system acquiring punching state information provided in accordance with an embodiment of the present invention;
FIG. 19 is a flow chart of a surgical robotic system acquiring collision alert information and a desired punching direction provided in accordance with an embodiment of the present invention;
FIG. 20 is a schematic view of a control unit of a surgical robotic system acquiring a resection plane of a target tissue according to an embodiment of the present invention;
FIG. 21 is a schematic view of a control unit of a surgical robotic system obtaining a desired perforation direction according to an embodiment of the present invention;
Detailed Description
The embodiments of the present invention are described below by way of specific examples, and other advantages and effects of the present invention will be readily understood by those skilled in the art from the disclosure of this specification. The present invention may also be implemented or applied through other, different embodiments, and the details of this specification may be modified or changed in various respects without departing from the spirit and scope of the present invention. It should be noted that the drawings provided with these embodiments only illustrate the basic idea of the invention: the drawings show only the components related to the invention, rather than the number, shape, and size of the components in actual implementation. In actual implementation, the type, quantity, and proportion of the components may be changed freely, and the layout of the components may be more complicated.
Furthermore, each of the embodiments described below has one or more technical features; this does not mean that all of the technical features of any one embodiment must be used at the same time, or that the technical features of different embodiments can only be implemented separately. In other words, those skilled in the art may, according to the disclosure of the present invention and according to design specifications or implementation requirements, selectively implement some or all of the technical features of any one embodiment, or selectively combine some or all of the technical features of multiple embodiments, thereby increasing the flexibility of implementing the invention.
As used in this specification, the singular forms "a," "an," and "the" include plural referents, and "a plurality" refers to two or more referents, unless the content clearly dictates otherwise. The term "or" is generally employed in its sense including "and/or" unless the content clearly dictates otherwise. The terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; and direct, or indirect through intervening media, or internal to two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
To further clarify the objects, advantages, and features of the present invention, a more particular description of the invention is given below with reference to the appended drawings. It is to be noted that the drawings are in a very simplified form and are not drawn to precise scale; they are provided only to facilitate clear illustration of the embodiments of the present invention. The same or similar reference numbers in the drawings identify the same or similar elements.
Fig. 1 is a schematic view of an application scenario of the surgical robot system of the present invention, and fig. 2 is a schematic view of the surgical robot system perforating the body surface of a surgical object. Referring to fig. 1 and 2, the surgical robot system includes a control end and an execution end. The control end includes a doctor console and a doctor-end control device 10 disposed on the doctor console. The execution end includes a patient-end control device, a surgical operation device 20, an image display device 30, and the like. An image arm 21 and a tool arm 22 are mounted on the surgical operation device 20. The image arm 21 carries an image acquisition device for acquiring image information (for example, the first image information and second image information described later) of a region or device of interest; the image acquisition device is, for example, an endoscope 300. The tool arm 22 carries a perforating device 100 (as shown in fig. 2) or a surgical instrument, and a perforating tip of the perforating device 100, such as a tapered tip 110 (shown in fig. 3), is used to penetrate the surgical object at a first hole site M to perform a punching operation. In addition, the surgical robot system includes a control unit (not shown) communicatively coupled to the endoscope 300 and the image arm 21. The specific arrangement of the control unit is not limited in the embodiments of the present invention; in some embodiments, the control unit may be disposed entirely at the patient-end control device, entirely at the doctor-end control device, or partially at each, as long as it can perform the corresponding functions.
Before punching at the first hole site M with the surgical robot system, the control unit is configured to plan a target pose of the endoscope 300 according to the first hole site M, such that the first hole site M is located within the visual field range S of the endoscope 300 when the endoscope 300 is in the target pose. The control unit is further configured to plan a motion scheme of the endoscope 300 and, after the endoscope 300 is inserted into the body of the surgical object from a second hole site on the body surface of the surgical object, to drive the endoscope 300 to move to the target pose according to the motion scheme. As such, the endoscope 300 may be used to monitor and/or guide the punching operation performed at the first hole site M.
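The requirement that the first hole site lie within the visual field range at the target pose can be checked with a simple cone model of the endoscope's field of view. The following sketch is illustrative only — the function name, the cone model, and all numeric values are assumptions, not taken from this disclosure:

```python
import numpy as np

def in_field_of_view(cam_pos, view_dir, half_angle_deg, point, max_range):
    """Check whether `point` lies inside a viewing cone of the given half-angle
    opening along `view_dir` from `cam_pos`, within `max_range` (metres)."""
    v = np.asarray(point, float) - np.asarray(cam_pos, float)
    dist = np.linalg.norm(v)
    if dist == 0 or dist > max_range:
        return False
    # angle between the viewing axis and the ray to the point
    cos_angle = np.dot(v / dist, view_dir / np.linalg.norm(view_dir))
    return cos_angle >= np.cos(np.radians(half_angle_deg))
```

A planner could evaluate such a predicate for candidate target poses and keep only those for which the hole site is visible.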
In particular, the endoscope 300 may acquire first image information of a target region within the surgical object during movement and after reaching the target pose, and the control unit is further configured to build a three-dimensional model of the target region from the first image information.
Then, the control unit is configured to drive the tool arm 22 to move, so as to drive the perforating device 100 and cause the tapered tip 110 of the perforating device 100 to penetrate the body surface of the surgical object at the first hole site M to perform the punching operation. In this process, the endoscope 300 further acquires second image information of the tapered tip 110 after it penetrates the body surface of the surgical object, and the control unit is further configured to acquire punching state information from the three-dimensional model and the second image information and to generate guide information to guide the punching operation.
By planning the target pose of the endoscope 300 before punching, moving the endoscope 300 to the target pose, and building the three-dimensional model from the first image information of the target region in the surgical object before punching, the endoscope 300 can acquire the second image information of the tapered tip 110 as soon as the tapered tip 110 of the perforating device 100 penetrates the body surface of the surgical object. The punching state can thereby be monitored in real time in combination with the three-dimensional model to guide the punching process, preventing the tapered tip 110 from injuring human tissue, reducing the dependence of the surgery on the operator's experience, improving the safety of the surgery, shortening the operation time, and reducing operator fatigue. In this embodiment, the "target region" is determined by the specific operation: in a laparoscopic operation the target region is the abdominal cavity, and in a thoracoscopic operation the target region is the thoracic cavity. Hereinafter, the abdominal cavity in a laparoscopic operation is taken as an example of the target region.
In this embodiment, as shown in fig. 3, a marker 111 may be disposed on the surface of the tapered tip 110, and the marker 111 may be recognized by the endoscope 300 to facilitate the endoscope 300 obtaining the second image information of the tapered tip 110; the marker 111 is, for example, a reflective material or a luminous body coated on the surface of the tapered tip 110. Further, as shown in fig. 4, the target pose includes a first target position and a target posture, and the motion scheme at least includes a global motion path L1 planned according to the second hole site and the first target position. It can be understood that the end point of the global motion path L1 is the first target position, while its starting point may be determined according to the specific situation: it may be the second hole site, or another position related to the second hole site, which is not limited by the present invention, as long as the endoscope 300, after being inserted into the abdominal cavity of the surgical object and moving along the global motion path L1, can reach the first target position. It should also be noted that planning the global motion path L1 based on the second hole site and the first target position does not imply that the second hole site has been determined before planning; rather, it indicates that the global motion path is associated with the second hole site, e.g., it may start at the second hole site. In practice, the second hole site may be determined before the global motion path L1 is planned, or may be determined while the global motion path L1 is planned.
Therefore, after the endoscope 300 is inserted into the abdominal cavity of the surgical object, the control unit drives the image arm 21 to move so as to drive the endoscope 300 to move along the global motion path L1. As known to those skilled in the art, the image arm 21 comprises at least one joint. Preferably, before the endoscope 300 moves along the global motion path L1, the control unit is further configured to plan the motion trajectory of the endoscope 300, i.e., to apply time constraints to the global motion path L1 and obtain the change of the pose of the endoscope 300 with time as it moves along the global motion path L1. Then, through inverse kinematics calculation of the mechanical arm, the change of the position, velocity, and acceleration of each joint of the image arm 21 with time is obtained, that is, the position information, velocity information, and acceleration information of the image arm, so that the control unit controls each joint of the image arm 21 to move according to the velocity information, acceleration information, and position information.
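The chain described above — time-parameterize the path, solve inverse kinematics at each sample, then recover joint velocities and accelerations — can be sketched on a planar two-link arm. The real image arm's kinematics are not given in this disclosure, so the link lengths, the analytic two-link solver, and the straight-line path below are purely illustrative:

```python
import numpy as np

L1, L2 = 0.4, 0.3  # link lengths (m), illustrative only

def ik_2link(x, y):
    """Analytic inverse kinematics of a planar two-link arm (elbow-up branch)."""
    d2 = x * x + y * y
    c2 = np.clip((d2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2

def fk_2link(q1, q2):
    """Forward kinematics, used here to verify the IK solution."""
    return (L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
            L1 * np.sin(q1) + L2 * np.sin(q1 + q2))

# Time-parameterize a straight Cartesian path and convert it to joint space.
t = np.linspace(0.0, 2.0, 201)
xs = 0.30 + 0.10 * (t / 2.0)           # x sweeps from 0.30 m to 0.40 m
ys = np.full_like(t, 0.20)
q = np.array([ik_2link(x, y) for x, y in zip(xs, ys)])
qd = np.gradient(q, t, axis=0)          # joint velocity information
qdd = np.gradient(qd, t, axis=0)        # joint acceleration information
```

The joint position, velocity, and acceleration arrays play the role of the per-joint information that the control unit sends to the image arm.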
In some embodiments, as shown in fig. 4, the global motion path L1 completely avoids human tissue, so that the endoscope 300 can move directly along the global motion path L1 to the target position. In other embodiments, however, as shown in fig. 5, the global motion path L1 passes through a portion of human tissue, and that portion of tissue constitutes an obstacle P that impedes movement of the endoscope 300 along the global motion path. That is, an obstacle exists on the global motion path L1, in which case the endoscope 300 should temporarily deviate from the global motion path L1 until the obstacle P is avoided. Thus, the motion scheme further comprises a local motion path L2 planned according to the global motion path L1 and the three-dimensional model. The local motion path L2 is arranged outside the boundary of the obstacle P, and its end point lies on the global motion path L1, such that after the endoscope 300 moves along the local motion path L2 to avoid the obstacle P, it returns to the global motion path L1.
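A minimal two-dimensional sketch of such a local motion path: the global path runs along a straight line, a circular obstacle inflated by a safety margin blocks it, and the detour is an arc outside the inflated boundary whose end points rejoin the global path. The geometry and names are assumptions for illustration only:

```python
import numpy as np

def detour_waypoints(center_x, radius, margin, n=21):
    """Arc waypoints that leave the straight global path (here: the x-axis),
    pass over a circular obstacle centred at (center_x, 0), and rejoin the
    path. All points stay at distance radius + margin from the obstacle."""
    r = radius + margin
    angles = np.linspace(np.pi, 0.0, n)   # sweep over the top of the obstacle
    return np.stack([center_x + r * np.cos(angles),
                     r * np.sin(angles)], axis=1)

wp = detour_waypoints(0.5, 0.05, 0.01)
```

A real planner would derive the obstacle boundary from the three-dimensional model; the circle here stands in for that boundary.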
When the endoscope 300 has moved to the first target position, the first hole site M is within the visual field range S of the endoscope 300 if the endoscope 300 is exactly in the target posture. However, if the current posture of the endoscope 300 upon reaching the first target position deviates from the target posture, the motion scheme further includes a rotation scheme, and the endoscope 300 is rotated to the target posture according to the rotation scheme. In other words, when planning the motion scheme and driving the endoscope according to it, the control unit is further configured to plan the rotation scheme according to the current posture and the target posture when the endoscope 300 reaches the first target position, and to drive the endoscope 300 to rotate to the target posture according to the rotation scheme. The rotation scheme includes a rotation direction and a rotation angle of the endoscope 300 around the second hole site.
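The rotation scheme's direction and angle can be recovered from the relative rotation between the current posture and the target posture. A sketch, assuming postures are given as 3×3 rotation matrices (the disclosure does not specify a representation, and the near-180° case would need extra handling):

```python
import numpy as np

def rotation_scheme(R_cur, R_tgt):
    """Rotation axis (direction) and angle taking R_cur to R_tgt, from the
    relative rotation R = R_tgt @ R_cur.T (axis-angle / log map)."""
    R = R_tgt @ R_cur.T
    cos_a = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_a)
    if np.isclose(angle, 0.0):
        return np.array([0.0, 0.0, 1.0]), 0.0  # already aligned; axis arbitrary
    # off-diagonal antisymmetric part encodes the axis (fails only near 180 deg)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return axis / (2.0 * np.sin(angle)), angle
```

The returned axis and angle correspond to the rotation direction and rotation angle of the rotation scheme.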
Typically, the endoscope 300 is inserted into the abdominal cavity from the second hole site at an initial position that is essentially random and may not lie on the global motion path L1. Therefore, before the control unit drives the image arm 21 to move the endoscope 300 according to the motion scheme, the control unit is further configured to plan a desired starting position and drive the endoscope 300 to move from the initial position to the desired starting position. After the endoscope 300 reaches the desired starting position, the control unit drives the endoscope 300 to rotate by a predetermined angle so as to determine a desired starting posture of the endoscope 300, and then drives the endoscope 300 to rotate to the desired starting posture. The desired starting position and the desired starting posture together constitute the desired starting pose, which is a favorable pose from which the endoscope 300 can begin executing the motion scheme. The desired starting pose is determined by the doctor according to the actual situation.
In this way, as shown in fig. 6, in an exemplary embodiment, punching at the first hole site M on the body surface of the surgical object with the surgical robot system may include the following steps:
Step S10: the control unit plans the target pose, the global motion path, and the desired starting position of the endoscope.
Step S20: the control unit plans the motion trajectory of the endoscope when moving along the global motion path.
Step S30: the endoscope is inserted into the abdominal cavity of the surgical object from a second hole site on the body surface of the surgical object.
Step S40: the control unit drives the image arm to move, determines the desired starting pose, and brings the endoscope to the desired starting pose. During this process, the endoscope may acquire first image information within the abdominal cavity.
Step S50: the control unit establishes the three-dimensional model according to the first image information of the abdominal cavity acquired by the endoscope.
Step S60: the control unit drives the image arm to move so as to drive the endoscope to the target pose. In this step, the endoscope moves from the initial pose along the global motion path according to the motion trajectory. If the endoscope encounters no obstacle during the movement, the control unit drives the endoscope along the global motion path according to the motion trajectory until it reaches the target position. If at least one obstacle exists on the global motion path, then, when the endoscope meets an obstacle, the control unit establishes the local motion path according to the three-dimensional model and the global motion path, drives the endoscope along the local motion path to avoid the obstacle, and returns the endoscope to the global motion path to continue moving until it reaches the target position.
Step S70: the control unit plans the rotation scheme and drives the endoscope to rotate to the target posture according to the rotation scheme.
Step S80: the perforating device punches at the first hole site, and after the tapered tip penetrates the body surface of the surgical object, the endoscope acquires the second image information of the tapered tip.
Step S90: the control unit acquires punching state information according to the second image information and the three-dimensional model, and generates guide information to guide the punching until the punching at the current hole site is completed.
As can be understood by those skilled in the art, a laparoscopic surgery may require punching at a plurality of hole sites on the abdomen of the surgical object. In this case, in step S10 of the present embodiment, the control unit may plan, at one time, the target pose of the endoscope 300, the global motion path L1, and the corresponding motion scheme for each hole site, and in step S20 plan a motion trajectory for each global motion path of the endoscope 300. Thus, after the punching operation for one hole site is finished, the operator may return the endoscope 300 to the predetermined position (i.e., the position at which the endoscope was initially inserted into the abdominal cavity) using any suitable method, such as retracting the endoscope 300 into a trocar. Then, steps S40 to S90 are repeatedly executed until all punching operations are finished.
In addition, in other embodiments, the desired starting position may be planned in step S30, the order of step S20 and step S30 may be interchanged, the order of step S40 and step S50 may be interchanged, or step S50 may be executed simultaneously with step S60 or after step S60, depending on the actual situation. In detail, when an obstacle exists on the global motion path L1, the three-dimensional model only needs to be established before the local motion path L2 is planned; it may be established, for example, after the endoscope starts moving but before it encounters the obstacle. When no obstacle exists on the global motion path L1, the three-dimensional model only needs to be established before the tapered tip 110 of the punching device 100 penetrates the body surface of the surgical object. In addition, the endoscope 300 may also acquire the first image information while moving along the global motion path L1, while moving along the local motion path L2, and after reaching the target position, so that the control unit can update the three-dimensional model according to the first image information.
Next, the implementation of each step when the surgical robot system performs the punching will be described in detail herein.
FIG. 7 is a flowchart of the control unit planning the target pose and the global motion path L1 in one non-limiting embodiment. As shown in fig. 7, the method of planning the target pose and the global motion path L1 comprises the following steps:
Step S11: the control unit establishes a first body feature image model according to first body surface data and lesion data of the surgical object in a first state.
Step S12: a first pre-hole site and a second pre-hole site are planned on the first body feature image model. In this embodiment, the first pre-hole site may be obtained by the control unit simulating the physical signs of the surgical object through three-dimensional simulation and performing simulated punching in combination with a model of the tool arm, the workspace of the tool arm, and collision-safety considerations. Alternatively, in an alternative embodiment, the first pre-hole site and the second pre-hole site may be determined empirically by the operator, after which the control unit performs simulated punching to verify whether the pre-planned hole positions are appropriate.
Step S13: the control unit plans a predetermined target pose such that the first pre-hole site is within the field of view of the endoscope when the endoscope is in the predetermined target pose, the predetermined target pose including a predetermined target position and a predetermined target posture.
Step S14: the control unit plans a predetermined global motion path according to the second pre-hole site and the predetermined target position, such that the endoscope can reach the predetermined target pose when inserted into the abdominal cavity of the surgical object from the second pre-hole site and moved along the predetermined global motion path.
Step S15: the control unit establishes a second body feature image model according to second body surface data of the surgical object in a second state.
Step S16: image registration is performed on the second body feature image model and the first body feature image model; the first pre-hole site, the second pre-hole site, the predetermined target pose, and the predetermined global motion path based on the first body feature image model are mapped onto the second body feature image model according to the registration result; and the first hole site, the second hole site, the target pose, and the global motion path are obtained according to the mapping relationship between the second body feature image model and the surgical object.
As shown in fig. 8, the first body surface data and the lesion data are acquired by a first imaging device 400, which may include an MRI, CT, or other X-ray device, as long as it can simultaneously scan the body surface features and the lesion features of the surgical object. As shown in fig. 9, the second body surface data are acquired by a second imaging device 500, which includes, but is not limited to, a 3D vision system.
When the surgical object is in the first state and in the second state, its body positions differ. Generally, the first state refers to the state of the surgical object at the diagnosis stage, and the second state refers to the state of the surgical object during preoperative preparation. In laparoscopic surgery, the first state is the state before pneumoperitoneum is established in the surgical object, and the second state is the state after pneumoperitoneum is established. In this embodiment, the control unit first establishes the first pre-hole site, the second pre-hole site, the predetermined target pose, and the predetermined global motion path on the first body feature image model, then establishes the second body feature image model after pneumoperitoneum and registers the two models, so as to calibrate the first pre-hole site, the second pre-hole site, the predetermined target pose, and the predetermined global motion path according to the registration result and map them onto the body of the surgical object, obtaining the actual first hole site, second hole site, target pose, and global motion path. It will be appreciated by those skilled in the art that in other procedures or circumstances, the first state and the second state may also differ due to different conditions of the surgical object, such as breath-holding, satiety, or defecation. Further, step S15 may also be performed between step S11 and step S12.
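The disclosure does not name a specific registration algorithm. One conventional choice for point-based rigid registration between two models with corresponding landmarks is the Kabsch (SVD) method, sketched here for illustration only:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i]
    (Kabsch algorithm). P and Q are (N, 3) arrays of corresponding landmarks,
    e.g. anatomical points picked on both body feature image models."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps R a proper rotation (det(R) = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t
```

Once R and t are known, pre-hole sites planned on the first model can be transported to the second model by applying the same transform.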
In practice, the first imaging device 400, the second imaging device 500, the patient-end control device, and the surgical object are in different coordinate systems, but it is obvious to those skilled in the art that mapping relationships can be established between different coordinate systems by conventional methods. In an embodiment, as shown in fig. 8 to 10, when the second imaging device acquires the second body surface data, a plurality of targets 1 are distributed on the body surface of the surgical object; the targets 1 may be optical positioning devices including reflective balls. The operator calibrates the positions of the plurality of targets 1, and a first coordinate system F1 (i.e., the surgical object coordinate system) is established according to the positions of the targets 1. The second imaging device 500 is in a second coordinate system F2; by acquiring the coordinates of the targets 1 as part of the second body surface data, the mapping relationship between the second coordinate system F2 and the first coordinate system F1 is known. The first imaging device 400 is in a third coordinate system F3, and in step S16 the mapping relationship between the second coordinate system F2 and the third coordinate system F3 is obtained by image registration, so that the position of the target punching site in the first coordinate system F1 can be obtained. The patient-end control device is in a fourth coordinate system F4 within the world coordinate system F0, in which the mapping relationship between the fourth coordinate system F4 and the first coordinate system F1 can be directly acquired. Therefore, conversion relationships among the coordinate systems can be established to obtain the mapping relationships among all of them.
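Chaining these coordinate systems amounts to composing homogeneous transforms. A minimal numeric sketch — all matrices below are made-up placeholders standing in for the registration and target-calibration results, not values from this disclosure:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_point(T, p):
    """Apply a homogeneous transform to a 3D point."""
    return (T @ np.append(p, 1.0))[:3]

# Placeholder transforms (in practice: image registration and calibration):
T_2_from_3 = make_T(np.eye(3), [0.05, 0.0, 0.0])   # F3 (CT) -> F2 (3D vision)
T_1_from_2 = make_T(np.eye(3), [0.0, -0.02, 0.1])  # F2 -> F1 (surgical object)
T_1_from_3 = T_1_from_2 @ T_2_from_3               # composed: F3 -> F1

hole_in_ct = np.array([0.10, 0.20, 0.30])          # a planned site in F3
hole_in_patient = transform_point(T_1_from_3, hole_in_ct)
```

The same composition pattern extends to F4 and F0 once those mappings are acquired.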
In another alternative implementation, the control unit may also plan the predetermined global motion path on the second body feature image model. In this embodiment, as shown in fig. 11, the method for the control unit to plan the first hole site, the second hole site, the target pose, and the global motion path comprises:
Step S11': establishing the first body feature image model according to the first body surface data and the lesion data of the surgical object in the first state.
Step S12': planning the first pre-hole site and the second pre-hole site on the first body feature image model.
Step S13': establishing the second body feature image model according to the second body surface data and the lesion data of the surgical object in the second state.
Step S14': registering the second body feature image model with the first body feature image model, and converting the first pre-hole site and the second pre-hole site based on the first body feature image model into a first target hole site and a second target hole site based on the second body feature image model according to the registration result.
Step S15': the control unit plans a predetermined target pose of the endoscope according to the first target hole site, such that the first target hole site is within the visual field range of the endoscope when the endoscope is in the predetermined target pose. The predetermined target pose comprises a predetermined target position.
Step S16': the control unit plans a predetermined global motion path according to the second target hole site and the predetermined target position, such that the endoscope, inserted into the abdominal cavity of the surgical object at the second target hole site and moved along the predetermined global motion path, can reach the predetermined target position.
Step S17': the control unit obtains the first hole site, the second hole site, the target pose, and the global motion path according to the mapping relationship between the second body feature image model and the surgical object.
Wherein the order of the step S12 'and the step S13' may be interchanged.
Furthermore, in step S10, when the starting point of the global motion path L1 is the second hole site, the desired starting position may be located at the second hole site. In this case, the endoscope 300 can be withdrawn from the initial position, in the direction out of the body, to the second hole site. Typically, a trocar is provided at the second hole site, and when the endoscope 300 is retracted to the second hole site, the endoscope 300 is actually retracted into the trocar. Further, in step S30, when the endoscope 300 rotates by a predetermined angle at the desired starting position to determine the desired starting posture, the predetermined angle is generally the maximum rotation angle of the endoscope 300. The process of moving the endoscope 300 from the initial position and the corresponding (random) initial posture to the desired starting pose is in effect a safe start-up scheme, so that the endoscope 300 can smoothly begin executing the motion scheme from the desired starting pose, avoiding the situation in which the endoscope 300 cannot execute the motion scheme due to a poor initial pose.
In step S20, planning the motion trajectory means adding a time constraint to the global motion path. That is, the control unit first interpolates the relationship between the position of the endoscope 300 and time as the endoscope 300 moves along the global motion path L1, obtaining the position information, velocity information, and acceleration information of the endoscope 300, and then obtains the corresponding information of the joints of the image arm 21 through inverse kinematics calculation of the mechanical arm. In this embodiment, the control unit may perform the motion trajectory planning in any one of a plurality of manners, such as an S-shaped trajectory planning method, a trigonometric function trajectory planning method, a polynomial trajectory planning method, an exponential function trajectory planning method, a trapezoidal trajectory planning method, or a spline planning method. In the following, the S-shaped trajectory planning method is taken as an example; fig. 12 shows the change with time of the velocity vel, the acceleration acc, the jerk, and the position pos in the S-shaped trajectory planning method.
In the S-shaped trajectory planning method, the motion of the endoscope 300 is divided into seven segments: an acceleration-increasing segment, a uniform-acceleration segment, an acceleration-decreasing segment, a uniform-velocity segment, a deceleration-increasing segment, a uniform-deceleration segment, and a deceleration-decreasing segment. The maximum velocity of the endoscope 300 over the whole movement is set manually to Vmax, and the maximum acceleration to amax. Thus, in combination with the relationship between the position of the endoscope 300 and time, the velocity information and acceleration information of the endoscope 300 at each time point of the motion can be obtained (the position being the integral of the velocity).
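A numerical sketch of the seven-segment profile: piecewise-constant jerk is integrated to acceleration, velocity, and position. The jerk limit `jmax` is an added assumption (the text fixes only Vmax and amax), and the limits are assumed to be such that every segment has non-negative duration:

```python
import numpy as np

def s_curve(distance, vmax, amax, jmax, dt=1e-4):
    """Sample a symmetric seven-segment S-curve profile by integrating a
    piecewise-constant jerk. Assumes distance/vmax/amax/jmax are such that
    all seven segments exist (tj, ta, tv all non-negative)."""
    tj = amax / jmax                      # jerk (acceleration ramp) time
    ta = vmax / amax - tj                 # uniform-acceleration time
    d_acc = vmax * (2 * tj + ta) / 2      # distance covered while accelerating
    tv = (distance - 2 * d_acc) / vmax    # uniform-velocity (cruise) time
    segs = [(tj, jmax), (ta, 0.0), (tj, -jmax),   # accelerate to vmax
            (tv, 0.0),                             # cruise
            (tj, -jmax), (ta, 0.0), (tj, jmax)]    # decelerate to rest
    a = v = p = 0.0
    pos, vel = [], []
    for dur, j in segs:
        for _ in range(int(round(dur / dt))):
            a += j * dt
            v += a * dt
            p += v * dt
            pos.append(p)
            vel.append(v)
    return np.array(pos), np.array(vel)
```

Sampling such a profile along the global motion path yields the position-versus-time relationship from which the joint information is derived by inverse kinematics.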
Preferably, the control unit may select a suitable trajectory planning method according to a predetermined condition, which is determined by the operator according to the actual situation. For example, the predetermined condition includes any one of time-optimal, most-flexible-path, or energy-optimal. Time-optimal may refer to the most appropriate movement time: the movement time of the endoscope 300 should be neither so long as to prolong the operation nor so short that the movement is too fast. Most-flexible-path may mean that the endoscope 300 has minimal impact on human tissue during motion. Energy-optimal may mean, for example, minimizing the power supplied to the image arm 21.
Next, the operator inserts the endoscope 300 from the second hole site to a predetermined position in the abdominal cavity of the surgical object, and causes the endoscope 300 to acquire first image information of the abdominal cavity.
Referring to figs. 13 and 14, the endoscope 300 includes a scope arm 310 and an image capturing element 320; the proximal end of the scope arm 310 is connected to the image arm, and the image capturing element 320 is disposed at the distal end of the scope arm 310. In this embodiment, the endoscope 300 may be a straight-tube endoscope as shown in fig. 13, in which case the scope arm 310 is a rigid arm that neither bends nor deforms. Alternatively, the endoscope 300 is a bendable endoscope as shown in fig. 14, in which case the scope arm 310 includes a first rigid segment 311, a controllably bendable segment 312, and a second rigid segment 313 connected in series, with the image capturing element 320 disposed at the distal end of the second rigid segment 313. The controllably bendable segment 312 comprises a snake bone, and the endoscope 300 further comprises a pull cord disposed within the scope arm 310, so that controlled retraction and release of the pull cord bends and straightens the endoscope 300. For the bendable endoscope 300, the rotation angle in the motion trajectory includes not only the degree of self-rotation of the endoscope 300 about its axis but also the bending angle of the controllably bendable segment 312. Since both the straight-tube and the bendable endoscope 300 are conventional endoscopes, other structural details of the endoscope 300 are not described here. In addition, "proximal end" and "distal end" refer to the relative positions and directions of elements of the endoscope 300 from the perspective of the operator using it: the "distal end" is the end of the endoscope 300 that is first inserted into the body of the surgical object, and the "proximal end" is the end away from the body of the surgical object.
In this embodiment, the image capturing element 320 of the endoscope 300 is a binocular camera, and fig. 15 shows the imaging principle of the binocular camera.
As shown in fig. 15, the binocular camera includes a first camera 321 and a second camera 322; the first camera 321 acquires first sub-image information and the second camera 322 acquires second sub-image information. Accordingly, step S30 includes the first camera 321 acquiring the first sub-image information and the second camera 322 acquiring the second sub-image information. In the figure, f is the focal length of the cameras, b is the baseline between the first camera and the second camera, and P(x, y, z) is the coordinate of the spatial point to be photographed. With xl and xr denoting the horizontal image coordinates of P in the two cameras and d = xl − xr the disparity, f, b, and P(x, y, z) satisfy the following relationship: z = f·b/d, x = z·xl/f, y = z·yl/f.
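Under the standard pinhole model with rectified cameras, this relationship can be evaluated directly from the disparity. A minimal sketch (the function name and argument order are illustrative, not from the patent):

```python
def triangulate(f, b, xl, xr, yl):
    """Standard binocular triangulation for rectified cameras.

    f: focal length (pixels), b: baseline between the two cameras,
    xl/xr: horizontal image coordinates of the point in the left/right
    camera, yl: vertical image coordinate (same row in both cameras
    under the epipolar constraint). Returns (x, y, z) in camera frame.
    """
    d = xl - xr          # disparity
    z = f * b / d        # depth from disparity
    return (z * xl / f, z * yl / f, z)
```

For example, with f = 500 px, b = 0.06 m, and image coordinates (100, 50) and (40, 50), the disparity is 60 px and the recovered point is (0.1, 0.05, 0.5) m.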
those skilled in the art will appreciate that in alternative embodiments, the image capture element 320 may also be a laser scanner or a 3D structured light camera.
Thus, as shown in fig. 16, when the control unit executes the step S50 to create the three-dimensional model, the method may include:
step S51: the control unit extracts feature points from the first sub-image information and the second sub-image information.
Step S52: the control unit pairs the feature points on the first sub-image information and the second sub-image information to form a feature point pair.
Step S53: the control unit locates the feature point pairs at three-dimensional spatial positions in the camera coordinate system according to the epipolar constraint. Since the endoscope is moved by the image arm, the position of the camera coordinate system in the coordinate system of the control unit can be calculated from the forward kinematics of the image arm and the pre-calibrated camera coordinate system parameters. Locating all the feature points yields a point cloud model of the feature points.
Step S54: the control unit establishes the three-dimensional model according to the point cloud model.
Next, the control unit executes the step S60 and the step S70 to move the endoscope 300 from the initial pose to the target pose in accordance with the motion scheme.
Further, when step S60 is executed, even if the human tissue does not constitute the obstacle P, a local portion of the endoscope 300 may still collide with and damage human tissue. Therefore, the control unit is further configured to acquire the collision torque between the endoscope 300 and human tissue during the movement of the endoscope 300 and determine whether the collision torque is greater than a preset torque threshold; if so, the image arm 21 is controlled to stop moving so as to stop the endoscope, and prompt information is generated to prompt an intervention operation. The "intervention operation" may be, for example, a manual operation, and may specifically include manually retracting the endoscope 300 into the trocar, moving the image arm 21 to drive the endoscope 300 clear of the human tissue, then extending the endoscope 300 out of the trocar and positioning the endoscope 300 at a safe position. The safe position is determined by the doctor according to the actual situation and may lie either off or on the global motion path L1.
Optionally, the principle by which the control unit acquires the collision torque is shown in fig. 17. A joint sensor 210 is disposed on the image arm 21 and is communicatively connected with the control unit to acquire the actual joint torque of the image arm and transmit it to the control unit. The control unit is configured to receive the actual joint torque, calculate a theoretical joint torque of the image arm 21 from the mechanical-arm dynamics, and take the difference between the actual joint torque and the theoretical joint torque as the collision torque. Preferably, the control unit further comprises a low-pass filter 601 to filter the theoretical joint torque and the actual joint torque. In alternative embodiments, the collision torque may be obtained from the deviation between the actual position and the set position of a joint on the image arm 21, or by a kinematic OBB method or a momentum-observer collision-detection method, which this embodiment does not limit.
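The residual-torque scheme above (collision torque as the filtered difference between actual and theoretical joint torque, compared with a threshold) can be sketched per joint as follows. The class name, the first-order low-pass coefficient `alpha`, and the one-scalar-per-joint simplification are assumptions for illustration, not the patent's implementation:

```python
class CollisionDetector:
    """Per-joint collision check: low-pass-filtered torque residual
    (actual minus theoretical) compared against a threshold."""

    def __init__(self, threshold, alpha=0.2):
        self.threshold = threshold   # preset torque threshold
        self.alpha = alpha           # assumed first-order filter gain
        self.filtered = 0.0          # filtered residual state

    def update(self, actual_tau, theory_tau):
        """Feed one torque sample; return True if a collision is flagged."""
        residual = actual_tau - theory_tau
        # First-order low-pass filter to suppress sensor noise.
        self.filtered += self.alpha * (residual - self.filtered)
        return abs(self.filtered) > self.threshold
```

Because the residual is filtered, a single noisy sample does not trip the detector; a sustained torque difference does.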
When the endoscope 300 is at the safe position and that position lies off the global motion path L1, the control unit performs the following steps to return the endoscope 300 to the global motion path L1:
The control unit acquires the position on the global motion path closest to the safe position as a second target position, actuates the image arm 21 to move the endoscope 300 to the second target position, and then continues to drive the endoscope 300 along the global motion path.
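At waypoint granularity, finding the second target position is a nearest-point search over the global motion path. A minimal sketch, treating the path as an ordered list of 3-D waypoints (that representation is an assumption for illustration):

```python
def closest_point_on_path(path, safe_pos):
    """Return the path waypoint nearest to safe_pos as the second
    target position.

    path: ordered list of (x, y, z) waypoints on the global motion path.
    safe_pos: current (x, y, z) safe position of the endoscope.
    """
    def d2(p, q):
        # Squared Euclidean distance (monotone in distance, so fine for min).
        return sum((a - b) ** 2 for a, b in zip(p, q))

    return min(path, key=lambda p: d2(p, safe_pos))
```

A finer-grained variant would also project onto the segments between waypoints, but the waypoint version already illustrates the re-entry step.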
After the endoscope 300 has moved to the target pose, the operator may begin perforating. When the tapered tip 110 of the puncturing device 100 penetrates the body surface of the surgical object into the abdominal cavity, the endoscope 300 starts to acquire second image information of the tapered tip 110. As described above, when the endoscope 300 is in the target pose, the puncture point M is located within the visual field of the endoscope 300; therefore, once the tapered tip 110 penetrates the body surface, the endoscope 300 can immediately recognize it and acquire the second image information. However, as the perforation progresses, the perforating device 100 enters the abdominal cavity ever more deeply, and if the endoscope 300 were kept in the target pose throughout, the tapered tip 110 might move out of the visual field of the endoscope 300. To solve this problem, in step S80 the control unit further performs the step of adjusting the pose of the endoscope 300 so that the tapered tip 110 always remains within the visual field of the endoscope 300. The adjustment may be realized by a visual-servo method. If the control unit cannot adjust the endoscope 300 so as to keep the tapered tip 110 within its visual field, the perforation needs to be stopped, after which the operator manually adjusts the pose of the endoscope 300.
In the step S90, the punching-state information acquired by the control unit includes the position of the tapered tip 110 and the speed of the tapered tip 110. The guidance information includes at least one of a punching progress, a collision reminder, and an expected punching direction.
The control unit is configured to obtain the position of the tapered tip 110 from the second image information and the three-dimensional model; the speed of the tapered tip 110 from the change in its position; the punching progress from the position of the tapered tip 110 and a predetermined punching depth; the collision probability (and hence the collision reminder) from the position of the tapered tip 110, the speed of the tapered tip 110, and the three-dimensional model; and the expected punching direction from the position of the tapered tip 110 and the three-dimensional model.
As shown in fig. 18, the method for acquiring the punching progress includes:
Step S91: the control unit acquires the current punching depth z1 according to the position of the tapered tip.
Step S92: the control unit compares the current punching depth z1 with the expected punching depth z0 and takes the ratio of the two as the punching progress.
As shown in fig. 19, the method for acquiring the collision reminder includes the following steps:
step S93: the control unit obtains the target tissue closest to the perforating device according to the position of the conical tip and the three-dimensional model.
Step S94: the control unit calculates a distance of the target tissue from the punch device.
Step S95: the control unit calculates the collision occurrence time t according to the speed of the conical tip and the distance, wherein the speed of the conical tip is v in the embodiment 1 And if the distance is D, the collision occurrence time t satisfies the following condition: t ═ D/v 1 。
Step S96: the control unit judges whether the time t is greater than a set time threshold t 0 And if not, judging that the collision probability is high, and generating the collision prompt, and if so, judging that the collision probability is low, and not generating the collision prompt.
Generally, the expected punching direction is generated only after the control unit determines that the collision probability is high; when the collision probability is low, the expected punching direction need not be generated. With continued reference to fig. 19, and in conjunction with figs. 20 and 21, the control unit generates the expected punching direction as follows:
step S97: the control unit obtains the cutting plane Q of the target tissue and the direction vector of the tapered tip 110
Step S98: the control unit converts the direction vectorProjecting onto the tangent plane Q of the target tissue to obtain the desired perforation direction vector
Those skilled in the art will understand that, in the present embodiment, the control unit may establish the three-dimensional model only once; in that case the three-dimensional model is a global three-dimensional model of the abdominal cavity, used both for planning the local motion path in step S60 and for acquiring the punching state information and the guidance information in step S90. In an alternative embodiment, however, the control unit may establish the three-dimensional model more than once: specifically, the control unit may first establish a local three-dimensional model of the obstacle region according to the first image information, used for planning the local motion path in step S60, and then establish a global three-dimensional model of the abdominal cavity according to the first image information, used for acquiring the punching state information and the guidance information in step S90.
Furthermore, the surgical robot system further comprises a prompting device, which is communicatively connected with the control unit so as to receive the guidance information and the prompt information and give corresponding prompts. The prompting device may take many forms: for example, it may include a buzzer alarm by which the collision reminder is sounded. It may also include a voice prompt device for announcing the collision reminder, the punching progress, and the expected punching direction, or a display device for presenting the collision reminder, the punching progress, and the expected punching direction as text and images. This embodiment is not limited in this respect.
An embodiment of the present invention further provides a computer-readable storage medium, where a program is stored in the computer-readable storage medium, and when the program is executed, all steps performed by the foregoing control unit are performed.
An embodiment of the present invention further provides an electronic device, where the electronic device includes a processor and the computer-readable storage medium, and the processor is configured to execute the program stored on the computer-readable storage medium.
Still further, an embodiment of the present invention provides a motion scheme planning method for planning a motion scheme of the image acquisition device, including the steps, performed by the control unit as described above, of planning the motion scheme of the endoscope.
Although the present invention is disclosed above, it is not limited thereto. Various modifications and alterations of this invention may be made by those skilled in the art without departing from the spirit and scope of this invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (23)
1. A computer-readable storage medium on which a program is stored, characterized in that, when the program is executed, the program performs the steps of:
planning a target pose of an image acquisition device according to a first hole site on the body surface of a surgical object, so that the first hole site is located within the visual field range of the image acquisition device when the image acquisition device is in the target pose; and,
and planning a motion scheme of the image acquisition device so that the image acquisition device can move to the target pose according to the motion scheme after being inserted into the body of the surgical object from a second hole position on the body surface of the surgical object.
2. The computer-readable storage medium according to claim 1, wherein the program further performs the steps of:
establishing a three-dimensional model according to first image information in the operation object body acquired by the image acquisition device; and the number of the first and second groups,
and acquiring punching state information and generating guidance information according to the three-dimensional model and second image information, acquired by the image acquisition device in the target pose, of the punching tip of the punching device penetrating the body surface of the surgical object at the first hole site.
3. The computer-readable storage medium of claim 2, wherein the target pose comprises a first target position and a target posture;
the motion scheme at least comprises a global motion path planned according to the second hole position and the first target position; the image capture device is capable of reaching the first target location when the image capture device is moved along the global motion path.
4. The computer-readable storage medium of claim 3, wherein when an obstacle exists on the global motion path, the motion scheme further comprises a local motion path planned according to the global motion path and the three-dimensional model; the local motion path is disposed outside a boundary of the obstacle, and a start point and an end point of the local motion path are both on the global motion path.
5. The computer-readable storage medium of claim 3 or 4, wherein the motion scheme further comprises a rotation scheme planned according to the current posture of the image acquisition device upon reaching the first target position and the target posture, such that the image acquisition device can reach the target posture when moving according to the rotation scheme.
6. The computer-readable storage medium according to claim 3, wherein, during the movement of the image acquisition device according to the motion scheme, the program further performs the steps of:
and acquiring the collision torque of the image acquisition device and the target tissue, judging whether the collision torque is greater than a preset torque threshold value, if so, controlling the image acquisition device to stop moving, and generating prompt information to prompt execution of intervention operation.
7. The computer-readable storage medium of claim 6, wherein after performing an interventional operation to place the image capture device in a safe position clear of the target tissue, the program further performs the steps of:
and acquiring a position which is closest to the safety position on the global motion path as a second target position, and driving the image acquisition device to move to the second target position so as to enable the image acquisition device to return to the global motion path.
8. The computer-readable storage medium of claim 6, wherein the image capturing device is mounted on an image arm, and the image arm is provided with a joint sensor for capturing an actual joint moment of the image arm;
the program executes the following steps to acquire the collision torque:
receiving the actual joint moment;
acquiring a theoretical joint moment of the image arm;
calculating a difference value between the actual joint moment and the theoretical joint moment as the collision moment.
9. The computer-readable storage medium of claim 1, wherein the image acquisition device is mounted on an image arm, and the program is configured to drive the image arm to move the image acquisition device according to the motion scheme, and is further configured to perform the steps of:
planning a motion track of the image acquisition device to obtain a change relation of a pose of the image acquisition device along with time when the image acquisition device moves according to the motion scheme, and acquiring speed information, acceleration information and position information of a joint of the image arm;
and driving the image arm to move according to the speed information, the acceleration information and the position information so as to enable the image acquisition device to move according to the movement track.
10. The computer-readable storage medium according to claim 9, wherein the program is configured to select a motion trajectory planning method according to a predetermined condition.
11. The computer-readable storage medium of claim 10, wherein the predetermined condition comprises at least one of time-optimal, path-most flexible, or energy-optimal.
12. The computer-readable storage medium according to claim 1, wherein the program is configured to execute the steps of:
establishing a first body feature image model according to first body surface data and lesion data of a surgical object in a first state, so as to plan a first pre-hole site and a second pre-hole site on the first body feature image model;
establishing a second body feature image model according to second body surface data and lesion data of the surgical object in a second state;
registering the second body feature image model with the first body feature image model to obtain a first target hole site and a second target hole site on the second body feature image model;
planning a predetermined target pose of the image acquisition device according to the first target hole site, so that the first target hole site is within the visual field range of the image acquisition device when the image acquisition device is in the predetermined target pose; the predetermined target pose comprising a predetermined target position;
planning a predetermined global motion path according to the second target hole site and the predetermined target position, so that the image acquisition device can reach the predetermined target position when inserted into the body of the surgical object at the second target hole site and moving along the predetermined global motion path;
and obtaining the first hole site, the second hole site, the target pose, and the global motion path according to the mapping relation between the second body feature image model and the surgical object.
13. The computer-readable storage medium according to claim 3, wherein the program performs the steps of:
establishing a first body feature image model according to first body surface data and lesion data of a surgical object in a first state, so as to plan a first pre-hole site and a second pre-hole site on the first body feature image model;
planning a predetermined target pose of the image acquisition device according to the first pre-hole site, so that the first pre-hole site is within the visual field range of the image acquisition device when the image acquisition device is in the predetermined target pose; the predetermined target pose comprising a predetermined target position;
planning a predetermined global motion path according to the second pre-hole site and the predetermined target position, so that the image acquisition device can reach the predetermined target position when inserted into the body of the surgical object from the second pre-hole site and moving along the predetermined global motion path;
establishing a second body feature image model according to second body surface information of the surgical object in a second state;
and registering the second body feature image model with the first body feature image model, and obtaining the first hole site, the second hole site, the target pose, and the global motion path according to the registration result and the mapping relation between the second body feature image model and the surgical object.
14. The computer-readable storage medium of claim 3, wherein the image acquisition device is inserted into the surgical object from the second hole site and is in an initial position;
before executing the motion scheme, the program further performs the steps of:
planning a desired starting position and driving the image acquisition device to move from the initial position to the desired starting position;
driving the image acquisition device to rotate by a predetermined angle at the desired starting position so as to obtain a desired starting posture of the image acquisition device;
driving the image acquisition device to rotate to the desired starting posture;
the desired starting position and the desired starting posture being the starting position and starting posture of the image acquisition device when it moves along the global motion path.
15. The computer-readable storage medium according to claim 2, wherein after the punching tip of the punching device has penetrated the body surface of the surgical object at the first hole site, the program further performs the step of:
and adjusting the pose of the image acquisition device so that the punching tail end is always positioned in the visual field of the image acquisition device.
16. The computer-readable storage medium of claim 2, wherein the puncture status information includes position information of the puncture tip and speed information of the puncture tip; the guiding information comprises at least one of punching progress information, collision reminding information and expected punching direction information;
the program performs the steps of:
acquiring the position information of the punching tail end in real time according to the second image information and the three-dimensional model;
acquiring the speed of the punching tail end according to the change of the position of the punching tail end;
the program also performs at least one of the following steps:
generating the punching progress information according to the current position information of the punching tail end and a preset punching depth;
acquiring collision probability according to the position information of the punching tail end, the speed information of the punching tail end and the three-dimensional model, and generating collision reminding information;
and acquiring the expected punching direction information according to the position information of the punching tail end and the three-dimensional model.
17. The computer-readable storage medium of claim 16,
the program performs the following steps to obtain the collision alert information:
obtaining a target tissue closest to the punching device according to the position information of the punching tail end and the three-dimensional model;
calculating a distance between the perforating device and the target tissue;
and calculating collision occurrence time according to the speed information of the punching tail end and the distance, judging whether the collision occurrence time is larger than a set time threshold value, if not, judging that the collision probability is high, and generating the collision reminding information.
18. The computer-readable storage medium according to claim 16, wherein the program performs the steps of:
acquiring a tangent plane of the target tissue;
acquiring a direction vector of a punching tail end of the punching device;
and projecting the direction vector onto the tangent plane, and obtaining the expected punching direction information.
19. An electronic device comprising a processor and the computer-readable storage medium of any of claims 1-18, the processor to execute a program stored on the computer-readable storage medium.
20. A surgical robotic system, comprising:
the tool arm is used for being connected with a punching device, the punching device comprises a punching tail end, and the tool arm is used for driving the punching tail end to penetrate through the body surface of the surgical object at a first hole position on the body surface of the surgical object and enter the body of the surgical object;
the image arm is used for driving the image acquisition device to be inserted into the body of the surgical object at a second hole site on the body surface of the surgical object, acquiring first image information in the body of the surgical object and acquiring second image information of the punching tail end entering the body of the surgical object;
a control unit communicatively coupled to the tool arm and the image arm and configured to execute a program stored on the computer readable storage medium of any of claims 1-18.
21. A surgical robotic system as claimed in claim 20, wherein the control unit is further configured to generate guidance information and/or prompt information for prompting an intervention operation;
the surgical robot system further comprises a prompting unit in communication connection with the control unit and configured to receive the guidance information and to perform guidance and/or to receive the prompting information to prompt an intervention operation.
22. The surgical robotic system as claimed in claim 20, further comprising the perforating device, wherein a marker is disposed on the perforating tip of the perforating device, and the image capturing device is configured to identify the marker and capture the second image information of the perforating tip.
23. A surgical robotic system as claimed in claim 20, further comprising the image acquisition device, the image acquisition device being either a controllably bendable image acquisition device or a non-bendable image acquisition device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110315595.XA CN115120349A (en) | 2021-03-24 | 2021-03-24 | Computer-readable storage medium, electronic device, and surgical robot system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115120349A true CN115120349A (en) | 2022-09-30 |
Family
ID=83374079
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||