CN115192092A - Robot autonomous biopsy sampling method oriented to in-vivo flexible dynamic environment - Google Patents
- Publication number
- CN115192092A (application number CN202210779466.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- controller
- sampling
- collision avoidance
- control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- A61B10/02 — Instruments for taking cell samples or for biopsy
- A61B10/04 — Endoscopic instruments
- A61B10/06 — Biopsy forceps, e.g. with cup-shaped jaws
- A61B34/10 — Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20 — Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/30 — Surgical robots
- A61B34/76 — Manipulators having means for providing feel, e.g. force or tactile feedback
- A61B2034/101 — Computer-aided simulation of surgical operations
- A61B2034/105 — Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107 — Visualisation of planned trajectories or target regions
- A61B2034/2068 — Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
- A61B2034/2072 — Reference field transducer attached to an instrument or patient
- A61B2034/302 — Surgical robots specifically adapted for manipulations within body cavities, e.g. within abdominal or thoracic cavities
Abstract
The invention provides a robot autonomous biopsy sampling method, system, storage medium and electronic device oriented to an in-vivo flexible dynamic environment, relating to the technical field of biopsy histology diagnosis. In the invention, the three-dimensional positions of the focus point and collision avoidance area selected by the doctor on a three-dimensional image, and the three-dimensional position of the tail end of the multi-degree-of-freedom biopsy forceps, are each converted into the robot base coordinate system. A multi-target motion fusion control method is adopted to move the tail end of the biopsy forceps from the initial position to the focus position. After the biopsy forceps reach the vicinity of the focus point, the contact force between the tail end of the biopsy forceps and the tissue is obtained by a preset autonomous sampling control system, and the clamping and sampling operation is completed. The multi-target motion fusion control method is then adopted again to return the tail end of the multi-degree-of-freedom biopsy forceps from the sampling end position to the initial position. The method achieves three-dimensional dynamic tracking of the sampling points marked by the doctor and autonomy of the sampling process, shortens the sampling time, and improves sampling quality and efficiency.
Description
Technical Field
The invention relates to the technical field of biopsy histology diagnosis, in particular to a robot autonomous biopsy sampling method and system oriented to an in-vivo flexible dynamic environment, a storage medium and electronic equipment.
Background
Biopsy histology diagnosis plays an important role in clinical disease confirmation and lesion assessment, and can detect local organ lesions in time so that patients can be treated early. At present, the main clinical biopsy mode is to extend a sampler along with an endoscope to a designated position inside the patient's body and clamp focus point cells in that region.
Currently, conventional biopsy sampling relies on the surgeon determining the depth information of a fixed position at a specific moment from preoperative images or intraoperative endoscopic images. However, the sampling area changes dynamically with time throughout the operation, and repeated position calibration affects not only the efficiency of biopsy sampling but also its accuracy. In addition, the operation time and sampling accuracy of biopsy sampling depend on the surgeon's experience and skill, and the harm to the patient from repeated sampling caused by misoperation, or from missed diagnosis caused by sampling deviation, is irreversible.
In view of the above, there is a need to provide a robotic autonomous biopsy sampling solution that can ensure sampling quality and efficiency.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a robot autonomous biopsy sampling method, a system, a storage medium and electronic equipment for an in-vivo flexible dynamic environment, and solves the technical problem that the sampling quality and efficiency cannot be ensured.
(II) technical scheme
To achieve this purpose, the invention adopts the following technical scheme:
a robotic autonomous biopsy sampling method oriented to an in vivo flexible dynamic environment, comprising:
s1, reading a laparoscope image, marking a focus point to be sampled and a collision avoidance area on an image frame according to the selection of a doctor, and positioning the focus point and the collision avoidance area on a three-dimensional image;
s2, respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the multi-degree-of-freedom biopsy forceps into a robot base coordinate system;
s3, controlling the tail end of the biopsy forceps to move from the initial position to the focus position by adopting a multi-target motion fusion control method; the multiple targets comprise planned path tracking from an initial position to a target point, target point tracking and collision avoidance;
s4, after the focus point is approached, stopping moving the tail end of the biopsy forceps and opening the biopsy forceps to obtain the contact force between the tail end of the biopsy forceps and the tissue to finish clamping and sampling operation;
and S5, controlling the tail end of the multi-freedom-degree biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again.
Preferably, the S1 includes:
S11, reading the laparoscope image, and marking a focus point xp_0 to be sampled and a collision avoidance area Ad on the initial image frame according to the doctor's selection;
s12, marking a plurality of key characteristic points Px and Pa near the focus point and the collision avoidance area respectively, and determining an area with the size of L multiplied by L by taking the focus point and the collision avoidance area as centers respectively;
s13, carrying out depth estimation on the laparoscope image to obtain a depth image corresponding to the endoscope image; acquiring spatial information and color information of each pixel point from the depth image and the endoscope image to construct a three-dimensional point cloud;
s14, registering continuous three-dimensional point clouds through feature extraction and matching to obtain a coordinate transformation parameter rotation matrix R and a translational vector t, and converting the source point clouds into a target point cloud under the same coordinate system;
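The registration step in S14 estimates the rotation matrix R and translation vector t between consecutive point clouds. A minimal sketch of the closed-form alignment over already-matched correspondences (the Kabsch/SVD solution; the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def register(src, dst):
    """Estimate R, t such that R @ src_i + t ~= dst_i (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - cs).T @ (dst - cd)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# toy check: recover a known rigid motion between two clouds
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
dst = src @ R_true.T + t_true
R, t = register(src, dst)
err = np.abs(src @ R.T + t - dst).max()
```

In practice the correspondences would come from the feature extraction and matching of S14 (e.g. an ICP loop around this closed-form step); the sketch only shows the per-iteration alignment.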
S15, performing feature matching of the focus point and the collision avoidance area on two adjacent image frames using an optical flow method, obtaining the inter-frame moving direction and distance from the average difference of the pixel coordinates of the focus point and collision avoidance area point pairs, and obtaining the time-varying endoscopic visual image position xp_c(t) of the focus point and the endoscopic visual image position Ad_c(t) of the collision avoidance area.
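The tracking update in S15 moves the marked positions by the average pixel displacement of matched point pairs between two frames. A minimal sketch of that averaging step, assuming the optical-flow matches (e.g. from pyramidal Lucas–Kanade) are already available; the names are illustrative:

```python
import numpy as np

def track_point(xp_prev, pts_prev, pts_curr):
    """Update a marked point by the mean pixel displacement of its
    matched key-feature point pairs between two adjacent frames."""
    disp = np.asarray(pts_curr, float) - np.asarray(pts_prev, float)
    mean_disp = disp.mean(axis=0)          # average pixel-coordinate difference
    return np.asarray(xp_prev, float) + mean_disp

# toy frames: the neighbourhood of the focus point shifts by (3, -2) pixels
pts_prev = np.array([[10.0, 10.0], [12.0, 15.0], [8.0, 13.0]])
pts_curr = pts_prev + np.array([3.0, -2.0])
xp_new = track_point([11.0, 12.0], pts_prev, pts_curr)
```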
Preferably, the S2 includes:
S21, measuring, by means of an optical locator, the spatial coordinate transformation matrix between the laparoscope coordinate system and the robot coordinate system; converting the endoscopic visual three-dimensional coordinate xp_c(t) of the focus point into the sampling robot base coordinate system to obtain the three-dimensional pose xp_r(t) of the focus point in the robot coordinate system, and likewise converting the endoscopic visual three-dimensional coordinate Ad_c(t) of the collision avoidance area to obtain its three-dimensional pose Ad_r(t) in the robot coordinate system;
S22, according to the coordinate transformation matrix from the tail end of the multi-degree-of-freedom biopsy forceps to the tail end of the robot, obtaining the three-dimensional pose xb_r(t_s) of the biopsy forceps tail end in the robot base coordinate system at the initial time t_s;
wherein xr_r(t_s) is the three-dimensional pose of the robot arm tail end in the robot base coordinate system at the initial time t_s;
setting a fixed RCM point xm according to the spatial position of the focus point, and limiting the posture of the robot in the autonomous sampling process to be q under the constraint of the RCM point t ;
Y_t = Z_t × X_{t-1}
q_t = [X_t  Y_t  Z_t]
wherein xb(t) is the position of the biopsy forceps tail end during motion, X_t, Y_t and Z_t are the x-, y- and z-axis vectors of the attitude matrix, and xb(t_s) is the position vector of xb_r(t_s) at the initial time t_s.
Preferably, the S3 includes:
s31, constructing a total linear control system and establishing a state equation, and designing target controllers respectively according to the requirements of each target realization in the laparoscopic surgery scene; the target controller comprises a planning path tracking controller, a target guiding controller and a collision avoidance controller;
s32, respectively establishing a corresponding motion control prediction model and a target evaluation function for the planned path tracking controller, the target guidance controller and the collision avoidance controller, estimating the motion state of a future prediction time interval on the basis of the system motion state at the current moment, and calculating a corresponding target evaluation function accumulated value;
s33, respectively calculating the target gradient of the target evaluation function accumulated value of each controller at the current moment; sequentially nesting and fusing target gradient values corresponding to the controllers from low to high according to a preset weight function and a weight hierarchical sequence, and adding the target gradient values into the total control input of the system;
and S34, converting the fused position output into a joint angle of the surgical robot, and realizing the autonomous cutting operation of the robot in the process of moving from the initial position to the focus position.
Preferably, the S31 includes:
s311, constructing a total linear control system and establishing a state equation;
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
wherein x(t) is the motion state of the system at time t, ẋ(t) is the motion speed of the system at time t, u(t) is the total control input of the system at time t, y(t) is the total control output of the system at time t, and A, B, C, D are parameters of the state equation;
s312, respectively designing a target controller according to the target realization requirements in the laparoscopic surgery scene;
u_i(t) = f(y(t), r_i(t))
wherein u_i(t) is the control input of controller i, a function of the total control output of the system at time t and the desired control target, and r_i(t) is the expected value of the control target at time t.
The target controller comprises a planning path tracking controller, a target guiding controller, a cutting depth limiting controller and a collision avoidance controller;
wherein, for the planned path tracking controller:
e_s(t) = r_s(t) − y(t)
r_s(t) = ψ(xb(t_s), xp(t))
wherein u_s(t) is the input of the planned path tracking controller, computed by PD control with proportional and differential coefficients, r_s(t) is the desired state of the planned path tracking controller at time t, and e_s(t) is the deviation between the desired position tracked by the planned path tracking controller and the total control output of the system;
for the target guidance controller:
r_o(t) = xp(t)
wherein u_o(t) is the input of the target guidance controller, computed by PD control with proportional and differential coefficients, r_o(t) is the position of the target point, e_o(t) is the speed at which the controlled object approaches the target, and t_f is the time at which the target guidance controller expects to complete the cutting task;
setting a controller for collision avoidance control:
cd(t)=‖y(t)-r a (t)‖
r a (t)=H(Ad(t))
wherein u is a (t) is an input of the collision avoidance controller,proportional coefficient and differential coefficient, r, for collision avoidance controller PD control a (t) is the position of the central point of the obstacle Ad (t) at the moment t, R is the radius of a collision detection area taking the central point of the obstacle as a sphere, R is the radius of a collision avoidance area taking the central point of the obstacle as a sphere, cd (t) is the distance between the total control output of the system and the central point of the obstacle, and epsilon is a small constant;
preferably, the S32 includes:
s321, respectively establishing corresponding motion control prediction models for the planned path tracking controller, the target guidance controller and the collision avoidance controller;
wherein the three predicted quantities are, respectively, the predicted motion state of controller i at time t within the prediction interval, the predicted motion speed of controller i at time t within the interval, and the predicted control output of control target i at time t within the prediction interval;
S322, respectively establishing a target evaluation function for the planned path tracking controller, the target guidance controller and the collision avoidance controller and, combining the corresponding motion control prediction models, estimating the motion state over a future prediction interval on the basis of the system motion state at the current time t_0 and calculating the corresponding accumulated value of the target evaluation function;
wherein J_i(t_0) is the accumulated objective function value of the i-th controller at time t_0, G_i is the target evaluation function value of the i-th controller at a single time point within the prediction interval t_0 to t_0 + T, and T is the prediction time interval;
wherein, in S322:
a target evaluation function is established for the planned path tracking control target to evaluate the effectiveness of the planned path tracking controller within the prediction interval;
wherein J_s(t_0) is the objective function value of the planned path tracking controller at time t_0, and G_s is its target evaluation function value at a single time point within the prediction interval t_0 to t_0 + T, computed from the predicted control output of the planned path tracking control target at time t within the prediction interval;
a target evaluation function is established for the target guidance control target to evaluate the effectiveness of the target guidance controller within the prediction interval;
wherein J_o(t_0) is the objective function value of the target guidance controller at time t_0, and G_o is its target evaluation function value at a single time point within the prediction interval t_0 to t_0 + T, computed from the predicted control output of the target guidance control target at time t within the prediction interval;
a target evaluation function is established for the collision avoidance control target to evaluate the effectiveness of the collision avoidance controller within the prediction interval;
wherein J_a(t_0) is the objective function value of the collision avoidance controller at time t_0, G_a is its target evaluation function value at a single time point within the prediction interval t_0 to t_0 + T, computed from the predicted control output of the collision avoidance control target at time t within the prediction interval, and rz is a small constant.
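The accumulation in S32 can be sketched by rolling a prediction model forward over the horizon T and summing per-step evaluation values G_i. Both the double-integrator model and the quadratic distance-to-target cost below are assumed stand-ins, since the patent's prediction models and evaluation functions are lost from the source:

```python
import numpy as np

def accumulate_cost(x0, v0, u, target, T=20, dt=0.05):
    """J_i(t_0): sum of per-step evaluation values G_i over the prediction
    horizon, using an assumed double-integrator model and an assumed
    quadratic distance-to-target cost."""
    x, v, J = np.array(x0, float), np.array(v0, float), 0.0
    for _ in range(T):
        v = v + dt * np.asarray(u, float)       # predicted motion speed
        x = x + dt * v                          # predicted motion state
        J += float(np.sum((x - target) ** 2))   # per-step cost G_i
    return J

target = np.array([1.0, 0.0, 0.0])
# a control input pushing toward the target should accumulate less cost
J_toward = accumulate_cost([0, 0, 0], [0, 0, 0], [0.5, 0, 0], target)
J_away   = accumulate_cost([0, 0, 0], [0, 0, 0], [-0.5, 0, 0], target)
```

Each controller would evaluate its own G_i over the same horizon; S33 then compares the gradients of these accumulated values.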
Preferably, the S33 includes:
S331, calculating, by optimization, the descending gradient of each controller's target evaluation function at the current time t_0;
wherein g_s(t_0), g_o(t_0) and g_a(t_0) are the descending gradients of the target evaluation functions of the planned path tracking controller, the target guidance controller and the collision avoidance controller, respectively, at time t_0;
s332, inputting parameters of all control targets and the importance degrees of the control targets, and sequencing the controllers according to the importance degrees of the targets to determine the priority of each controller;
M = [g_s, g_o, g_a]
wherein M is a weight hierarchy sequence of the target controller;
s333, sequentially nesting and calculating fused target gradient values from low to high according to the weight hierarchy to obtain 3 controller nested and fused target gradient values;
wherein a normalization function of the gradient g is applied, α denotes a layering parameter, and w_L(t_0) is the target gradient value after nested fusion of the 3 controllers;
s334, adding the nested and fused target gradient values to the fused controller, so as to realize the motion fusion of various different target motion controllers;
u(t) = u_s(t) + u_o(t) + u_a(t) − K·w_L(t)
wherein K is a proportionality coefficient.
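The nesting in S333 can be sketched as a low-to-high recursion over the prioritized gradients. The recursion form w ← n(g_i + α·w), the unit-norm choice of the normalization function n, and the value of α are all assumptions, since the patent's fusion formula is lost from the source:

```python
import numpy as np

def normalize(g, eps=1e-9):
    """Assumed normalization function n(g): scale to unit norm (zero-safe)."""
    n = np.linalg.norm(g)
    return g / n if n > eps else np.zeros_like(g)

def nested_fusion(grads_low_to_high, alpha=0.5):
    """Nest gradient directions from lowest to highest priority, so that
    higher-priority controllers dominate the fused direction w_L."""
    w = np.zeros_like(np.asarray(grads_low_to_high[0], float))
    for g in grads_low_to_high:
        w = normalize(np.asarray(g, float) + alpha * w)   # assumed recursion
    return w

g_s = np.array([1.0, 0.0, 0.0])   # planned-path gradient (lowest priority)
g_o = np.array([0.0, 1.0, 0.0])   # target-guidance gradient
g_a = np.array([0.0, 0.0, 2.0])   # collision-avoidance gradient (highest)
w_L = nested_fusion([g_s, g_o, g_a])
```

The fused w_L then enters the total control input as u(t) = u_s(t) + u_o(t) + u_a(t) − K·w_L(t).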
Preferably, the converting the fused position output into the joint angle of the surgical robot in S34 specifically includes:
xb(t) = y(t)
θ(t) = ζ(q(t))
wherein the function ζ converts the attitude rotation matrix into Euler angles in the Cartesian coordinate system, θ(t) is the Euler angle in the Cartesian coordinate system, θ̇(t) is the rate of change of the Euler angle, Θ̇ is the rotational speed of the robot joint angles, and J⁻¹(Θ) is the inverse of the Jacobian matrix.
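The conversion in S34 maps the fused Cartesian velocity to joint velocity through the inverse Jacobian, Θ̇ = J⁻¹(Θ)·ẋ. A minimal sketch with an illustrative planar 2-link arm (the arm model and link lengths are not from the patent):

```python
import numpy as np

def jacobian_2link(theta, l1=0.4, l2=0.3):
    """Analytic Jacobian of a planar 2-link arm (illustrative model)."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([
        [-l1 * np.sin(t1) - l2 * np.sin(t12), -l2 * np.sin(t12)],
        [ l1 * np.cos(t1) + l2 * np.cos(t12),  l2 * np.cos(t12)],
    ])

def joint_speed(theta, xdot):
    """Theta_dot = J^{-1}(Theta) @ xdot  (the S34 conversion)."""
    return np.linalg.solve(jacobian_2link(theta), xdot)

theta = np.array([0.3, 0.8])            # current joint angles (rad)
xdot = np.array([0.01, -0.02])          # desired tip velocity (m/s)
thetadot = joint_speed(theta, xdot)
# consistency: mapping joint speed back through J recovers the tip speed
xdot_back = jacobian_2link(theta) @ thetadot
```

A real surgical arm would use its own 6- or 7-DoF Jacobian (typically with a damped pseudo-inverse near singularities), but the velocity-mapping step is the same.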
Preferably, the S4 includes:
after approaching the focus position, the biopsy forceps are opened and the robot autonomous sampling controller is started; the biopsy forceps are driven at a constant speed v_0 along the q_t direction toward the focus point area, and begin to decelerate once the force sensor detects a force; when the biopsy forceps contact the surface of the focus area, the force sensor measures the force f_d(t) of the forceps contacting the tissue surface, and the forceps decelerate under PID control until the force sensor reaches the desired stable force f_e, then stop;
the autonomous sampling control system is as follows:
y f (t)=C f x f (t)+D f u f (t)
e(t)=f e -f d (t)
wherein the content of the first and second substances,is the speed, x, of the autonomous sampling control system f (t) is the state of the autonomous sampling control system, u f (t) is the control input of the autonomous sampling control system, y f (t) is the control output of the autonomous sampling control system, A f 、B f 、C f And D f Is a parameter of the equation of state of the control system, u f Is the control input to the system, e (t) is the error between the desired force and the actual detected force, k p 、k i And k d Is the PID parameter of the system.
The motion speed output and the attitude-change speed obtained by the controller are converted into the robot joint angles, enabling the robot to sample autonomously:
xb(t) = y_f(t)
θ(t) = ζ(q(t))
wherein Θ̇ is the rotational speed of the robot joint angles and J⁻¹(Θ) is the inverse of the Jacobian matrix.
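The approach-and-clamp behaviour of S4 (constant speed v_0 until contact, then force-regulated deceleration toward the desired stable force f_e) can be sketched with an assumed spring-like tissue contact model. Only the proportional term of the patent's PID law is used here, and the stiffness and gain values are assumptions:

```python
def autonomous_sampling_sim(f_e=1.0, v0=0.01, k_tissue=500.0,
                            k_p=0.004, dt=0.001, steps=8000):
    """Advance at constant speed v0 until tissue contact, then regulate
    the measured contact force f_d(t) toward the desired force f_e.
    Assumed contact model: f_d = k_tissue * penetration depth."""
    x, surface = 0.0, 0.02               # forceps position / tissue surface (m)
    f_d = 0.0
    for _ in range(steps):
        f_d = max(0.0, k_tissue * (x - surface))  # measured contact force
        if f_d == 0.0:
            v = v0                        # free approach at constant v0
        else:
            v = k_p * (f_e - f_d)         # decelerate on e(t) = f_e - f_d(t)
        x += v * dt
    return f_d

f_final = autonomous_sampling_sim()
```

With these assumed values the loop settles at the desired contact force, at which point the forceps would stop and clamp.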
A robotic autonomous biopsy sampling system oriented to an in vivo flexible dynamic environment, comprising:
the marking module is used for reading the laparoscope image, marking a focus point to be sampled and a collision avoidance area on the image frame according to the selection of a doctor, and positioning the focus point to be sampled and the collision avoidance area on the three-dimensional image;
the conversion module is used for respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the multi-degree-of-freedom biopsy forceps into a robot base coordinate system;
the moving module is used for controlling the tail end of the biopsy forceps to move from the initial position to the focus position by adopting a multi-target motion fusion control method; the multiple targets comprise planned path tracking from an initial position to a target point, target point tracking and collision avoidance;
the sampling module is used for acquiring, after the biopsy forceps reach the vicinity of the focus point, the contact force between the tail end of the biopsy forceps and the tissue according to a preset autonomous sampling control system, and completing the clamping and sampling operation;
and the returning module is used for controlling the tail end of the multi-degree-of-freedom biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again.
A storage medium storing a computer program for robotic autonomous biopsy sampling oriented to an in vivo flexible dynamic environment, wherein the computer program causes a computer to perform the robotic autonomous biopsy sampling method as described above.
An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the robotic autonomous biopsy sampling method as described above.
(III) advantageous effects
The invention provides a robot autonomous biopsy sampling method, a system, a storage medium and an electronic device oriented to an in-vivo flexible dynamic environment. Compared with the prior art, the method has the following beneficial effects:
according to the method, a focus point to be sampled and a collision avoidance area are marked on an image frame according to selection of a doctor, and the focus point and the collision avoidance area are positioned on a three-dimensional image; respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the biopsy forceps with multiple degrees of freedom to be under a robot base coordinate system; controlling the tail end of the biopsy forceps to move from an initial position to a focus position by adopting a multi-target motion fusion control method; after the biopsy forceps reach the position close to the focus point, the contact force between the tail end of the biopsy forceps and the tissue is obtained according to a preset autonomous sampling control system, and the clamping and sampling operation is completed; and controlling the tail end of the multi-degree-of-freedom biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again. The method can realize three-dimensional dynamic tracking of the sampling points marked by the doctor, can realize autonomy of the sampling process, shortens the sampling time and improves the sampling quality and efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a robotic autonomous biopsy sampling method oriented to an in-vivo flexible dynamic environment according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are clearly and completely described, and it is obvious that the described embodiments are a part of the embodiments of the present invention, but not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application solves the technical problem that the sampling quality and efficiency cannot be guaranteed by providing the robot autonomous biopsy sampling method, the robot autonomous biopsy sampling system, the storage medium and the electronic equipment facing the in-vivo flexible dynamic environment.
In order to solve the technical problems, the general idea of the embodiment of the application is as follows:
aiming at the defects that the existing biopsy sampling points can not be dynamically updated along with images and the sampling effect is highly related to the experience of doctors, a set of robot autonomous biopsy sampling method facing to the in-vivo flexible dynamic environment is constructed, not only can the three-dimensional dynamic tracking of the sampling points marked by the doctors be realized, but also the autonomy of the sampling process can be realized, the sampling time is shortened, and the sampling quality and efficiency are improved.
Specifically, according to the embodiment of the invention, a focus point to be sampled and a collision avoidance area are marked on an image frame according to the selection of a doctor, and the focus point and the collision avoidance area are positioned on a three-dimensional image; respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the biopsy forceps with multiple degrees of freedom to be under a robot base coordinate system; the method comprises the following steps of (1) controlling the tail end of the biopsy forceps to move from an initial position to a focus position by adopting a multi-target motion fusion control method; after the biopsy forceps reach the position close to the focus point, the contact force between the tail end of the biopsy forceps and the tissue is obtained according to a preset autonomous sampling control system, and the clamping and sampling operation is completed; and controlling the tail end of the multi-degree-of-freedom biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Example (b):
as shown in fig. 1, an embodiment of the present invention provides a robotic autonomous biopsy sampling method facing an in vivo flexible dynamic environment, comprising:
s1, reading a laparoscope image, marking a focus point to be sampled and a collision avoidance area on an image frame according to the selection of a doctor, and positioning the focus point and the collision avoidance area on a three-dimensional image;
s2, respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the multi-degree-of-freedom biopsy forceps into a robot base coordinate system;
s3, controlling the tail end of the biopsy forceps to move from the initial position to the focus position by adopting a multi-target motion fusion control method; the multiple targets comprise planned path tracking from the initial position to the target point, target point tracking and collision avoidance;
s4, after the focus point is approached, stopping moving the tail end of the biopsy forceps and opening the biopsy forceps to obtain the contact force between the tail end of the biopsy forceps and the tissue to finish clamping and sampling operation;
and S5, controlling the tail end of the multi-degree-of-freedom biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again.
The embodiment of the invention not only can realize the three-dimensional dynamic tracking of the sampling point marked by the doctor, but also can realize the autonomy of the sampling process, shorten the sampling time and improve the sampling quality and efficiency.
The following will describe each step of the above technical solution in detail with reference to specific contents:
in step S1, reading a laparoscopic image, marking a lesion point to be sampled and a collision avoidance area on an image frame according to a selection of a doctor, and positioning to a three-dimensional image, including:
s11, reading the laparoscope image, and marking a focus point xp_0 to be sampled and a collision avoidance area Ad on the initial image frame according to the selection of a doctor;
s12, marking a plurality of key characteristic points Px and Pa near the focus point and the collision avoidance area respectively, and determining an area with the size of L multiplied by L by taking the focus point and the collision avoidance area as centers respectively;
s13, carrying out depth estimation on the laparoscope image to obtain a depth image corresponding to the endoscope image; acquiring spatial information and color information of each pixel point from the depth image and the endoscope image to construct a three-dimensional point cloud;
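As a concrete illustration of the back-projection in S13, the sketch below builds the point cloud from a depth map under a pinhole camera model; the intrinsics fx, fy, cx, cy and the function name are assumptions, since the patent does not specify the camera model:

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth image into a colored 3-D point cloud
    (pinhole model; units follow the depth map)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # lateral coordinate
    y = (v - cy) * z / fy          # vertical coordinate
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)    # color information per pixel
    return points, colors
```

Each pixel thus contributes one 3-D point carrying the color of the corresponding endoscope-image pixel.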
s14, registering continuous three-dimensional point clouds through feature extraction and matching to obtain a coordinate transformation parameter rotation matrix R and a translational vector t, and converting the source point clouds into a target point cloud under the same coordinate system;
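The rotation matrix R and translational vector t of S14 can be recovered from matched feature points with the standard SVD-based (Kabsch) least-squares solution; this is an illustrative stand-in, since the patent does not state which registration algorithm is used:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping matched source
    points onto target points: dst ~ R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With exact correspondences the recovered (R, t) converts the source cloud into the target cloud's coordinate system, as step S14 requires.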
s15, performing feature matching of the focus point and the collision avoidance area on two adjacent frames of images by using an optical flow method, acquiring the moving direction and distance between the two frames by using the average difference value of the pixel coordinates of the focus point and collision avoidance area point pairs, and acquiring the time-varying focus point endoscope visual image position xp_c(t) and collision avoidance area endoscope visual image position Ad_c(t).
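The frame-to-frame update of S15 — shifting the marked position by the average pixel displacement of the matched point pairs — can be sketched as follows (the optical-flow matching itself, e.g. pyramidal Lucas–Kanade, is assumed to supply the point pairs):

```python
import numpy as np

def update_marker(marker_xy, pts_prev, pts_curr):
    """Shift a marked pixel position by the average displacement of
    key feature point pairs matched between two adjacent frames."""
    flow = np.asarray(pts_curr, float) - np.asarray(pts_prev, float)
    return np.asarray(marker_xy, float) + flow.mean(axis=0)
```

Applying this update every frame yields the time-varying positions xp_c(t) and Ad_c(t) without requiring the doctor to re-mark the lesion.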
In the step, a doctor can mark a focus point to be sampled on the image, so that the method is visual, convenient and simple, and can improve the working efficiency of the doctor.
In step S2, converting the three-dimensional positions of the focus point and the collision avoidance region on the three-dimensional image and the three-dimensional position of the end of the multi-degree-of-freedom bioptome into a robot-based coordinate system, respectively, includes:
s21, measuring and calculating the space coordinate transformation matrix between the laparoscope coordinate system and the robot coordinate system through an optical locator; converting the endoscope visual image three-dimensional coordinate xp_c(t) of the focus point into the sampling robot base coordinate system to acquire the three-dimensional pose xp_r(t) of the focus point under the robot coordinate system, and converting the endoscope visual three-dimensional coordinate Ad_c(t) of the collision avoidance area into the sampling robot base coordinate system to acquire the three-dimensional pose Ad_r(t) of the collision avoidance area under the robot coordinate system;
S22, according to the coordinate transformation matrix from the tail end of the multi-degree-of-freedom biopsy forceps to the tail end of the robot, acquiring the three-dimensional pose xb_r(t_s) of the tail end of the multi-degree-of-freedom biopsy forceps under the robot base coordinate system at the initial time t_s;
wherein xr_r(t_s) is the three-dimensional pose of the tail end of the mechanical arm under the robot base coordinate system at the initial time t_s;
setting a fixed RCM point xm according to the spatial position of the focus point, and constraining the posture of the robot in the autonomous sampling process to q_t under the RCM point constraint;
Y_t = Z_t × X_{t-1}
q_t = [X_t Y_t Z_t]
wherein xb(t) is the position of the tail end of the bioptome, X_t, Y_t and Z_t are respectively the X-axis, Y-axis and Z-axis vectors of the attitude matrix, and xb(t_s) is the position vector of xb_r(t_s) at the initial time t_s.
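The RCM-constrained attitude q_t above can be sketched as below. The definitions of Z_t (taken here as the unit vector from the forceps tip xb(t) toward the RCM point xm, i.e. along the instrument shaft) and of X_t (closing the right-handed frame) are assumptions, since only Y_t = Z_t × X_{t-1} and q_t = [X_t Y_t Z_t] are given in the text:

```python
import numpy as np

def rcm_attitude(xb_t, xm, x_prev):
    """Build the RCM-constrained attitude matrix q_t = [X_t Y_t Z_t].
    Assumes Z_t points along the instrument shaft (tip -> RCM point)
    and uses Y_t = Z_t x X_{t-1} as in the text."""
    z = xm - xb_t
    z = z / np.linalg.norm(z)          # Z_t: shaft direction (assumed)
    y = np.cross(z, x_prev)
    y = y / np.linalg.norm(y)          # Y_t = Z_t x X_{t-1}, normalized
    x = np.cross(y, z)                 # X_t closes the right-handed frame
    return np.column_stack([x, y, z])
```

Note the cross product degenerates if the shaft direction becomes parallel to the previous X axis; a production controller would have to handle that singular configuration.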
In the step S3, a multi-target motion fusion control method is adopted to control the tail end of the biopsy forceps to move from the initial position to the focus position; the multiple targets comprise planning path tracking, target point tracking and collision avoidance from an initial position to a target point; the method comprises the following steps:
s31, constructing a total linear control system and establishing a state equation, and designing target controllers respectively according to the requirements of each target realization in the laparoscopic surgery scene; the target controller comprises a planned path tracking controller, a target guiding controller and a collision avoidance controller.
S311, constructing a total linear control system and establishing a state equation;
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
wherein x(t) is the motion state of the system at time t, ẋ(t) is the motion speed of the system at time t, u(t) is the total control input of the system at time t, y(t) is the total control output of the system at time t, and A, B, C, D are the calculation parameters of the state equation;
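A minimal discrete-time sketch of the total linear control system of S311, using Euler integration; the matrices A, B, C, D are left as design parameters in the patent, so any values used with this sketch are placeholders:

```python
import numpy as np

def simulate(A, B, C, D, u_seq, x0, dt):
    """Roll the linear system x' = A x + B u, y = C x + D u forward
    with a simple Euler step, returning the output sequence y(t)."""
    x = np.asarray(x0, float)
    ys = []
    for u in u_seq:
        ys.append(C @ x + D @ u)       # total control output at this step
        x = x + dt * (A @ x + B @ u)   # Euler integration of the state
    return np.array(ys)
```

For example, a pure integrator (A = 0, B = C = I, D = 0) driven by a constant input ramps its output linearly, which is the simplest sanity check of the state equation.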
s312, respectively designing a target controller according to the target realization requirements in the laparoscopic surgery scene;
u_i(t) = f(y(t), r_i(t))
wherein u_i(t) is the control input of controller i, a function of the total control output of the system at time t and the desired control target, and r_i(t) is the expected value of the control target at time t.
The target controller comprises a planning path tracking controller, a target guiding controller, a cutting depth limiting controller and a collision avoidance controller;
in order to ensure that the autonomous cutting execution process of the robot follows the planned trajectory, the controller for planned path tracking control is set as follows:
e_s(t) = r_s(t) - y(t)
r_s(t) = ψ(xb(t_s), xp(t))
wherein u_s(t) is the input of the planned path tracking controller, whose PD control uses proportional and differential coefficients; r_s(t) is the desired state of the planned path tracking controller at time t, and e_s(t) is the deviation between the desired position of the planned path tracking control target and the total control output of the system;
in order to ensure the tracking speed of the robot along the cutting path to the target point, the control input is a function of the current position and the target position, and the target guidance controller is specifically:
r_o(t) = xp(t)
wherein u_o(t) is the input of the target guidance controller, whose PD control uses proportional and differential coefficients; r_o(t) is the position of the target point, e_o(t) is the speed at which the target guidance control target approaches the target, and t_f is the time at which the target guidance controller expects to complete the cutting task;
in order to keep the tail end of the instrument away from the collision avoidance area and ensure the safety of the non-target area to the maximum extent, the control input is a function of the current position and the obstacle position, and the collision avoidance controller is set as follows:
cd(t) = ‖y(t) - r_a(t)‖
r_a(t) = H(Ad(t))
wherein u_a(t) is the input of the collision avoidance controller, whose PD control uses proportional and differential coefficients; r_a(t) is the position of the central point of the obstacle Ad(t) at time t, R is the radius of the collision detection area with the obstacle central point as the sphere center, r is the radius of the collision avoidance area with the obstacle central point as the sphere center, cd(t) is the distance between the total control output of the system and the obstacle central point, and ε is a small constant that ensures a large outward repelling speed once the system enters the collision avoidance area.
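The collision avoidance input u_a(t) is described only qualitatively above (a repulsion that grows sharply, via the small constant ε, as cd(t) approaches the avoidance radius r). One plausible realization is sketched below; the exact gain shape is an assumption, not the patent's formula:

```python
import numpy as np

def collision_avoidance_input(y_t, ra_t, R, r, eps=1e-3, kp=1.0):
    """Repulsive velocity pushing the tool tip away from the obstacle
    centre ra_t: zero outside the detection sphere R, and growing
    sharply as the distance cd(t) approaches the avoidance radius r."""
    diff = np.asarray(y_t, float) - np.asarray(ra_t, float)
    cd = np.linalg.norm(diff)                  # cd(t) = ||y(t) - ra(t)||
    if cd >= R:
        return np.zeros_like(diff)             # no influence outside detection radius
    gain = kp * (1.0 / max(cd - r, eps) - 1.0 / (R - r))
    return gain * diff / cd                    # repel along the radial direction
```

The `eps` floor plays the role of ε: it bounds the denominator so the repelling speed stays large but finite inside the avoidance sphere.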
And S32, respectively establishing a corresponding motion control prediction model and a target evaluation function for the planned path tracking controller, the target guidance controller and the collision avoidance controller, estimating the motion state of a future prediction time interval on the basis of the system motion state at the current moment, and calculating a corresponding target evaluation function accumulated value.
S321, respectively establishing corresponding motion control prediction models for the planned path tracking controller, the target guidance controller and the collision avoidance controller;
wherein the quantities in the prediction model are, respectively, the predicted motion state of controller i at time t within the prediction interval, the predicted motion speed of controller i at time t within the interval, and the predicted control output of control target i at time t within the prediction interval;
s322, establishing target evaluation functions for the planned path tracking controller, the target guidance controller and the collision avoidance controller respectively and, combining the corresponding motion control prediction models, estimating the motion state of a future prediction interval on the basis of the system motion state at the current time t_0 and calculating the corresponding target evaluation function accumulated value;
wherein J_i(t_0) is the accumulated value of the objective function of the i-th controller at time t_0, G_i is the target evaluation function value of the i-th controller at a single time point within the prediction interval t_0 to t_0 + T, and T is the prediction time interval;
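The accumulation of S322 — J_i(t_0) as the sum of the per-step evaluation G_i over the prediction interval t_0 to t_0 + T — can be sketched as a rollout; the velocity-level prediction model (ẋ = u) and the caller-supplied cost G are simplifying assumptions standing in for the patent's concrete prediction models:

```python
import numpy as np

def rollout_cost(x0, controller, G, T, dt):
    """Accumulate a controller's target evaluation function over the
    prediction interval [t0, t0 + T]: J = sum of G(x, u) per step."""
    x = np.asarray(x0, float)
    J = 0.0
    for _ in range(int(round(T / dt))):
        u = controller(x)          # predicted control output at this step
        J += G(x, u) * dt          # per-step evaluation, weighted by dt
        x = x + dt * u             # velocity-level prediction model (assumed)
    return J
```

The same rollout is run once per controller (path tracking, target guidance, collision avoidance), each with its own G, to obtain J_s, J_o and J_a.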
wherein, in S322:
establishing a target evaluation function aiming at the planned path tracking control target, and evaluating the effectiveness of the planned path tracking controller in a prediction interval;
wherein J_s(t_0) is the objective function value of the planned path tracking controller at time t_0, G_s is the target evaluation function value of the planned path tracking controller at a single time point within the prediction interval t_0 to t_0 + T, evaluated on the predicted control output of the planned path tracking control target at time t within the prediction interval;
establishing a target evaluation function aiming at a target guide control target, and evaluating the effectiveness of a target guide controller in a prediction interval;
wherein J_o(t_0) is the objective function value of the target guidance controller at time t_0, G_o is the target evaluation function value of the target guidance controller at a single time point within the prediction interval t_0 to t_0 + T, evaluated on the predicted control output of the target guidance control target at time t within the prediction interval;
establishing a target evaluation function aiming at a collision avoidance control target, and evaluating the effectiveness of a collision avoidance controller in a prediction interval;
wherein J_a(t_0) is the objective function value of the collision avoidance controller at time t_0, G_a is the target evaluation function value of the collision avoidance controller at a single time point within the prediction interval t_0 to t_0 + T, evaluated on the predicted control output of the collision avoidance control target at time t within the prediction interval, and rz is a small constant.
S33, respectively calculating the target gradient of the target evaluation function accumulated value of each controller at the current moment; and sequentially nesting and fusing target gradient values corresponding to the controllers from low to high according to a preset weight function and a weight hierarchical sequence, and adding the target gradient values into the total control input of the system.
S331, respectively calculating the current state t of each controller target evaluation function in an optimization mode 0 A decreasing gradient of time;
wherein, g s (t 0 )、g o (t 0 )、g a (t 0 ) A planned path tracking controller, a target guidance controller and a collision avoidance controller t, respectively 0 Target evaluation function of timeReducing the gradient;
s332, inputting the parameters of all control targets and the importance degree of each control target, and sorting the controllers by target importance to determine the priority of each controller (in terms of priority: collision avoidance controller > target guidance controller > planned path tracking controller);
M = [g_s, g_o, g_a]
wherein M is a weight hierarchy sequence of the target controller;
s333, sequentially nesting and calculating fused target gradient values from low to high according to the weight hierarchy, and obtaining the target gradient values after nesting and fusion of 3 controllers;
wherein the normalization function acts on the gradient g, α denotes a stratification parameter, and w_L(t_0) is the target gradient value after nesting and fusion of the 3 controllers;
s334, the nested and fused target gradient values are added to the fused controller, so that the motion fusion of various different target motion controllers is realized;
u(t) = u_s(t) + u_o(t) + u_a(t) - K·w_L(t)
wherein K is a proportionality coefficient.
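S332–S334's low-to-high nesting of the three gradients can be sketched as below. The patent's normalization function and the exact role of the stratification parameter α are not reproduced in the text, so unit-norm normalization and a convex blend weighted by α are assumptions made purely for illustration:

```python
import numpy as np

def nested_fusion(gradients, alpha=0.5):
    """Nest gradients from lowest to highest priority: each higher-
    priority gradient is blended with the fusion of those below it.
    `gradients` is ordered low -> high, e.g. [g_s, g_o, g_a]."""
    def normalize(g):
        n = np.linalg.norm(g)
        return g / n if n > 0 else g
    w = normalize(np.asarray(gradients[0], float))
    for g in gradients[1:]:
        # higher-priority gradient enters the running fusion with weight alpha
        w = normalize(alpha * normalize(np.asarray(g, float)) + (1 - alpha) * w)
    return w
```

The fused direction w_L is then subtracted from the summed controller inputs with gain K, per the fusion equation above.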
And S34, converting the fused position output into a joint angle of the surgical robot, and realizing the autonomous cutting operation of the robot in the process of moving from the initial position to the focus position. Wherein, converting the fused position output into the joint angle of the surgical robot specifically means:
xb(t)=y(t)
θ(t)=ζ(q(t))
wherein the function ζ represents the conversion of the attitude rotation matrix into the Euler angles of the Cartesian coordinate system, θ(t) are the Euler angles in the Cartesian coordinate system, the remaining quantities are the change speed of the Euler angles and the rotational speed of the robot joint angles, and J^{-1}(Θ) is the inverse Jacobian matrix.
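The inverse-Jacobian conversion of S34 (Cartesian velocity to joint-angle velocity) can be sketched as a linear solve; the Jacobian itself depends on the robot's actual kinematics and is simply passed in here:

```python
import numpy as np

def cartesian_to_joint_velocity(J, twist):
    """Map a Cartesian velocity (translational plus Euler-angle rates)
    to joint-angle velocity: theta_dot = J^{-1} @ x_dot.
    Solving the linear system avoids forming the inverse explicitly."""
    return np.linalg.solve(J, np.asarray(twist, float))
```

Near kinematic singularities the Jacobian becomes ill-conditioned, so a damped least-squares solve would be preferred in practice.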
In step S4, after the focus point is approached, stopping moving the tail end of the biopsy forceps and opening the biopsy forceps to obtain the contact force between the tail end of the biopsy forceps and the tissue, and finishing the clamping and sampling operation; the method comprises the following steps:
opening the biopsy forceps after approaching the focus position, and starting the autonomous sampling controller of the robot: the biopsy forceps are given a constant speed v_0 along the direction q_t and approach the focus point area, and when the force sensor detects a force the biopsy forceps start to decelerate; when the biopsy forceps contact the surface of the focus area, the force sensor measures the contact force f_d(t) between the biopsy forceps and the tissue surface, and the biopsy forceps are decelerated by PID control until the force sensor reaches the desired stable force f_e and the motion stops;
the autonomous sampling control system is as follows:
ẋ_f(t) = A_f x_f(t) + B_f u_f(t)
y_f(t) = C_f x_f(t) + D_f u_f(t)
e(t) = f_e - f_d(t)
wherein ẋ_f(t) is the speed of the autonomous sampling control system, x_f(t) is its state, u_f(t) is its control input, y_f(t) is its control output, A_f, B_f, C_f and D_f are the parameters of the state equation of the control system, e(t) is the error between the desired force and the actually detected force, and k_p, k_i and k_d are the PID parameters of the system.
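The PID force regulation of the autonomous sampling controller — drive on the error e(t) = f_e - f_d(t) with gains k_p, k_i, k_d — can be sketched with a textbook discrete PID; the class name and any gain values are illustrative assumptions:

```python
class ForcePID:
    """Discrete PID on the force error e = f_e - f_d, producing the
    advance speed of the biopsy forceps along the direction q_t."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_e = None

    def step(self, f_e, f_d):
        e = f_e - f_d                        # force error e(t)
        self.integral += e * self.dt         # integral term accumulator
        de = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.integral + self.kd * de
```

Called once per control cycle with the sensed contact force f_d(t), the output shrinks toward zero as the measured force settles at f_e, which is the stopping condition described above.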
The motion speed output and the attitude change speed obtained by the controller are converted into the joint angles of the robot, so that the robot samples autonomously:
xb(t)=y f (t)
θ(t)=ζ(q(t))
wherein the converted quantity is the rotational speed of the robot joint angles and J^{-1}(Θ) is the inverse Jacobian matrix.
In step S5, the multi-target motion fusion control method is adopted again to control the tail end of the multi-degree-of-freedom biopsy forceps to return from the sampling end position to the initial position, comprising:
after the biopsy forceps complete the clamping and sampling operation, the sampling control system is exited and a return operation is performed: based on the initial position xb(t_s) of the tail end of the biopsy forceps, the robot executes collision-free target tracking control from the current tail end of the multi-degree-of-freedom biopsy forceps to the initial position of the tail end of the biopsy forceps, and the multi-target motion fusion method introduced in step S3 is adopted to realize the fusion control of path tracking from the sampling end position xb(t_e) to xb(t_s), target point xb(t_s) tracking and collision avoidance of Ad(t), thereby realizing the intraoperative autonomous sampling operation of laparoscopic surgery, which is not repeated here.
Therefore, the autonomous sampling method provided by the embodiment of the invention gets rid of the dependence of the traditional sampling on the experience and the operation capability of doctors, and improves the efficiency and the accuracy of biopsy sampling.
The embodiment of the invention provides a robot autonomous biopsy sampling system facing to an in-vivo flexible dynamic environment, which comprises:
the marking module is used for reading the laparoscope image, marking a focus point to be sampled and a collision avoidance area on the image frame according to the selection of a doctor, and positioning the focus point to be sampled and the collision avoidance area on the three-dimensional image;
the conversion module is used for respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the multi-degree-of-freedom biopsy forceps into a robot base coordinate system;
the moving module is used for controlling the tail end of the biopsy forceps to move from the initial position to the focus position by adopting a multi-target motion fusion control method; the multiple targets comprise planned path tracking from an initial position to a target point, target point tracking and collision avoidance;
the sampling module is used for acquiring the contact force between the tail end of the biopsy forceps and tissues according to a preset autonomous sampling control system after the sampling module reaches the position close to the focus point, so as to finish the clamping and sampling operation;
and the returning module is used for controlling the tail end of the multi-degree-of-freedom biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again.
Embodiments of the present invention provide a storage medium storing a computer program for robotic autonomous biopsy sampling oriented to an in vivo flexible dynamic environment, wherein the computer program causes a computer to perform a robotic autonomous biopsy sampling method as described above.
An embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the robotic autonomous biopsy sampling method as described above.
It can be understood that the system, the storage medium, and the electronic device for robotic autonomous biopsy sampling oriented to in vivo flexible dynamic environment provided by the embodiment of the present invention correspond to the method for robotic autonomous biopsy sampling oriented to in vivo flexible dynamic environment provided by the embodiment of the present invention, and for explanation, examples, and beneficial effects of the relevant contents, reference may be made to corresponding parts in the method for robotic autonomous biopsy sampling, and details are not described here.
In summary, compared with the prior art, the method has the following beneficial effects:
1. the embodiment of the invention not only can realize the three-dimensional dynamic tracking of the sampling point marked by the doctor, but also can realize the autonomy of the sampling process, shorten the sampling time and improve the sampling quality and efficiency.
2. In the embodiment of the invention, a doctor can mark the focus point to be sampled on the image, so that the method is visual, convenient and concise, and can improve the working efficiency of the doctor.
3. The autonomous sampling method provided by the embodiment of the invention gets rid of the dependence of the traditional sampling on the experience and the operation capability of doctors, and improves the efficiency and the accuracy of biopsy sampling.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A robotic autonomous biopsy sampling method oriented to an in vivo flexible dynamic environment, comprising:
s1, reading a laparoscope image, marking a focus point to be sampled and a collision avoidance area on an image frame according to the selection of a doctor, and positioning the focus point and the collision avoidance area on a three-dimensional image;
s2, respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the multi-degree-of-freedom biopsy forceps into a robot base coordinate system;
s3, controlling the tail end of the biopsy forceps to move from the initial position to the focus position by adopting a multi-target motion fusion control method; the multiple targets comprise planned path tracking from an initial position to a target point, target point tracking and collision avoidance;
s4, after the focus position is approached, stopping moving the tail end of the biopsy forceps, opening the biopsy forceps, and acquiring the contact force between the tail end of the biopsy forceps and the tissue according to a preset autonomous sampling control system to finish clamping and sampling operation;
and S5, controlling the tail end of the multi-degree-of-freedom biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again.
2. The robotic autonomous biopsy sampling method of claim 1, wherein the S1 comprises:
s11, reading the laparoscope image, and marking a focus point xp_0 to be sampled and a collision avoidance area Ad on the initial image frame according to the selection of a doctor;
s12, marking a plurality of key characteristic points Px and Pa near the focus point and the collision avoidance area respectively, and determining an area with the size of L multiplied by L by taking the focus point and the collision avoidance area as centers respectively;
s13, depth estimation is carried out on the laparoscope image to obtain a depth image corresponding to the endoscope image; acquiring spatial information and color information of each pixel point from the depth image and the endoscope image to construct a three-dimensional point cloud;
s14, registering continuous three-dimensional point clouds through feature extraction and matching to obtain a coordinate transformation parameter rotation matrix R and a translational vector t, and converting the source point clouds into a target point cloud under the same coordinate system;
s15, performing feature matching of the focus point and the collision avoidance area on two adjacent frames of images by using an optical flow method, acquiring the moving direction and distance between the two frames by using the average difference value of the pixel coordinates of the focus point and collision avoidance area point pairs, and acquiring the time-varying focus point endoscope visual image position xp_c(t) and collision avoidance area endoscope visual image position Ad_c(t).
3. The robotic autonomous biopsy sampling method of claim 2, wherein the S2 comprises:
s21, measuring and calculating the space coordinate transformation matrix between the laparoscope coordinate system and the robot coordinate system through an optical locator; converting the endoscope visual image three-dimensional coordinate xp_c(t) of the focus point into the sampling robot base coordinate system to acquire the three-dimensional pose xp_r(t) of the focus point under the robot coordinate system, and converting the endoscope visual three-dimensional coordinate Ad_c(t) of the collision avoidance area into the sampling robot base coordinate system to acquire the three-dimensional pose Ad_r(t) of the collision avoidance area under the robot coordinate system;
S22, according to the coordinate transformation matrix from the tail end of the multi-degree-of-freedom biopsy forceps to the tail end of the robot, acquiring the three-dimensional pose xb_r(t_s) of the tail end of the multi-degree-of-freedom biopsy forceps under the robot base coordinate system at the initial time t_s;
wherein xr_r(t_s) is the three-dimensional pose of the tail end of the mechanical arm under the robot base coordinate system at the initial time t_s;
setting a fixed RCM point xm according to the spatial position of the focus point, and constraining the posture of the robot in the autonomous sampling process to q_t under the RCM point constraint;
Y_t = Z_t × X_{t-1}
q_t = [X_t Y_t Z_t]
wherein xb(t) is the position of the tail end of the bioptome, X_t, Y_t and Z_t are respectively the X-axis, Y-axis and Z-axis vectors of the attitude matrix, and xb(t_s) is the position vector of xb_r(t_s) at the initial time t_s.
4. The robotic autonomous biopsy sampling method of claim 3, wherein the S3 comprises:
S31, constructing a total linear control system and establishing its state equation, and designing target controllers according to the requirements of each objective in the laparoscopic surgery scene; the target controllers comprise a planned path tracking controller, a target guidance controller and a collision avoidance controller;
S32, establishing a corresponding motion control prediction model and target evaluation function for each of the planned path tracking controller, the target guidance controller and the collision avoidance controller, estimating the motion state over a future prediction interval from the system motion state at the current time, and calculating the corresponding accumulated value of the target evaluation function;
S33, calculating the target gradient of each controller's accumulated target evaluation function value at the current time, nesting and fusing the controllers' target gradient values in order from low to high according to a preset weight function and weight hierarchy sequence, and adding the result to the total control input of the system;
and S34, converting the fused position output into joint angles of the surgical robot, so that the robot autonomously performs the cutting operation while moving from the initial position to the focus position.
5. The robotic autonomous biopsy sampling method of claim 4,
the S31 includes:
S311, constructing a total linear control system and establishing its state equation:
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
wherein x(t) is the motion state of the system at time t, ẋ(t) is the motion speed of the system at time t, u(t) is the total control input of the system at time t, y(t) is the total control output of the system at time t, and A, B, C and D are the parameters of the state equation;
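As a rough illustration of the state equation above, the sketch below integrates ẋ(t) = Ax(t) + Bu(t) with a forward-Euler step and evaluates y(t) = Cx(t) + Du(t). The double-integrator matrices, the step size, and the constant input are illustrative assumptions, not values from the patent.

```python
import numpy as np

def step(x, u, A, B, C, D, dt=0.01):
    """One forward-Euler step of the continuous state equation
    x_dot = A x + B u, followed by the output y = C x + D u."""
    x_next = x + dt * (A @ x + B @ u)   # integrate the state
    y = C @ x_next + D @ u              # total control output
    return x_next, y

# Double-integrator example: state = [position, velocity], input = force.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

x = np.zeros(2)
for _ in range(100):                    # 1 s of simulated time
    x, y = step(x, np.array([1.0]), A, B, C, D)
```

After 100 steps of constant unit input the velocity reaches 1.0 and the position the Euler-integrated 0.495, which is how such a linear model would be propagated inside a prediction interval.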
S312, designing a target controller for each objective in the laparoscopic surgery scene:
u_i(t) = f(y(t), r_i(t))
wherein u_i(t) is the control input of controller i, a function of the total control output of the system at time t and the desired control target, and r_i(t) is the expected value of the control target at time t;
the target controllers comprise a planned path tracking controller, a target guidance controller, a cutting depth limiting controller and a collision avoidance controller;
for the planned path tracking controller:
e_s(t) = r_s(t) − y(t)
r_s(t) = ψ(xb(t_s), xp(t))
wherein u_s(t) is the input of the planned path tracking controller, kp_s and kd_s are the proportional and differential coefficients of its PD control, r_s(t) is the desired state of the planned path tracking controller at time t, and e_s(t) is the deviation between the desired position of the object controlled by the planned path tracking controller and the total control output of the system;
for the target guidance controller:
r_o(t) = xp(t)
wherein u_o(t) is the input of the target guidance controller, kp_o and kd_o are the proportional and differential coefficients of its PD control, r_o(t) is the position of the target point, e_o(t) is the speed at which the object controlled by the target guidance controller approaches the target, and t_f is the time at which the target guidance controller is expected to complete the cutting task;
for the collision avoidance controller:
cd(t) = ‖y(t) − r_a(t)‖
r_a(t) = H(Ad(t))
wherein u_a(t) is the input of the collision avoidance controller, kp_a and kd_a are the proportional and differential coefficients of its PD control, r_a(t) is the position of the center point of the obstacle Ad(t) at time t, R is the radius of the spherical collision avoidance region centered on the obstacle center point, cd(t) is the distance between the total control output of the system and the obstacle center point, and ε is a very small constant;
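A possible form of the collision avoidance input, sketched under stated assumptions: the claim defines cd(t) and the spherical region of radius R, but the published text does not reproduce the u_a(t) expression, so the repulsive PD term on the penetration depth R − cd(t) below is an assumed stand-in; ε guards the division near the obstacle center, as in the claim.

```python
import numpy as np

def collision_avoidance_input(y_t, r_a, R, kp=1.0, kd=0.0,
                              cd_prev=None, dt=0.01, eps=1e-6):
    """Assumed collision avoidance law: inside the sphere of radius R
    around the obstacle centre r_a(t), push the end-effector away along
    (y - r_a) with a PD gain on the penetration depth R - cd(t);
    outside the sphere the input is zero."""
    diff = y_t - r_a
    cd = np.linalg.norm(diff)                 # cd(t) = ||y(t) - r_a(t)||
    if cd >= R:
        return np.zeros_like(y_t)             # no obstacle influence
    direction = diff / (cd + eps)             # unit escape direction, eps-guarded
    pen = R - cd                              # penetration depth
    d_pen = 0.0 if cd_prev is None else (cd_prev - cd) / dt
    return (kp * pen + kd * d_pen) * direction

# End-effector 5 cm above the obstacle centre, sphere radius 10 cm.
u = collision_avoidance_input(np.array([0.0, 0.0, 0.05]),
                              np.array([0.0, 0.0, 0.0]), R=0.1)
```

The input vanishes smoothly at the sphere boundary and grows as the tool penetrates deeper, which matches the intent of a dedicated collision avoidance controller in the fusion scheme.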
and/or said S32 comprises:
S321, establishing a corresponding motion control prediction model for each of the planned path tracking controller, the target guidance controller and the collision avoidance controller;
wherein the prediction model gives, for controller i, the predicted motion state at time t within the prediction interval, the predicted motion speed at time t within the interval, and the predicted control output of control target i at time t within the prediction interval;
S322, establishing a target evaluation function for each of the planned path tracking controller, the target guidance controller and the collision avoidance controller, and, combining the corresponding motion control prediction models, estimating the motion state over a future prediction interval from the system motion state at the current time t_0 and calculating the corresponding accumulated value of the target evaluation function;
wherein J_i(t_0) is the accumulated value of the objective function of the i-th controller at time t_0, G_i is the target evaluation function value of the i-th controller at a single time point within the prediction interval t_0 to t_0 + T, and T is the prediction interval;
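The accumulation of the target evaluation function over the prediction interval can be sketched generically as below; `predict_step` and `G` are hypothetical placeholders for the per-controller prediction model and single-time-point evaluation function, which the published text does not reproduce.

```python
def cumulative_objective(x0, u_seq, predict_step, G, dt=0.01):
    """Roll the motion-control prediction model forward over
    [t0, t0 + T] and accumulate the target evaluation function:
    J_i(t0) ~ sum of G at each predicted time point times dt.

    predict_step(x, u) -> next state and G(x) -> scalar cost are
    stand-ins for the (unpublished) per-controller models.
    """
    J = 0.0
    x = x0
    for u in u_seq:               # horizon T = len(u_seq) * dt
        x = predict_step(x, u)
        J += G(x) * dt            # accumulate the evaluation function
    return J

# Toy example: 1-D state stepping toward a target at 1.0, quadratic cost.
J = cumulative_objective(
    x0=0.0,
    u_seq=[0.1] * 10,
    predict_step=lambda x, u: x + u,
    G=lambda x: (x - 1.0) ** 2)
```

Each controller would evaluate its own G over the same horizon, giving the per-controller J_i(t_0) values whose gradients are then fused in S33.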
wherein, in S322:
establishing a target evaluation function for the planned path tracking control target, to evaluate the effectiveness of the planned path tracking controller within the prediction interval;
wherein J_s(t_0) is the objective function value of the planned path tracking controller at time t_0, and G_s is the target evaluation function value of the planned path tracking controller at a single time point within the prediction interval t_0 to t_0 + T, computed from the predicted control output of the planned path tracking control target at time t within the prediction interval;
establishing a target evaluation function for the target guidance control target, to evaluate the effectiveness of the target guidance controller within the prediction interval;
wherein J_o(t_0) is the objective function value of the target guidance controller at time t_0, and G_o is the target evaluation function value of the target guidance controller at a single time point within the prediction interval t_0 to t_0 + T, computed from the predicted control output of the target guidance control target at time t within the prediction interval;
establishing a target evaluation function for the collision avoidance control target, to evaluate the effectiveness of the collision avoidance controller within the prediction interval;
6. The robotic autonomous biopsy sampling method of claim 5,
the S33 includes:
S331, calculating, by means of optimization, the descending gradient of each controller's target evaluation function at the current time t_0;
wherein g_s(t_0), g_o(t_0) and g_a(t_0) are the descending gradients of the target evaluation functions of the planned path tracking controller, the target guidance controller and the collision avoidance controller at time t_0, respectively;
S332, inputting the parameters of each control target and the importance degree of each control target, and sorting the controllers by target importance to determine the priority of each controller;
M = [g_s, g_o, g_a]
wherein M is the weight hierarchy sequence of the target controllers;
S333, nesting and calculating the fused target gradient values in order from low to high in the weight hierarchy, to obtain the target gradient value after nested fusion of the 3 controllers;
wherein a normalization function is applied to the gradient f, α denotes a layering parameter, and w_L(t_0) is the target gradient value after nested fusion of the 3 controllers;
S334, adding the nested and fused target gradient value to the fused controller, so as to realize motion fusion of the different target motion controllers;
u(t) = u_s(t) + u_o(t) + u_a(t) − K·w_L(t)
wherein K is a proportionality coefficient;
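A minimal sketch of the nested gradient fusion in S333, under loud assumptions: the actual normalization function and nesting rule are not reproduced in the published text, so the unit-norm normalization and the blending w ← normalise(normalise(g) + α·w) below are illustrative choices; only the low-to-high nesting order over M = [g_s, g_o, g_a] follows the claim.

```python
import numpy as np

def nested_fusion(gradients, alpha=0.5):
    """Nest the controllers' objective gradients from the lowest-priority
    entry of M = [g_s, g_o, g_a] up to the highest-priority one.
    Normalization rule and alpha are assumptions, not from the patent."""
    def normalise(g):
        n = np.linalg.norm(g)
        return g / n if n > 0 else g

    w = normalise(gradients[-1])           # start from the lowest priority
    for g in reversed(gradients[:-1]):     # fold in higher priorities
        w = normalise(normalise(g) + alpha * w)
    return w                               # fused gradient w_L(t0)

# Toy gradients for the three controllers (path, guidance, avoidance).
g_s = np.array([1.0, 0.0])
g_o = np.array([0.0, 1.0])
g_a = np.array([1.0, 1.0])
w_L = nested_fusion([g_s, g_o, g_a])
```

With this layering, the highest-priority gradient dominates the fused direction while lower-priority gradients contribute scaled by powers of α, which is one plausible reading of a "weight hierarchy sequence".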
and/or, in S34, converting the fused position output into joint angles of the surgical robot specifically means:
xb(t)=y(t)
θ(t)=ζ(q(t))
wherein the function ζ denotes conversion of the attitude rotation matrix into the Euler angles of the Cartesian coordinate system, θ(t) is the Euler angle in the Cartesian coordinate system, θ̇(t) is the speed at which the Euler angle changes, Θ̇(t) is the rotational speed of the robot joint angles, and J^(−1)(Θ) is the inverse of the Jacobian matrix.
7. The robotic autonomous biopsy sampling method of any one of claims 3 to 6, wherein the S4 comprises:
opening the biopsy forceps after approaching the focus position, and starting the robot autonomous sampling controller: the biopsy forceps approach the focus point area along the direction q_t at a constant speed v_0, and when the force sensor detects a force, the biopsy forceps start to decelerate; when the biopsy forceps contact the surface of the focal zone, the force sensor measures the force f_d(t) of the biopsy forceps contacting the tissue surface, and deceleration of the biopsy forceps is driven by PID control until the force sensor reaches the desired stable force f_e, whereupon the motion stops;
the autonomous sampling control system is as follows:
ẋ_f(t) = A_f·x_f(t) + B_f·u_f(t)
y_f(t) = C_f·x_f(t) + D_f·u_f(t)
e(t) = f_e − f_d(t)
u_f(t) = k_p·e(t) + k_i·∫e(τ)dτ + k_d·ė(t)
wherein ẋ_f(t) is the speed of the autonomous sampling control system, x_f(t) is the state of the autonomous sampling control system, u_f(t) is the control input of the autonomous sampling control system, y_f(t) is the control output of the autonomous sampling control system, A_f, B_f, C_f and D_f are parameters of the control system state equation, e(t) is the error between the desired force and the actually sensed force, and k_p, k_i and k_d are the PID parameters of the system;
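The force-regulation loop of claim 7 can be sketched as a textbook PID on e(t) = f_e − f_d(t); the gains, time step, and the first-order contact model used to exercise it below are illustrative assumptions, not values from the patent.

```python
class ForcePID:
    """PID regulation of the biopsy forceps contact force: drive the
    measured force f_d(t) toward the desired stable force f_e."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_e = None

    def update(self, f_e, f_d):
        e = f_e - f_d                               # e(t) = f_e - f_d(t)
        self.integral += e * self.dt                # integral of the error
        de = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        # u_f(t) = kp*e + ki*integral(e) + kd*de (advance/retract command)
        return self.kp * e + self.ki * self.integral + self.kd * de

# Exercise the loop against a toy first-order contact model (assumption):
# the measured force responds proportionally to the commanded speed.
pid = ForcePID(kp=0.5, ki=0.1, kd=0.01, dt=0.01)
f_d = 0.0
for _ in range(4000):
    u_f = pid.update(f_e=1.0, f_d=f_d)
    f_d += u_f * 0.01
# f_d settles near the desired force f_e = 1.0
```

The integral term removes the steady-state force error, which is what lets the forceps hold a stable grip force before the clamping and sampling operation.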
the motion speed output obtained by the controller and the speed of the attitude change are converted into joint angles of the robot, so that the robot samples autonomously:
xb(t)=y f (t)
θ(t)=ζ(q(t))
8. A robotic autonomous biopsy sampling system oriented to an in vivo flexible dynamic environment, comprising:
the marking module is used for reading the laparoscope image, marking a focus point to be sampled and a collision avoidance area on an image frame according to the selection of a doctor, and positioning the focus point and the collision avoidance area on a three-dimensional image;
the conversion module is used for respectively converting the three-dimensional positions of the focus point and the collision avoidance area on the three-dimensional image and the three-dimensional position of the tail end of the multi-degree-of-freedom biopsy forceps into a robot base coordinate system;
the moving module is used for controlling the tail end of the biopsy forceps to move from the initial position to the focus position by adopting a multi-target motion fusion control method; the multiple targets comprise planned path tracking from an initial position to a target point, target point tracking and collision avoidance;
the sampling module is used for acquiring the contact force between the tail end of the biopsy forceps and the tissue according to a preset autonomous sampling control system after the tail end of the biopsy forceps reaches the vicinity of the focus point, and completing the clamping and sampling operations;
and the returning module is used for controlling the tail end of the multi-degree-of-freedom biopsy forceps to return to the initial position from the sampling end position by adopting the multi-target motion fusion control method again.
9. A storage medium, characterized in that it stores a computer program for robotic autonomous biopsy sampling oriented to an in vivo flexible dynamic environment, wherein the computer program causes a computer to perform the robotic autonomous biopsy sampling method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the robotic autonomous biopsy sampling method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210779466.0A CN115192092B (en) | 2022-07-04 | Robot autonomous biopsy sampling method oriented to in-vivo flexible dynamic environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115192092A true CN115192092A (en) | 2022-10-18 |
CN115192092B CN115192092B (en) | 2024-06-25 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120302878A1 (en) * | 2010-02-18 | 2012-11-29 | Koninklijke Philips Electronics N.V. | System and method for tumor motion simulation and motion compensation using tracked bronchoscopy |
US20160174817A1 (en) * | 2011-08-21 | 2016-06-23 | M.S.T. Medical Surgery Technologies Ltd | Device and method for asissting laparoscopic surgery rule based approach |
US20180099408A1 (en) * | 2016-10-11 | 2018-04-12 | Fanuc Corporation | Control device for controlling robot by learning action of person, robot system, and production system |
CN108601626A (en) * | 2015-12-30 | 2018-09-28 | 皇家飞利浦有限公司 | Robot guiding based on image |
CN109343350A (en) * | 2018-11-20 | 2019-02-15 | 清华大学 | A kind of underwater robot path tracking control method based on Model Predictive Control |
CN110161995A (en) * | 2019-06-10 | 2019-08-23 | 北京工业大学 | Municipal sewage treatment procedure optimization control method based on dynamic multi-objective particle swarm algorithm |
CN113490464A (en) * | 2019-02-28 | 2021-10-08 | 皇家飞利浦有限公司 | Feedforward continuous positioning control for end effector |
CN113633387A (en) * | 2021-06-21 | 2021-11-12 | 安徽理工大学 | Surgical field tracking supporting laparoscopic minimally invasive robot touch force interaction method and system |
CN114367986A (en) * | 2022-01-14 | 2022-04-19 | 上海立升医疗科技有限公司 | Intelligent robot low-temperature biopsy method, device and control system |
Non-Patent Citations (2)
Title |
---|
刘少丽; 杨向东; 冯涛; 陈恳; 梁萍: "Clinical Application of a Three-Dimensional Ultrasound Image-Guided Navigation Robot System", Chinese Journal of Biomedical Engineering, no. 06, 20 December 2009 (2009-12-20), pages 80 - 86 *
朱国昕; 程浩: "Model-Free Adaptive Control of a Catheter Robot System for Minimally Invasive Vascular Surgery", Servo Control, no. 2, 15 April 2015 (2015-04-15), pages 61 - 63 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11864850B2 (en) | Path-based navigation of tubular networks | |
US11403759B2 (en) | Navigation of tubular networks | |
US11779400B2 (en) | Combining strain-based shape sensing with catheter control | |
US11504187B2 (en) | Systems and methods for localizing, tracking and/or controlling medical instruments | |
CN104114337B (en) | Once apparatus enters the viewing area can observed by the operator of input equipment and the control of apparatus will be switched to input equipment | |
JP5384108B2 (en) | Remote control system | |
CN106572887B (en) | Image integration and robotic endoscope control in an X-ray suite | |
US20150287236A1 (en) | Imaging system, operating device with the imaging system and method for imaging | |
JP6706576B2 (en) | Shape-Sensitive Robotic Ultrasound for Minimally Invasive Interventions | |
US20220415006A1 (en) | Robotic surgical safety via video processing | |
Zhang et al. | Image-guided control of an endoscopic robot for OCT path scanning | |
US20230172438A1 (en) | Medical arm control system, medical arm control method, medical arm simulator, medical arm learning model, and associated programs | |
Elek et al. | Robotic platforms for ultrasound diagnostics and treatment | |
WO2022024559A1 (en) | Medical assistance system, medical assistance method, and computer program | |
CN115192092B (en) | Robot autonomous biopsy sampling method oriented to in-vivo flexible dynamic environment | |
CN115192092A (en) | Robot autonomous biopsy sampling method oriented to in-vivo flexible dynamic environment | |
Marahrens et al. | Towards autonomous robotic minimally invasive ultrasound scanning and vessel reconstruction on non-planar surfaces | |
Krupa et al. | Control of an ultrasound probe by adaptive visual servoing | |
Doignon et al. | The role of insertion points in the detection and positioning of instruments in laparoscopy for robotic tasks | |
Wang et al. | Towards autonomous control of surgical instruments using adaptive-fusion tracking and robot self-calibration | |
CN115252140A (en) | Surgical instrument guiding method, surgical robot, and medium | |
WO2022162668A1 (en) | Multi-arm robotic systems for identifying a target | |
Andreff et al. | Epipolar geometry for vision-guided laser surgery | |
WO2020232406A1 (en) | Confidence-based robotically-assisted surgery system | |
US20230115849A1 (en) | Systems and methods for defining object geometry using robotic arms |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |