CN114748169A - Autonomous endoscope moving method of laparoscopic surgery robot based on image experience - Google Patents
- Publication number
- CN114748169A CN114748169A CN202210338000.7A CN202210338000A CN114748169A CN 114748169 A CN114748169 A CN 114748169A CN 202210338000 A CN202210338000 A CN 202210338000A CN 114748169 A CN114748169 A CN 114748169A
- Authority
- CN
- China
- Prior art keywords
- laparoscope
- instrument
- pose
- laparoscopic
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/32—Surgical robots operating autonomously
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2068—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B2034/301—Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Robotics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computational Mathematics (AREA)
- Computing Systems (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Endoscopes (AREA)
Abstract
The invention belongs to the technical field of laparoscopic endoscope-moving, and discloses an autonomous endoscope-moving method for a laparoscopic surgical robot based on image experience. The method comprises the following steps: marking the instrument, performing multi-point calibration of the laparoscopic camera, and estimating the three-dimensional coordinates of the instrument markers from the laparoscopic image under the RCM constraint; acquiring the instrument frequency map of the surgical scene to obtain an evaluation function of the instrument position under the laparoscopic view; establishing the attitude of the laparoscopic camera coordinate system in the visual coordinate system, and constructing the intuitiveness constraint model that the laparoscopic pose must satisfy; constructing a laparoscopic pose optimization model with the evaluation function as the objective, subject to the RCM constraint and the intuitiveness constraint model, so as to obtain the optimal laparoscopic pose; and taking the optimal laparoscopic pose as the desired endoscope pose and converting it into a robot end pose. The endoscope-moving strategy of the invention better conforms to the intention of the chief surgeon; the method is simple and has strong generalization capability.
Description
Technical Field
The invention belongs to the technical field of laparoscopic endoscope-moving, and particularly relates to an autonomous endoscope-moving method for a laparoscopic surgical robot based on image experience.
Background
Minimally invasive laparoscopic surgery causes little trauma to the patient and allows short recovery times, and is therefore widely practiced. Several small incisions are made in the patient's abdomen; the laparoscope and surgical instruments are inserted through these incisions to reach the affected area, and the surgeon operates from outside the body to complete the procedure. During the operation the laparoscope is held by a scope-holding assistant, whom the chief surgeon directs verbally to move the laparoscope and provide the desired view. In actual surgical practice, however, this workflow has several problems: 1. verbal direction is inefficient and increases the psychological burden on the chief surgeon; 2. operations are long, sometimes up to 10 hours, placing a psychological and physiological burden on the scope-holding assistant; 3. training a skilled assistant requires long practical operating experience, at high cost to the hospital.
To address these problems, the prior-art laparorm EH product uses a mechanical stabilizer in place of the scope-holding assistant and mechanically enforces the RCM (Remote Centre of Motion) constraint, so that the chief surgeon can move the laparoscope easily. In addition, the prior art mainly moves the laparoscope in the following ways:
(1) The laparoscope is teleoperated by the chief surgeon through foot control, with specific foot motions mapped to specific laparoscope motions, thereby replacing the scope-holding assistant. Foot control remains unintuitive, and the chief surgeon must be trained to operate the laparoscope dexterously. That is, the method still requires a person to drive the scope and does not fully free the chief surgeon and the scope-holding assistant.
(2) The marker points of the two surgical instruments are tracked by visual servoing, keeping the laparoscopic view centred between the two instruments. Because this method uses a fixed endoscope-moving strategy, the system handles only very limited surgical scenes: the fixed strategy cannot cope with complex scenes and has no generalization capability.
Disclosure of Invention
In view of the above defects or improvement requirements of the prior art, the invention provides an autonomous endoscope-moving method for a laparoscopic surgical robot based on image experience. In the endoscope-moving strategy of the laparoscopic surgical robot, the instruments are marked and their three-dimensional coordinates obtained by an image-based method; an evaluation function of the instrument position under the laparoscopic view is derived from the instrument frequency maps in historical data; an intuitiveness constraint model that the laparoscopic pose must satisfy is constructed, taking as constraint the projection of the X axis of the camera coordinate system onto the Y axis of the visual coordinate system; the optimal laparoscopic pose is then obtained under the double constraint of the RCM constraint and the intuitiveness constraint model, taken as the desired endoscope pose, and converted into a robot end pose. In this way the endoscope-moving strategy matches the surgical scene and better conforms to the surgeon's intention. Moreover, since surgical video data is abundant, the method of deriving the endoscope-moving strategy is simple, applies to different types of surgical scenes, and has strong generalization capability.
In order to achieve the purpose, the invention provides an autonomous endoscope moving method of a laparoscopic surgical robot based on image experience, which comprises the following steps of:
S100, marking the instrument, performing multi-point calibration of the laparoscopic camera, and determining the camera parameters and the homogeneous transformation matrix of the camera relative to the robot end;
S200, under the RCM constraint, estimating the three-dimensional coordinates of the instrument markers from the image acquired by the laparoscope;
S300, acquiring the instrument frequency map of the surgical scene to obtain an evaluation function of the instrument position under the laparoscopic view;
S400, establishing the attitude of the laparoscopic camera coordinate system in the visual coordinate system, and constructing the intuitiveness constraint model that the laparoscopic pose must satisfy, taking as constraint the projection of the X axis of the camera coordinate system onto the Y axis of the visual coordinate system;
S500, with the evaluation function as the objective, constructing a laparoscopic pose optimization model subject to the RCM constraint and the intuitiveness constraint model, so as to obtain the optimal laparoscopic pose;
S600, taking the optimal laparoscopic pose as the desired laparoscopic pose and converting it into the robot end pose.
Further preferably, in step S100, the camera parameters include the laparoscopic camera intrinsic parameters and the laparoscopic camera distortion coefficients.
Preferably, the intrinsic parameter matrix of the laparoscopic camera is:

K = [ f_x  0   u_0 ;
      0    f_y v_0 ;
      0    0   1   ]

where f_x, f_y are the focal lengths of the camera and [u_0, v_0] are the pixel coordinates of the camera optical axis in the image coordinate system.
Preferably, the distortion model of the laparoscopic camera is:

x_u = x_d + (x_d − x_c)(K_1 r² + K_2 r⁴) + P_1(r² + 2(x_d − x_c)²) + 2P_2(x_d − x_c)(y_d − y_c)
y_u = y_d + (y_d − y_c)(K_1 r² + K_2 r⁴) + P_2(r² + 2(y_d − y_c)²) + 2P_1(x_d − x_c)(y_d − y_c)

where [x_u, y_u] is the pixel after distortion correction, [x_d, y_d] the pixel before correction, [x_c, y_c] the distortion centre, r the distance from the pixel to the distortion centre, K_1, K_2 the radial distortion coefficients, and P_1, P_2 the tangential distortion coefficients.
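As an illustrative sketch (not part of the patent; the function names and sample numbers are assumptions), the intrinsic matrix and the distortion correction above can be written in Python as:

```python
import numpy as np

def intrinsic_matrix(fx, fy, u0, v0):
    """Pinhole intrinsic matrix K from focal lengths and principal point."""
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def undistort_point(xd, yd, xc, yc, K1, K2, P1, P2):
    """Apply the radial (K1, K2) and tangential (P1, P2) correction above."""
    dx, dy = xd - xc, yd - yc
    r2 = dx * dx + dy * dy          # r^2: squared distance to distortion centre
    radial = K1 * r2 + K2 * r2 ** 2  # K1*r^2 + K2*r^4
    xu = xd + dx * radial + P1 * (r2 + 2 * dx * dx) + 2 * P2 * dx * dy
    yu = yd + dy * radial + P2 * (r2 + 2 * dy * dy) + 2 * P1 * dx * dy
    return xu, yu
```

Note that at the distortion centre the correction vanishes, as the formulas require.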
Preferably, in step S100, the homogeneous transformation matrix of the camera relative to the robot end is obtained by hand–eye calibration.
The homogeneous transformation matrix has the form:

T = [ R  t ;
      0  1 ] ∈ SE(3)

where T is the homogeneous transformation matrix of the camera relative to the robot end, R is the rotation matrix, t is the displacement vector, and SE(3) is the special Euclidean group.
Preferably, in step S200, the point of the instrument located at the RCM point is denoted D, and three points A, B and C are marked on the instrument; the distances between A, B and C and their spatial coordinates are known, with point A at the instrument tip.
The projections A', B', C' and D' of A, B, C and D on the camera image plane are acquired.
Since the cross ratio is invariant under projection:

(A'C'/B'C') / (A'D'/B'D') = (AC/BC) / (AD/BD)

From the camera projection equation:

Z_A [u_A, v_A, 1]^T = K · A

the three-dimensional coordinate A = [X_A, Y_A, Z_A]^T of instrument marker point A is solved;
where [u_A, v_A]^T are the pixel coordinates of point A, K is the projection matrix, Z_A is the depth of marker point A in the laparoscope coordinate system, and [X_D, Y_D, Z_D]^T are the three-dimensional coordinates of point D.
Further preferably, step S300 specifically comprises the following steps:
S31, performing semantic segmentation on the surgical video and extracting the mask of the instrument tip;
S32, superimposing the extracted mask images to obtain the frequency distribution map of the instrument on the image;
S33, fitting and normalizing the instrument frequency distribution map with a Gaussian distribution to obtain the Gaussian parameters;
S34, taking the three-dimensional position of the instrument as input, constructing an evaluation function that judges the quality of the endoscope-moving field of view according to the weights of the different instruments.
Preferably, the evaluation function is:

J(x) = −W_S · G(p_i)

where J(x) is the evaluation function, W_S is the instrument weight (generally, the weights of the left and right instruments differ), G is the statistically derived Gaussian distribution, and p_i is the pixel location of the surgical instrument tip in the laparoscopic image.
More preferably, in step S400, the intuitiveness constraint model is expressed in terms of the following quantities: θ is the desired quantity and represents the angle of image rotation; the transformation matrix from the visual coordinate system to the laparoscopic camera coordinate system; X = [1, 0]^T is the X direction in the operating plane; α and β are constants whose sum is 1; Q_x = [1, 0, 0]^T is the matrix extracting the X direction of the rotation matrix, and Q_z = [0, 0, 1]^T is the matrix extracting the Z direction; R_z(θ) denotes rotation of the coordinate system about the Z axis by θ.
Preferably, in step S500, the laparoscopic pose optimization model is:

min J(x) = −W_S · G
s.t. f(x) = 0
     h(S_i, p_i, x) = 0
     x ∈ D

where J(x) is the evaluation function, W_S the instrument weight, f(x) the intuitiveness criterion model, h(S_i, p_i, x) the transformation from the three-dimensional coordinates of the instrument tip to the image pixel coordinates, S_i the three-dimensional coordinates of the instrument tip in the laparoscope coordinate system, p_i the pixel location of the surgical instrument tip in the laparoscopic image, D the reachable region of the laparoscope satisfying the RCM constraint, and x the degree-of-freedom variable of the laparoscope.
Further preferably, in step S600:
the RCM position error is converged using the control law

[v_x, v_y]^T = −λ · d

where [v_x, v_y]^T is the velocity of the instrument in the X and Y directions of the RCM coordinate system, λ is a gain parameter, and d is the distance vector from the RCM point to the laparoscope axis.
Control of the RCM position error is achieved by mapping this velocity into laparoscope motion, where [v_x, v_y]^T and [ω_x, ω_y]^T are respectively the linear and angular velocity of the laparoscope in the camera coordinate system, L is the distance from the front end point of the laparoscope to the RCM point, and the instrument velocity is expressed in the X and Y directions of the RCM coordinate system.
Further preferably, step S600 further comprises:
under the constraint of the intuitiveness constraint model, combining the desired laparoscope end pose and the current laparoscope end pose into a target pose through the control of the RCM position error, and transforming the target pose into a robot target pose;
the quantities involved are: the laparoscope target pose; the real-time laparoscope pose; the robot target pose; the transformation matrix from the robot base coordinate system to the RCM coordinate system; the transformation matrix from the laparoscope coordinate system to the robot end coordinate system; and the transformation matrix from the real-time laparoscope pose to the target pose.
More preferably, in step S600, the robot motion is controlled by the following control law:

x_{i+1} = x_i + Δx,  Δx = λ(x_d − x_i) with ‖Δx‖ limited to the threshold Δx_0

where x_i and x_{i+1} are the robot poses at the i-th and (i+1)-th instants, x_d is the desired robot pose, λ is the gain parameter, and Δx_0 is the speed threshold.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
1. The invention combines the endoscope-moving strategy of a laparoscopic surgical robot: the instruments are marked and their three-dimensional coordinates obtained by an image-based method; an evaluation function of the instrument position under the laparoscopic view is derived from the instrument frequency maps in historical data; an intuitiveness constraint model that the laparoscopic pose must satisfy is constructed, taking as constraint the projection of the X axis of the camera coordinate system onto the Y axis of the visual coordinate system; and the optimal laparoscopic pose is obtained under the double constraint of the RCM constraint and the intuitiveness constraint model, taken as the desired endoscope pose, and converted into a robot end pose. Since surgical video data is abundant, the method of deriving the endoscope-moving strategy is simple, applies to different types of surgical scenes, and has strong generalization capability.
2. By analysing surgical videos, the invention extracts an endoscope-moving strategy that matches the surgical scene, so that the strategy better conforms to the surgeon's intention. Since surgical video data is abundant, the method of deriving the endoscope-moving strategy is simple, applies to different types of surgical scenes, and has strong generalization capability.
3. By processing the laparoscopic image, the invention separates out the instrument markers, estimates the three-dimensional coordinates of the image marker points using the invariance of the cross ratio under projection, and, combined with the evaluation function obtained in advance, evaluates the quality of the endoscope view at each instant.
4. The invention provides an intuitiveness criterion, so that the view provided by the laparoscope better matches the operating intuition of the chief surgeon.
5. The invention considers the various constraints and the target pose in a unified way, ensuring that the robot motion satisfies the constraints at every instant, providing safe and stable laparoscope motion and ensuring patient safety.
Drawings
FIG. 1 is a flowchart of the autonomous endoscope-moving method of a laparoscopic surgical robot based on image experience according to a preferred embodiment of the present invention;
FIG. 2 is a schematic illustration of the principle of estimating the three-dimensional coordinates of the instrument markers in a preferred embodiment of the present invention;
FIG. 3 is a flowchart of the endoscope-moving strategy involved in a preferred embodiment of the present invention;
FIG. 4 is a flowchart of the robot motion control involved in a preferred embodiment of the present invention;
FIG. 5 is a block diagram of the robot control involved in a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in FIG. 1, the invention provides an autonomous endoscope-moving method for a laparoscopic surgical robot based on image experience: the laparoscopic image is processed to separate out the instrument markers; the three-dimensional coordinates of the image marker points are estimated by projection; an evaluation function judging the quality of the endoscope-moving field of view is constructed in combination with the frequency distribution maps of the instruments in historical laparoscopic surgery; and an endoscope-moving strategy matching the surgical scene is extracted under the RCM (Remote Centre of Motion) constraint, so that the strategy better conforms to the intention of the chief surgeon. Specifically, the method comprises the following steps:
S100, marking the instrument, carrying out multi-point calibration on the laparoscopic camera, and determining camera parameters and a homogeneous transformation matrix of the camera relative to the tail end of the robot.
That is, the instrument must be marked before the system operates. In one embodiment of the invention, the marking is made identifiable by affixing markers to the instrument. Of course, in other embodiments of the invention, other marking forms may be used, provided they can be recognized by the camera and their positions are fixed, so that the position information of the instrument can be accurately calculated.
Marking the instruments enables operations such as measuring the instrument geometry information, calibrating the intrinsic parameters of the laparoscopic camera, and calibrating the hand–eye transformation matrix.
In the invention, the intrinsic parameter matrix of the laparoscopic camera is:

K = [ f_x  0   u_0 ;
      0    f_y v_0 ;
      0    0   1   ]

where f_x, f_y are the focal lengths of the camera and [u_0, v_0] are the pixel coordinates of the camera optical axis in the image coordinate system.
The distortion model of the laparoscopic camera is:

x_u = x_d + (x_d − x_c)(K_1 r² + K_2 r⁴) + P_1(r² + 2(x_d − x_c)²) + 2P_2(x_d − x_c)(y_d − y_c)
y_u = y_d + (y_d − y_c)(K_1 r² + K_2 r⁴) + P_2(r² + 2(y_d − y_c)²) + 2P_1(x_d − x_c)(y_d − y_c)

where [x_u, y_u] is the pixel after distortion correction, [x_d, y_d] the pixel before correction, [x_c, y_c] the distortion centre, K_1, K_2 the radial distortion coefficients, and P_1, P_2 the tangential distortion coefficients.
Hand–eye calibration yields the homogeneous transformation matrix T of the camera relative to the robot end:

T = [ R  t ;
      0  1 ] ∈ SE(3)

where R is the rotation matrix, t is the displacement vector, and SE(3) is the special Euclidean group.
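The hand–eye result is consumed downstream as an ordinary SE(3) homogeneous transform. A minimal sketch (illustrative only; the helper names are assumptions, not part of the patent):

```python
import numpy as np

def homogeneous(R, t):
    """Assemble the 4x4 homogeneous transform [R t; 0 1] in SE(3)."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R, dtype=float)
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T

def invert(T):
    """Closed-form SE(3) inverse: [R^T, -R^T t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    return homogeneous(R.T, -R.T @ t)
```

Composing a transform with its closed-form inverse returns the identity, which is a convenient sanity check after calibration.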
S200, under the RCM constraint condition, estimating the three-dimensional coordinates of the instrument mark according to the image acquired by the laparoscope.
In the invention, at least 3 points are marked on the instrument. To identify the instrument end position more accurately, the instrument end is marked: the point of the instrument located at the RCM point is denoted D, and three points A, B and C are marked on the instrument; the distances between A, B and C and their spatial coordinates are known, with point A at the instrument tip. As shown in FIG. 2, A, B and C are the instrument markers with known distances, and D is the RCM point, whose coordinates in three-dimensional space are known. A', B', C' and D' are the projections of A, B, C and D on the camera image plane.
Since the cross ratio is invariant under projection:

(A'C'/B'C') / (A'D'/B'D') = (AC/BC) / (AD/BD)

From the camera projection equation:

Z_A [u_A, v_A, 1]^T = K · A

the three-dimensional coordinate A = [X_A, Y_A, Z_A]^T of instrument marker point A is solved;
where [u_A, v_A]^T are the pixel coordinates of point A, K is the projection matrix, Z_A is the depth of marker point A in the laparoscope coordinate system, and [X_D, Y_D, Z_D]^T are the three-dimensional coordinates of point D.
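The two facts used here — invariance of the cross ratio under projection, and back-projection of a pixel at known depth through K — can be sketched as follows (illustrative Python; the function names are assumptions, not part of the patent):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC)/(AD/BD) of four collinear points given as
    scalar coordinates along their common line."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def backproject(u, v, Z, K):
    """Recover [X, Y, Z] of a point at depth Z from its pixel (u, v),
    inverting Z [u, v, 1]^T = K [X, Y, Z]^T for a pinhole K."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    return np.array([(u - u0) * Z / fx, (v - v0) * Z / fy, Z])
```

Equating the cross ratio of A', B', C', D' with that of A, B, C, D pins down the unknown depth Z_A, after which `backproject` yields the full coordinate A.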
In this step, other methods for tracking the three-dimensional coordinates of the point a are also applicable to the present invention, such as a deep neural network for self-supervised learning, a method based on contour and optical flow features, and the like.
S300, acquiring the instrument frequency map of the surgical scene to obtain an evaluation function of the instrument position under the laparoscopic view. The surgical video is semantically segmented and the mask of the instrument tip extracted; the extracted mask images are superimposed to obtain the frequency distribution map of the instrument on the image; the frequency distribution map is fitted and normalized with a Gaussian distribution to obtain the Gaussian parameters; and, taking the three-dimensional position of the instrument as input, an evaluation function judging the quality of the endoscope-moving field of view is constructed according to the weights of the different instruments.
Specifically, as shown in FIG. 3, video data of a given type of operation is collected, and the mask of the instrument tip in each image is obtained with a general semantic segmentation scheme. The surgical video over a period of time is divided into image sets every fixed number of frames, and the instrument masks in the image sets are accumulated to obtain the frequency distribution map of the instruments on the image; this map represents the endoscope-moving strategy of the scope-holding assistant. A Gaussian distribution is fitted to smooth the frequency distribution map, which is then normalized to obtain the Gaussian parameters. Finally, an evaluation function is formed which takes the instrument position as input and judges the quality of the endoscope field of view:

J(x) = −W_S · G(p_i)   (6)

where J(x) is the evaluation function, W_S is the instrument weight (generally, the weights of the left and right instruments differ), G is the statistically derived Gaussian distribution, and p_i is the pixel location of the surgical instrument tip in the laparoscopic image.
Of course, in the embodiment of the present invention, the frequency distribution map of the instrument on the image may also be obtained by other algorithms, such as extracting the outline of the instrument image or according to the energy value distribution of the instrument pixels in the image, and the like, and the purpose is to obtain the frequency distribution map of the instrument on the image in the historical laparoscopic surgery, so as to provide the endoscope operation strategy of the endoscope assistant.
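Steps S32–S34 can be sketched under simplifying assumptions (a moment fit of the Gaussian stands in for the fitting procedure, which the patent does not specify; the function names are invented):

```python
import numpy as np

def fit_gaussian(freq):
    """Moment-fit a 2-D Gaussian to a frequency map of shape (H, W):
    mean and covariance of pixel coordinates weighted by frequency."""
    freq = freq / freq.sum()                       # normalize to a distribution
    ys, xs = np.mgrid[0:freq.shape[0], 0:freq.shape[1]]
    mu = np.array([(xs * freq).sum(), (ys * freq).sum()])
    dx, dy = xs - mu[0], ys - mu[1]
    cov = np.array([[(dx * dx * freq).sum(), (dx * dy * freq).sum()],
                    [(dx * dy * freq).sum(), (dy * dy * freq).sum()]])
    return mu, cov

def evaluate(p, mu, cov, weight=1.0):
    """J(x) = -W_S * G(p): lower (more negative) means a better view."""
    d = np.asarray(p, float) - mu
    g = np.exp(-0.5 * d @ np.linalg.solve(cov, d))  # unnormalized Gaussian
    return -weight * g
```

A tip sitting at the mode of the historical frequency map scores −W_S, the best possible value; tips far from it score near zero.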
S400, establishing the attitude of the laparoscopic camera coordinate system in the visual coordinate system, and constructing the intuitiveness constraint model that the laparoscopic pose must satisfy, taking as constraint the projection of the X axis of the camera coordinate system onto the Y axis of the visual coordinate system.
In this step, because of the RCM constraint, the laparoscopic pose cannot be matched exactly to the chief surgeon's visual coordinate system. The invention therefore provides an intuitiveness criterion that makes the laparoscope more intuitive for the surgeon to operate under the RCM constraint.
The intuitiveness criterion is expressed in terms of the following quantities: θ is the desired quantity and represents the angle of image rotation; the transformation matrix from the visual coordinate system to the laparoscopic camera coordinate system; X = [1, 0]^T is the X direction in the operating plane; α and β are constants satisfying α + β = 1, and their values differ when the laparoscope is in different poses; Q_x = [1, 0, 0]^T is the matrix extracting the X direction of the rotation matrix, and Q_z = [0, 0, 1]^T is the matrix extracting the Z direction; R_z(θ) denotes rotation of the coordinate system about the Z axis by θ.
S500, with the evaluation function as the objective, constructing a laparoscopic pose optimization model subject to the RCM constraint and the intuitiveness constraint model, so as to obtain the optimal laparoscopic pose.
In this step, based on the intuitiveness criterion of S400, taking the evaluation function of S300 that judges the quality of the endoscope view as the optimization objective and considering the RCM constraint of minimally invasive laparoscopic surgery, the following optimization problem is obtained:

min J(x) = −W_S · G
s.t. f(x) = 0
     h(S_i, p_i, x) = 0
     x ∈ D

where J(x) is the evaluation function, W_S the instrument weight, f(x) the intuitiveness criterion model, h(S_i, p_i, x) the transformation from the three-dimensional coordinates of the instrument tip to the image pixel coordinates, S_i the three-dimensional coordinates of the instrument tip in the laparoscope coordinate system, p_i the pixel location of the surgical instrument tip in the laparoscopic image, D the reachable region of the laparoscope satisfying the RCM constraint, and x the degree-of-freedom variable of the laparoscope.
A solution of the optimization problem can be obtained with a heuristic algorithm; the laparoscopic pose reflected by that solution is the pose conforming to the endoscope-moving strategy.
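The patent does not name a specific heuristic. As a stand-in, the sketch below simply scores a finite candidate set and keeps the best feasible pose (all names invented; a real solver would sample the laparoscope's degree-of-freedom variable x and check the RCM and intuitiveness constraints in `feasible`):

```python
def heuristic_search(score, feasible, candidates):
    """Return the best-scoring feasible candidate and its score.

    score:      callable x -> float, the objective J(x) to minimize
    feasible:   callable x -> bool, constraint check (RCM reachability etc.)
    candidates: iterable of candidate poses x
    """
    best, best_val = None, float("inf")
    for x in candidates:
        if not feasible(x):
            continue
        v = score(x)
        if v < best_val:
            best, best_val = x, v
    return best, best_val
```

With a quadratic toy objective and a non-negativity constraint, the search returns the unconstrained minimizer when it is feasible.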
S600, taking the optimal laparoscopic pose as the desired laparoscopic pose and converting it into the robot end pose. After the optimization is solved, as shown in FIGS. 1 and 4, a control strategy is needed to bring the laparoscope smoothly to the desired pose.
The desired laparoscopic pose obtained from the solution is converted, through the hand–eye transformation matrix of S100, into the desired robot end pose. To ensure that the robot motion does not injure the patient, the invention converges the RCM position error with the control law:

[v_x, v_y]^T = −λ · d

where [v_x, v_y]^T is the velocity of the instrument in the X and Y directions of the RCM coordinate system, λ is a gain parameter, and d is the distance vector from the RCM point to the laparoscope axis.
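The distance vector from the RCM point to the laparoscope axis — the error the convergence law drives to zero — is ordinary point-to-line geometry. An illustrative sketch (invented names, not part of the patent):

```python
import numpy as np

def rcm_error(p_rcm, axis_point, axis_dir):
    """Distance vector from the RCM point to the laparoscope axis,
    i.e. the component of (p_rcm - axis_point) orthogonal to the axis."""
    d = np.asarray(axis_dir, float)
    d = d / np.linalg.norm(d)                       # unit axis direction
    w = np.asarray(p_rcm, float) - np.asarray(axis_point, float)
    return w - (w @ d) * d                          # strip the along-axis part
```

When the axis passes through the RCM point the error vector is zero, so the control law commands no corrective velocity.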
In addition, in this step, control of the RCM position error is realized by the following formula:
where [ᶜv_x, ᶜv_y]^T and [ᶜω_x, ᶜω_y]^T are the linear and angular velocities of the laparoscope, respectively; L is the distance from the laparoscope front end point to the RCM point; and [ʳv_x, ʳv_y]^T is the velocity of the instrument along the X and Y axes of the RCM coordinate system.
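The control-law formulas themselves are not reproduced in this text, so the following is only a minimal sketch under the assumption that the RCM error is driven to zero by a proportional law v = -λe, with e the distance vector from the RCM point to the laparoscope axis:

```python
import numpy as np

lam = 0.5   # gain parameter λ (illustrative value)

def rcm_step(e, dt=0.1):
    """One integration step of the assumed proportional law v = -lam * e.

    e is the distance vector from the RCM point to the laparoscope axis;
    the actual control law in the patent is not reproduced in this text.
    """
    v = -lam * e          # instrument velocity in the RCM frame
    return e + v * dt     # error after one time step

e = np.array([5.0, -3.0])   # initial RCM error, illustrative units
for _ in range(100):
    e = rcm_step(e)
print(np.linalg.norm(e))
```

Under this assumption the error norm shrinks geometrically, which is the convergence behavior the step describes.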
The intuition criterion constraint ensures that the scope-moving process conforms to the intuition of the operating surgeon.
In a preferred embodiment of the present invention, under the constraint of the intuition constraint model, the expected laparoscope end pose and the current laparoscope end pose are combined, through control of the RCM position error, into a target pose, and the target pose is transformed into a robot target pose:
where the quantities are, in order: the laparoscope target pose; the laparoscope real-time pose; the robot target pose; the transformation matrix from the robot base coordinate system to the RCM coordinate system; the transformation matrix from the laparoscope coordinate system to the robot end coordinate system; and the transformation matrix from the laparoscope real-time pose to the target pose.
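The pose composition can be illustrated with homogeneous transforms. All frames and numeric values below are invented, and the multiplication order is an assumption, since the patent's matrix symbols are not reproduced in this text:

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the Z axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# All frames below are invented for the sketch.
T_base_rcm = trans(0.4, 0.0, 0.3)            # robot base -> RCM frame
T_rcm_lap  = rot_z(0.1) @ trans(0, 0, 0.15)  # RCM frame -> current laparoscope pose
T_now_goal = rot_z(0.2)                      # real-time pose -> target pose
T_lap_end  = trans(0, 0, -0.35)              # laparoscope frame -> robot end frame

# Laparoscope target pose, then the corresponding robot target pose.
T_rcm_goal   = T_rcm_lap @ T_now_goal
T_robot_goal = T_base_rcm @ T_rcm_goal @ T_lap_end
print(T_robot_goal[:3, 3])
```

Because every factor is a rigid transform, the result stays a valid homogeneous pose (orthonormal rotation block, [0, 0, 0, 1] bottom row).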
As shown in fig. 5, in order to avoid moving the scope so frequently that it disturbs the operating surgeon, and at the same time to limit the scope-moving speed, the present invention controls the robot motion with the following control law:
x_{i+1} = x_i + Δx
where x_i and x_{i+1} are the robot poses at instants i and i+1; x_d is the desired robot pose; λ is the gain parameter; and Δx_0 is a speed threshold set to prevent the laparoscope from moving too fast.
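A small sketch of such a rate-limited update, assuming (this is not stated explicitly in the text) that Δx is the gain-scaled pose error saturated at the threshold Δx_0:

```python
import numpy as np

lam, dx0 = 0.3, 0.05   # gain λ and speed threshold Δx_0 (illustrative values)

def step(x_i, x_d):
    """x_{i+1} = x_i + dx, with dx the gain-scaled error capped at dx0.

    The saturation form of dx is an assumption; the patent's expression
    for dx is not reproduced in this text.
    """
    dx = np.clip(lam * (x_d - x_i), -dx0, dx0)   # cap the per-step motion
    return x_i + dx

x, x_d = np.zeros(3), np.array([1.0, -0.5, 0.2])
for _ in range(200):
    x = step(x, x_d)
print(np.linalg.norm(x - x_d))
```

The cap keeps each step's motion below Δx_0 (a constant scope speed far from the goal), while the gain term gives smooth convergence near the goal.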
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.
Claims (10)
1. An autonomous endoscope moving method of a laparoscopic surgery robot based on image experience is characterized by comprising the following steps of:
s100, marking the instrument, performing multi-point calibration on the laparoscopic camera, and determining camera parameters and a homogeneous transformation matrix of the camera relative to the tail end of the robot;
s200, under the RCM constraint condition, estimating a three-dimensional coordinate of an instrument mark according to an image acquired by the laparoscope;
s300, acquiring an instrument frequency map of the surgical scene to obtain an evaluation function of the instrument position in the laparoscopic view;
s400, establishing the attitude of the laparoscope camera coordinate system in a visual coordinate system, and establishing an intuition constraint model that the laparoscope pose must satisfy, taking as the constraint the projection of the X axis of the camera coordinate system onto the Y axis of the visual coordinate system;
s500, constructing a laparoscope pose optimization model by taking the evaluation function as a target according to RCM constraint and an intuitiveness constraint model so as to obtain an optimal laparoscope pose;
s600, the optimal laparoscopic pose is used as an expected laparoscopic pose, and the expected laparoscopic pose is converted into a robot end pose.
2. The autonomous endoscope moving method of the laparoscopic surgery robot based on image experience of claim 1, wherein in step S100, the camera parameters comprise laparoscopic camera internal parameter coefficients and laparoscopic camera distortion coefficients;
Preferably, the internal reference coefficients of the laparoscopic camera are as follows:

K = [ f_x  0    u_0 ]
    [ 0    f_y  v_0 ]
    [ 0    0    1   ]

where f_x and f_y are the focal lengths of the camera, and [u_0, v_0] are the pixel coordinates of the camera optical axis in the image coordinate system;
preferably, the distortion coefficient of the laparoscopic camera is as follows:
x_u = x_d + (x_d - x_c)(K_1 r² + K_2 r⁴) + P_1(r² + 2(x_d - x_c)²) + 2 P_2 (x_d - x_c)(y_d - y_c)
y_u = y_d + (y_d - y_c)(K_1 r² + K_2 r⁴) + P_2(r² + 2(y_d - y_c)²) + 2 P_1 (x_d - x_c)(y_d - y_c)
where [x_u, y_u] is the pixel after distortion correction; [x_d, y_d] is the pixel before correction; [x_c, y_c] is the distortion center; r is the distance from the pixel to the distortion center; K_1 and K_2 are the radial distortion coefficients; and P_1 and P_2 are the tangential distortion coefficients.
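The correction in claim 2 can be applied per pixel as below; the values are illustrative, and r is taken as the distance from the pixel to the distortion center (x_c, y_c), an assumption the claim does not state explicitly:

```python
def undistort(xd, yd, xc, yc, K1, K2, P1, P2):
    """Radial + tangential correction of one pixel, as written in claim 2.

    r is taken as the distance from (x_d, y_d) to the distortion center
    (x_c, y_c) -- an assumption, since the claim does not define r.
    """
    dx, dy = xd - xc, yd - yc
    r2 = dx * dx + dy * dy             # r^2
    radial = K1 * r2 + K2 * r2 * r2    # K1*r^2 + K2*r^4
    xu = xd + dx * radial + P1 * (r2 + 2 * dx * dx) + 2 * P2 * dx * dy
    yu = yd + dy * radial + P2 * (r2 + 2 * dy * dy) + 2 * P1 * dx * dy
    return xu, yu

# With all coefficients zero the pixel is unchanged.
print(undistort(100.0, 80.0, 320.0, 240.0, 0.0, 0.0, 0.0, 0.0))
```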
3. The autonomous endoscope moving method of the laparoscopic surgery robot based on image experience of claim 1, wherein in step S100, a hand-eye calibration transformation matrix is used to construct a homogeneous transformation matrix of the camera with respect to the end of the robot, and the computation model of the homogeneous transformation matrix is as follows:
4. The autonomous endoscope moving method of the laparoscopic surgery robot based on image experience of claim 1, wherein in step S200, the point of the instrument at the RCM point is marked as point D, three points A, B and C are marked on the instrument, the mutual distances and spatial coordinates of points A, B and C are known, and point A is located at the tip of the instrument;
Respectively acquiring projection points A ', B', C 'and D' of A, B, C and D on a camera image plane;
according to the principle that the cross ratio in the projection is unchanged, the following results can be obtained:
from the camera projection equation, one can obtain:
thereby solving for the three-dimensional coordinate A = [X_A, Y_A, Z_A]^T of instrument marker point A;
where [u_A, v_A]^T is the pixel coordinate of point A; K is the projection matrix; and Z_A is the depth of instrument marker point A in the laparoscope coordinate system.
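Given the projection matrix K and the depth Z_A, the back-projection in claim 4 reduces to A = Z_A · K⁻¹ · [u_A, v_A, 1]^T. A sketch with invented intrinsics and measurements:

```python
import numpy as np

# Invented intrinsics K (the projection matrix) and measurements.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
uA, vA = 400.0, 300.0   # pixel coordinates of marker point A
ZA = 0.12               # depth of A in the laparoscope frame, assumed known
                        # (in the method it comes from the cross-ratio step)

# From Z_A * [u, v, 1]^T = K * A:  A = Z_A * K^{-1} * [u, v, 1]^T
A = ZA * np.linalg.inv(K) @ np.array([uA, vA, 1.0])
print(A)
```

Re-projecting A through K and dividing by depth recovers the original pixel, which is a quick consistency check on the reconstruction.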
5. The autonomous endoscope moving method of the laparoscopic surgical robot based on image experience of claim 1, wherein the step S300 specifically comprises the following steps:
s31, performing semantic segmentation on the operation video, and extracting the mask at the tip of the instrument;
s32, overlapping the extracted mask images to obtain a frequency distribution map of the instrument on the image;
s33, fitting and normalizing the frequency distribution graph of the instrument by Gaussian distribution to obtain parameters of the Gaussian distribution;
s34, taking the three-dimensional position of the instrument as input, and constructing an evaluation function for judging the quality of the scope-moving field of view according to the weights of the different instruments;
preferably, the evaluation function is:
J(x) = -W_S · G(p_i)
where J(x) is the evaluation function; W_S is the weight of the instrument; G is the statistically derived Gaussian distribution; and p_i is the pixel location of the surgical instrument tip in the laparoscopic image.
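Steps S31–S34 can be sketched end to end on synthetic data. The mask stack, the moment-based Gaussian fit (a simplification of whatever fitting procedure the patent uses), and the single-instrument weight are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# S31-S32: overlay of synthetic binary tip masks -> on-image frequency map.
H, W = 120, 160
freq = np.zeros((H, W))
for _ in range(300):
    u = rng.normal(80.0, 10.0)   # tips cluster near pixel (80, 60)
    v = rng.normal(60.0, 8.0)
    ui, vi = int(round(u)), int(round(v))
    if 0 <= vi < H and 0 <= ui < W:
        freq[vi, ui] += 1

# S33: fit a Gaussian by moments (a simplification) and normalize.
vs, us = np.indices(freq.shape)
total = freq.sum()
mu = np.array([(us * freq).sum(), (vs * freq).sum()]) / total
var = (((us - mu[0]) ** 2 + (vs - mu[1]) ** 2) * freq).sum() / (2 * total)

def G(p):
    """Normalized Gaussian score (G(mu) = 1) of a pixel location p."""
    return np.exp(-np.sum((np.asarray(p, float) - mu) ** 2) / (2 * var))

# S34: evaluation function J = -W_S * G(p_i) for a single instrument.
W_S = 1.0
def J(p_i):
    return -W_S * G(p_i)

print(mu, J(mu), J([10.0, 10.0]))
```

The fitted center scores best (most negative J), so minimizing J drives the tip pixels toward the statistically preferred region of the image.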
6. The autonomous endoscope moving method of the laparoscopic surgery robot based on image experience of claim 1, wherein in step S400, the intuitive constraint model is:
where θ is the image rotation angle; the transformation matrix maps the visual coordinate system to the laparoscope camera coordinate system; X = [1, 0]^T is the X direction in the operating plane; α and β are constants whose sum is 1; Q_x = [1, 0, 0]^T is the matrix extracting the X-direction component of the rotation matrix, and Q_z = [0, 0, 1]^T is the matrix extracting the Y-direction component; R_z(θ) denotes a rotation of the coordinate system about the Z axis by θ.
7. The autonomous endoscope moving method of the laparoscopic surgery robot based on image experience of claim 1, wherein in step S500, the laparoscope pose optimization model is:
min J(x) = -W_S · G(p_i)
s.t. f(x) = 0
h(S_i, p_i, x) = 0
x ∈ D
where J(x) is the evaluation function; W_S is the instrument weight; f(x) is the intuition criterion model; h(S_i, p_i, x) is the transformation from the three-dimensional coordinates of the instrument tip to image pixel coordinates; S_i is the three-dimensional coordinate of the instrument tip in the laparoscope coordinate system; p_i is the pixel location of the surgical instrument tip in the laparoscopic image; D is the reachable region of the laparoscope satisfying the RCM constraint; and x is the degree-of-freedom variable of the laparoscope.
8. The autonomous endoscope moving method of the laparoscopic surgical robot based on image experience according to any one of claims 1 to 7, wherein in the step S600:
The RCM position error is converged using the following control law:
where [ʳv_x, ʳv_y]^T is the velocity of the instrument along the X and Y axes of the RCM coordinate system; λ is a gain parameter; and the remaining vector is the distance vector from the RCM point to the laparoscope axis.
Control of the RCM position error is achieved by the following equation:
where [ᶜv_x, ᶜv_y]^T and [ᶜω_x, ᶜω_y]^T are the linear and angular velocities of the laparoscope, respectively; L is the distance from the laparoscope front end point to the RCM point; and [ʳv_x, ʳv_y]^T is the velocity of the instrument along the X and Y axes of the RCM coordinate system.
9. The autonomous endoscope moving method of the laparoscopic surgery robot based on image experience of claim 8, wherein the step S600 further comprises the steps of:
under the constraint of the intuition constraint model, combining the expected laparoscope end pose and the current laparoscope end pose into a target pose through control of the RCM position error, and converting the target pose into a robot target pose:
where the quantities are, in order: the laparoscope target pose; the laparoscope real-time pose; the robot target pose; the transformation matrix from the robot base coordinate system to the RCM coordinate system; the transformation matrix from the laparoscope coordinate system to the robot end coordinate system; and the transformation matrix from the laparoscope real-time pose to the target pose.
10. The autonomous endoscope moving method of the laparoscopic surgical robot based on image experience according to any one of claims 1 to 8, wherein in step S600, the following control law is adopted to control the robot movement:
x_{i+1} = x_i + Δx
where x_i and x_{i+1} are the robot poses at instants i and i+1; x_d is the desired robot pose; λ is the gain parameter; and Δx_0 is a speed threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210338000.7A CN114748169A (en) | 2022-03-31 | 2022-03-31 | Autonomous endoscope moving method of laparoscopic surgery robot based on image experience |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114748169A (en) | 2022-07-15 |
Family
ID=82329880
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210338000.7A Pending CN114748169A (en) | 2022-03-31 | 2022-03-31 | Autonomous endoscope moving method of laparoscopic surgery robot based on image experience |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114748169A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117103286A (en) * | 2023-10-25 | 2023-11-24 | 杭州汇萃智能科技有限公司 | Manipulator eye calibration method and system and readable storage medium |
CN117103286B (en) * | 2023-10-25 | 2024-03-19 | 杭州汇萃智能科技有限公司 | Manipulator eye calibration method and system and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||