CN117017493A - Method and device for determining sleeve pose of surgical robot system - Google Patents


Info

Publication number
CN117017493A
Authority
CN
China
Prior art keywords: sleeve, dimensional data, cannula, pose, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311099892.0A
Other languages
Chinese (zh)
Inventor
虞苏璞
张阳
谢强
Current Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority claimed from application CN202311099892.0A
Publication of CN117017493A


Classifications

    • A61B34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B34/30: Surgical robots
    • A61B34/70: Manipulators specially adapted for use in surgery
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • A61B2034/2046: Tracking techniques
    • A61B2034/2065: Tracking using image or pattern recognition
    • A61B2034/302: Surgical robots specifically adapted for manipulations within body cavities, e.g. within abdominal or thoracic cavities
    • G06T2207/10028: Range image; depth image; 3D point clouds


Abstract

The application relates to a method and a device for determining the pose of a cannula of a surgical robot system. The method includes: acquiring three-dimensional data of at least one cannula; and registering the three-dimensional data of the at least one cannula with three-dimensional data of a reference cannula, and respectively determining the pose of the at least one cannula. With the technical solution provided by the embodiments of the application, automatic docking between a robotic arm and a cannula can be achieved, which reduces the complexity of the docking procedure and improves docking efficiency.

Description

Method and device for determining sleeve pose of surgical robot system
Technical Field
The application relates to the technical field of medical devices, and in particular to a method and a device for determining the pose of a cannula of a surgical robot system.
Background
Surgical robots, particularly laparoscopic surgical robots, are now established on the medical device market. To perform an operation with a surgical robot, only 3-4 incisions of 5-10 mm need to be made on the patient's body surface; an endoscope is inserted into the patient, and the surgical instruments are operated under the guidance of the endoscopic image. Compared with traditional open surgery, robot-assisted surgery offers smaller wounds, less pain, faster recovery, and a lower infection rate.
To prevent a surgical instrument or imaging device from injuring the patient's wound while moving, a surgical robot usually requires a cannula (trocar) between the instrument or imaging device and the wound. One end of the cannula is inserted into the body; the other end remains outside the body and is connected to a robotic arm. Because this connection is detachable, before each operation a member of staff must manually dock each robotic arm with its cannula: the arms are unfolded one by one, and the height and posture of each arm are adjusted repeatedly until it is docked with the corresponding cannula. This procedure is both cumbersome and time-consuming.
There is therefore a need in the art for an efficient, automated method of docking a robotic arm with a cannula.
Disclosure of Invention
The embodiments of the present application provide a method and a device for determining the cannula pose of a surgical robot system, so as to at least solve the problems of cumbersome manual cannula docking and low efficiency in the related art.
In a first aspect, an embodiment of the present application provides a method for determining a cannula pose of a surgical robot system, including:
acquiring three-dimensional data of at least one cannula; and
registering the three-dimensional data of the at least one cannula with three-dimensional data of a reference cannula, and respectively determining the pose of the at least one cannula.
In the embodiments of the present application, the three-dimensional data of the cannula can be registered with the three-dimensional data of the reference cannula to determine the pose of the cannula. The technical solution constructs a reference model of the cannula, namely the reference cannula, so that the pose of the cannula relative to the reference cannula can be obtained from the three-dimensional data of both, and the cannula pose is thereby acquired automatically. In addition, the determined cannula pose can serve as a navigation basis for the automatic positioning of the robotic arm, so that the robotic arm and the cannula are docked automatically, which reduces the complexity of the docking procedure and improves its efficiency.
Optionally, in an embodiment of the present application, the acquiring three-dimensional data of at least one cannula includes:
acquiring three-dimensional data of a surgical environment, wherein the surgical environment contains at least one cannula; and
extracting the three-dimensional data of the at least one cannula from the three-dimensional data of the surgical environment.
Optionally, in an embodiment of the present application, the registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula and respectively determining the pose of the at least one cannula includes:
registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula to obtain respective registration results;
determining a registration error of each registration result;
if the registration error is greater than a preset threshold, adaptively adjusting the three-dimensional data of the reference cannula so that its morphological parameters match those of the three-dimensional data of the cannula; and
registering the three-dimensional data of the cannula with the adaptively adjusted three-dimensional data of the reference cannula until the registration error is less than or equal to the preset threshold, and respectively determining the pose of the at least one cannula.
Optionally, in an embodiment of the present application, the registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula and respectively determining the pose of the at least one cannula includes:
adaptively adjusting the three-dimensional data of the reference cannula so that its morphological parameters match those of the three-dimensional data of the cannula; and
registering the three-dimensional data of the cannula with the adaptively adjusted three-dimensional data of the reference cannula, and respectively determining the pose of the at least one cannula.
Optionally, in an embodiment of the present application, the registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula and respectively determining the pose of the at least one cannula includes:
coarsely registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula to determine a coarse registration transformation; and
finely registering, based on the coarse registration transformation, the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula to determine a fine registration transformation.
Optionally, in an embodiment of the present application, after the determining the pose of the at least one cannula, the method further includes:
adjusting the pose of at least one robotic arm of the surgical robot so that the pose of the at least one robotic arm matches the pose of the corresponding cannula.
Optionally, in an embodiment of the present application, after the determining the pose of the at least one cannula, the method further includes:
respectively determining the position, relative to the corresponding cannula, of the surgical port through which each of the at least one cannula is inserted.
Optionally, in an embodiment of the present application, the determining the position of the surgical port through which the at least one cannula is inserted relative to the corresponding cannula includes:
respectively determining the curved surface of the surgical port through which each of the at least one cannula is inserted; and
respectively determining the intersection point of the main axis of each of the at least one cannula with the corresponding curved surface.
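The intersection step above reduces, in the simplest case, to intersecting a line with a surface. As a hedged sketch only (not the patent's implementation), the port surface can be approximated locally by a plane fitted around the port, after which the intersection with the cannula's main axis is closed-form; the function name and parameters below are assumptions:

```python
import numpy as np

def axis_plane_intersection(axis_point, axis_dir, plane_point, plane_normal):
    """Intersect the cannula's main axis (a 3D line) with a local planar
    approximation of the body-surface patch around the surgical port."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    denom = float(np.dot(axis_dir, plane_normal))
    if abs(denom) < 1e-9:
        return None  # axis (nearly) parallel to the surface patch
    t = float(np.dot(plane_point - axis_point, plane_normal)) / denom
    return axis_point + t * axis_dir
```

For a genuinely curved port surface, the same idea applies with a local quadric or mesh in place of the plane.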
In a second aspect, an embodiment of the present application provides a cannula pose determining apparatus of a surgical robot system, including:
a data acquisition module, configured to acquire three-dimensional data of at least one cannula; and
a data registration module, configured to register the three-dimensional data of the at least one cannula with three-dimensional data of a reference cannula, and respectively determine the pose of the at least one cannula.
In a third aspect, embodiments of the present application provide a processing device comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program to perform the above cannula pose determination method of the surgical robot system.
In a fourth aspect, embodiments of the present application provide a surgical robot system comprising at least one surgical instrument, at least one cannula, at least one robotic arm, and the above processing device, wherein:
the cannula is inserted through a surgical port and is used to sheathe the surgical instrument;
the surgical instrument is connected to the end of the robotic arm and is used to perform the operation; and
the robotic arm is coupled to the cannula and is used to control the movement of the surgical instrument.
Optionally, in an embodiment of the present application, the surgical robot system further comprises a driving system configured to plan a motion path for the robotic arm according to the pose of the cannula and the initial pose of the robotic arm, so that the pose of the at least one robotic arm matches the pose of the corresponding cannula;
correspondingly, the robotic arm is further configured to move along the planned motion path.
In a fifth aspect, embodiments of the present application provide a computer storage medium having a computer program stored therein, wherein the computer program, when run, performs the above cannula pose determination method of the surgical robot system.
In a sixth aspect, embodiments of the present application provide a computer program product comprising computer-readable code, or a non-transitory computer-readable storage medium carrying computer-readable code, which, when run in a processor of an electronic device, performs the above cannula pose determination method of the surgical robot system.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects, and advantages of the application will become apparent from the description, the drawings, and the claims.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with the description, serve to explain the application; they do not limit the application. In the drawings:
fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a method for determining the pose of a cannula of a surgical robot system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 4 is a block diagram of a cannula pose determination device of a surgical robotic system according to an embodiment of the present application;
FIG. 5 is a block diagram of a processing apparatus according to one embodiment of the present application;
fig. 6 is a block diagram of a computer program product according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples in order to make its objects, technical solutions, and advantages more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided herein without inventive effort fall within the scope of protection of the application. Moreover, it should be appreciated that although such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
In addition, numerous specific details are set forth in the following description in order to provide a better illustration of the application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, devices, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present application.
In order to clearly show the technical solutions of the various embodiments of the present application, one of the exemplary scenarios of the embodiments of the present application is described below by means of fig. 1.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a cannula pose determining system of a surgical robot system according to an embodiment of the present application. The system includes an acquisition device 101 and a cannula pose determining device 103. The acquisition device 101 communicates with the cannula pose determining device 103 to send the acquired surgical-environment image to the latter, and the cannula pose determining device 103 then determines the pose of the cannula 105.
The acquisition device 101 may be an electronic device having data acquisition and data transceiving capabilities. For example, it may be an electronic device capable of acquiring three-dimensional information of the surgical environment, such as a depth camera (including cameras based on time of flight (TOF), structured light, or binocular stereo vision), a computed tomography (CT) device, or a magnetic resonance imaging device; the application is not limited in this regard. The acquisition device 101 is mainly used to acquire three-dimensional data of the cannula 105 once the cannula 105 has been inserted into a surgical port of the patient 107. Accordingly, the acquisition device 101 may be aimed at the position of the cannula 105; for example, it may be mounted on the surgical robot 109 shown in fig. 1, on a surgical lamp, or at any other position from which the cannula 105 can be observed, which is not limited here.
The cannula pose determining device 103 may be an electronic device with data processing and data transceiving capabilities. It may be a physical device such as a host, a rack server, or a blade server; a virtual device such as a virtual machine or a container; or the control end of the surgical robot 109. After acquiring the three-dimensional data of the cannula 105, the cannula pose determining device 103 may register it with the three-dimensional data of the reference cannula 301 and determine the pose information of the cannula 105.
It should be noted that the cannula pose determining device 103 may also be integrated in the acquisition device 101; for example, a depth camera may complete the whole workflow of acquiring the surgical-environment image and determining the cannula pose of the surgical robot system. The embodiments of the present application are not limited in this regard.
The method for determining the cannula pose of the surgical robot system according to the present application is described in detail below with reference to the accompanying drawings. Fig. 2 is a flow chart of an embodiment of the method. Although the application provides the method steps shown in the following examples or figures, the method may include more or fewer steps based on routine or non-inventive effort. Where steps have no logically necessary causal relationship, their execution order is not limited to that provided by the embodiments of the present application. In an actual pose-determination process, the method may be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the order shown in the embodiments or drawings.
Specifically, as shown in fig. 2, an embodiment of a method for determining a cannula pose of a surgical robot system according to the present application may include:
s201: three-dimensional data of at least one cannula 105 is acquired.
In embodiments of the present application, the three-dimensional data of the cannula 105 is the basis for determining its pose. In an actual surgical scenario, several different surgical instruments often need to be inserted into the patient to perform an operation together; therefore, as shown in fig. 1, one or more cannulas 105 may be inserted at the corresponding parts of the patient according to the surgical requirements, and the insertion of each cannula 105 forms a surgical port on the patient's body surface. After the position of the at least one cannula 105 is fixed, its three-dimensional data can be acquired.
As described above, the acquisition device 101 may include, for example, a depth camera, a computed tomography (CT) device, a magnetic resonance imaging device, or several different kinds of electronic devices. The three-dimensional data may correspondingly include depth image data, CT volume images, magnetic resonance images, and the like; it may of course also include three-dimensional data obtained by processing data acquired with other acquisition devices 101, such as three-dimensional geometric models. The application is not limited in this regard.
In a practical application scenario, the three-dimensional data acquired by the acquisition device 101 may be three-dimensional data of a surgical environment containing the at least one cannula 105. Besides the at least one cannula 105, the surgical environment may also contain other objects such as the surgical site, the operating table, sterile drapes, the floor, and surgical lamps. Based on this, in an embodiment of the present application, the acquiring three-dimensional data of the at least one cannula 105 may include:
S301: acquiring three-dimensional data of a surgical environment containing at least one cannula 105;
S303: extracting the three-dimensional data of the at least one cannula 105 from the three-dimensional data of the surgical environment.
In an embodiment of the present application, after the cannula pose determining apparatus 103 acquires the three-dimensional data of the surgical environment, it may extract from it the three-dimensional data of the at least one cannula 105. In one embodiment, a cluster segmentation algorithm may be employed for the extraction, for example a cluster segmentation algorithm based on features such as color, normal vectors, or feature descriptors. The manner of extracting the three-dimensional data of the cannula 105 is not limited to these examples; semantics-based image segmentation algorithms, such as panoptic segmentation, instance segmentation, or semantic segmentation, may also be used. Other variations will occur to those skilled in the art in light of the present teachings, and all of them fall within the scope of protection of the present application as long as the functions and effects achieved are the same as or similar to those of the present application.
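As a rough illustration of the cluster-segmentation idea (a minimal sketch, not the patent's implementation: real pipelines use k-d trees and richer features such as color or normals), a purely Euclidean clustering of a point cloud could look like this; the function name and thresholds are assumptions:

```python
import numpy as np

def euclidean_clusters(points, radius, min_size):
    """Greedy Euclidean clustering: grow each cluster by repeatedly
    absorbing points within `radius` of a point already in the cluster;
    clusters smaller than `min_size` (stray noise) are discarded.
    The O(n^2) neighbour search is for clarity only."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        frontier = [remaining.pop(0)]
        cluster = []
        while frontier:
            i = frontier.pop()
            cluster.append(i)
            near = [j for j in remaining
                    if np.linalg.norm(points[j] - points[i]) < radius]
            for j in near:
                remaining.remove(j)
            frontier.extend(near)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters
```

On the segmented surgical-environment cloud, the cluster whose shape matches a cannula (e.g., an elongated cylinder) would then be selected as the cannula's three-dimensional data.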
S203: the three-dimensional data of the at least one cannula 105 is registered with the three-dimensional data of the reference cannula 301, and the pose of the at least one cannula 105 is determined separately.
In an embodiment of the present application, after the three-dimensional data of the at least one cannula 105 is acquired, the three-dimensional data of the at least one cannula 105 may be registered with the three-dimensional data of the reference cannula 301. Wherein the reference casing 301 may comprise a physical casing or casing model having the same appearance parameters as the appearance parameters of the at least one casing 105. The three-dimensional data of the at least one cannula 105 is registered with the three-dimensional data of the reference cannula 301 as reference data, and the positional conversion relations between the respective cannulas 105 and the reference cannula 301, for example, including the rotational translation matrix T, are determined, respectively. That is, after the three-dimensional data of the sleeve 105 is converted in accordance with the positional conversion relation, it may coincide with the three-dimensional data of the reference sleeve 301. Thus, the positional conversion relationship can be regarded as the pose of the sleeve 105. In one embodiment of the present application, in the case where the three-dimensional data includes three-dimensional point cloud data, the manner of registering the two three-dimensional point cloud data may include a principal component analysis method, a characteristic point method, and the like. In other embodiments, when the three-dimensional data is non-point cloud data, the three-dimensional data may be sampled, the three-dimensional data is converted into point cloud data, and on the basis, the two point cloud data are registered.
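For point pairs with known correspondences, the rotation-translation matrix T described above has a closed-form least-squares solution (the Kabsch/Umeyama construction). The sketch below is illustrative only, assumes row-wise corresponding points, and is not prescribed by the patent:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch, no scaling) mapping
    corresponding cannula points `src` onto reference points `dst`.
    Returns a 4x4 homogeneous matrix T with dst ~= R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Real cannula clouds have no known correspondences, which is exactly why the coarse and fine registration steps discussed below are needed; this solver is typically the inner building block of those steps.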
In one embodiment of the application, the registration may comprise two steps: coarse registration and fine registration. Specifically, the registering the three-dimensional data of the at least one cannula 105 with the three-dimensional data of the reference cannula 301 and respectively determining the pose of the at least one cannula 105 may include:
S401: coarsely registering the three-dimensional data of the at least one cannula 105 with the three-dimensional data of the reference cannula 301 to determine a coarse registration transformation;
S403: finely registering, based on the coarse registration transformation, the three-dimensional data of the at least one cannula 105 with the three-dimensional data of the reference cannula 301 to determine a fine registration transformation.
In an embodiment of the present application, the three-dimensional data of the cannula 105 and of the reference cannula 301 may first be coarsely registered. Coarse registration aligns three-dimensional data whose relative positional relationship is unknown, and provides an initial value for the subsequent fine registration. In some embodiments, the coarse registration may be implemented with registration algorithms based on exhaustive search or on feature matching. Exhaustive-search-based algorithms traverse the transformation space to select the transformation that minimizes an error function, or enumerate the transformation satisfied by the largest number of point pairs; examples include the RANSAC registration algorithm, the 4-Point Congruent Set (4PCS) algorithm, and the Super4PCS algorithm. Feature-matching-based algorithms construct correspondences between the three-dimensional data according to the morphological characteristics of the cannula 105 and then estimate the transformation with a suitable solver; examples include Sample Consensus Initial Alignment (SAC-IA) based on Fast Point Feature Histograms (FPFH), Fast Global Registration (FGR), AO algorithms based on Signatures of Histograms of Orientations (SHOT) features, and the Iterative Closest Line (ICL) algorithm based on line features. The application is not limited in this regard.
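One simple way to obtain such a coarse initial transformation is to align the centroids and principal axes of the two clouds (principal component analysis, mentioned earlier as a registration option). This is a hedged illustration only: eigenvector sign ambiguity can leave the result flipped by 180 degrees, which is one reason a fine registration step follows. Function names are assumptions:

```python
import numpy as np

def pca_coarse_align(src, dst):
    """Coarse registration by aligning centroids and principal axes of two
    point clouds. Only meant to seed a subsequent fine registration."""
    def frame(pts):
        c = pts.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov((pts - c).T))  # orthonormal axes
        if np.linalg.det(vecs) < 0:                    # force right-handed
            vecs[:, 0] *= -1
        return c, vecs
    c_s, F_s = frame(src)
    c_d, F_d = frame(dst)
    R = F_d @ F_s.T                                    # rotate src axes onto dst axes
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = c_d - R @ c_s
    return T
```

The cannula's elongated shape gives it a distinct first principal component, which is what makes this kind of axis alignment plausible here.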
After determining the coarse registration transformation relationship, the three-dimensional data of the cannula 105 and the three-dimensional data of the reference cannula 301 may be finely registered based on the coarse registration transformation relationship. The fine registration minimizes the spatial position differences between the three-dimensional data, starting from the coarse registration. In some embodiments, the algorithm implementing the fine registration may include the iterative closest point (Iterative Closest Point, ICP) algorithm as well as various ICP variants, such as robust ICP, point-to-plane ICP, point-to-line ICP, MBICP, GICP, NICP, and so on.
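To make the fine-registration step concrete, here is a minimal point-to-point ICP sketch, written in 2-D for brevity and for illustration only (the patent names ICP and its variants but gives no code; the point sets and helper names below are invented). Each iteration matches every source point to its nearest target point, solves the closed-form rigid alignment, and repeats.

```python
import math

def best_rigid_2d(src, dst):
    # Closed-form 2-D rigid alignment (Kabsch) for paired point lists.
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n
    cdy = sum(q[1] for q in dst) / n
    dot = cross = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay, bx, by = px - csx, py - csy, qx - cdx, qy - cdy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    return c, s, cdx - (c * csx - s * csy), cdy - (s * csx + c * csy)

def icp(source, target, iters=20):
    # Point-to-point ICP: nearest-neighbour matching + closed-form update.
    cur = list(source)
    for _ in range(iters):
        pairs = [min(target, key=lambda q, p=p: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                 for p in cur]
        c, s, tx, ty = best_rigid_2d(cur, pairs)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    rmse = math.sqrt(sum(min((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 for q in target)
                         for p in cur) / len(cur))
    return cur, rmse

# L-shaped target; the source is the same shape under a small rigid motion,
# standing in for the initial value supplied by the coarse registration.
target = [(float(i), 0.0) for i in range(5)] + [(0.0, float(i)) for i in range(1, 5)]
th = math.radians(5.0)
source = [(math.cos(th) * x - math.sin(th) * y + 0.3,
           math.sin(th) * x + math.cos(th) * y - 0.2) for x, y in target]
aligned, rmse = icp(source, target)
```

Production implementations replace the brute-force nearest-neighbour search with a k-d tree and use the point-to-plane or generalized-ICP error terms the text lists above.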
In practical applications, due to occlusion, pose, and similar problems of the cannula 105, the three-dimensional data collected for the at least one cannula 105 may cover only part of the cannula 105. Based on this, in order to improve the accuracy of registration between three-dimensional data, in an embodiment of the present application, the registering the three-dimensional point cloud data of the at least one cannula 105 with the three-dimensional point cloud data of the reference cannula 301 and respectively determining the pose of the at least one cannula 105 may include:
S501: registering the three-dimensional data of the at least one cannula 105 with the three-dimensional data of the reference cannula 301 to obtain respective registration results;
S503: determining a registration error of the registration result;
S505: in the case that the registration error is determined to be greater than a preset threshold, adaptively adjusting the three-dimensional data of the reference cannula 301 so that the three-dimensional data of the reference cannula 301 matches the morphological parameters of the three-dimensional data of the cannula 105;
S507: registering the three-dimensional data of the cannula 105 with the adaptively adjusted three-dimensional data of the reference cannula 301 until the registration error is less than or equal to the preset threshold, and respectively determining the pose of the at least one cannula 105.
In the embodiment of the present application, when the three-dimensional data of the at least one cannula 105 is registered with the three-dimensional data of the reference cannula 301, a registration result corresponding to each cannula 105 may be obtained. A registration error corresponding to the registration result may then be determined, for example using the root mean square error (RMSE). Specifically, the three-dimensional data of the cannula 105 may be converted according to the registration result by rotation and/or translation, etc., to obtain converted three-dimensional data, and the root mean square error between the converted three-dimensional data and the three-dimensional data of the reference cannula 301 may be determined. In the case that the registration error is determined to be greater than the preset threshold, the three-dimensional data of the reference cannula 301 may be adaptively adjusted so that it matches the morphological parameters of the three-dimensional data of the cannula 105. The morphological parameters include, for example, size, shape, volume, etc., which are not limited herein. The above embodiment is illustrated with the example shown in fig. 3: suppose that, with the registration error determined to be greater than the preset threshold, the length of the reference cannula 301 is 20 cm while the cannula 105 whose pose is to be determined appears only 5 cm long in the collected data due to occlusion, viewing angle, etc. The three-dimensional data of the reference cannula 301 can then be cut to 5 cm so that the remaining portion is consistent with the three-dimensional data of the cannula 105, and the registration result 303 can finally be determined.
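The RMSE check described above can be sketched in a few lines of Python (illustrative only, 2-D for brevity; the transform tuple layout and the nearest-neighbour pairing rule are assumptions, since the patent does not fix how points are paired):

```python
import math

def registration_rmse(source, reference, transform):
    # Apply the estimated rigid transform (2-D rotation c, s plus
    # translation tx, ty) to the source points, then take the root mean
    # square of each moved point's distance to its nearest reference point.
    c, s, tx, ty = transform
    moved = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in source]
    sq = [min((qx - px) ** 2 + (qy - py) ** 2 for qx, qy in reference)
          for px, py in moved]
    return math.sqrt(sum(sq) / len(sq))

reference = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
shifted = [(x + 0.5, y) for x, y in reference]
identity = (1.0, 0.0, 0.0, 0.0)
err = registration_rmse(shifted, reference, identity)                 # poor fit
fixed = registration_rmse(shifted, reference, (1.0, 0.0, -0.5, 0.0))  # exact fit
```

A registration whose RMSE stays above the preset threshold is what triggers the adaptive adjustment of step S505.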
Of course, in other embodiments, the adjustment is not limited to cutting the length of the reference cannula 301; any portion of the reference cannula 301 may be cut, for example, the funnel portion of the cannula 105 may be beveled off. The manner of adjusting the morphological parameters of the reference cannula 301 is not limited in the present application. Since the registration of three-dimensional data is an iterative process, after the three-dimensional data of the reference cannula 301 is adaptively adjusted, the three-dimensional data of the cannula 105 and the adaptively adjusted three-dimensional data of the reference cannula 301 may be registered again until the registration error is less than or equal to the preset threshold.
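The cutting of the reference model in the fig. 3 example can be sketched as follows (illustrative Python; it assumes the point clouds are expressed in a frame whose third coordinate runs along the cannula's principal axis, which the patent does not mandate, and the function name is invented):

```python
def trim_reference(reference_pts, observed_pts, axis=2):
    # Keep only the reference points whose coordinate along the principal
    # axis falls inside the range covered by the observed partial cloud,
    # e.g. cutting a 20 cm reference model down to the visible 5 cm.
    lo = min(p[axis] for p in observed_pts)
    hi = max(p[axis] for p in observed_pts)
    return [p for p in reference_pts if lo <= p[axis] <= hi]

# 20 cm reference sampled every 0.5 cm along the axis; only 0-5 cm observed.
reference = [(0.0, 0.0, 0.5 * k) for k in range(41)]
observed = [(0.1, 0.0, 0.0), (0.0, 0.1, 5.0)]
trimmed = trim_reference(reference, observed)
```

After trimming, the morphological parameters of the two clouds match and the registration can be repeated on the trimmed model.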
In the above embodiment, in the case that the registration error is determined to be greater than the preset threshold, the reference cannula 301 may be adjusted so that the three-dimensional data of the reference cannula 301 matches the morphological parameters of the three-dimensional data of the cannula 105. When the morphological parameters of the three-dimensional data of the reference cannula 301 and the cannula 105 match, a more accurate registration result can be obtained.
Of course, in an embodiment of the present application, the three-dimensional data of the reference cannula 301 may also be adaptively adjusted before the registration. Specifically, the registering the three-dimensional data of the at least one cannula 105 with the three-dimensional data of the reference cannula 301 and respectively determining the pose of the at least one cannula 105 may include:
S601: adaptively adjusting the three-dimensional data of the reference cannula 301 so that the three-dimensional data of the reference cannula 301 matches the morphological parameters of the three-dimensional data of the cannula 105;
S603: registering the three-dimensional data of the cannula 105 with the adaptively adjusted three-dimensional data of the reference cannula 301, and respectively determining the pose of the at least one cannula 105.
For the adaptive adjustment and the registration manner in this embodiment of the present application, reference may be made to the above embodiments, and details are not repeated here. Adaptively adjusting the shape of the reference cannula 301 before registration accommodates the fact that the cannula is easily occluded in a surgical scene, which makes the acquired three-dimensional data incomplete, and can further improve the accuracy and efficiency of the registration.
In the embodiment of the present application, after the pose of the at least one cannula 105 is determined, the pose of at least one mechanical arm 111 of the surgical robot 109 may also be adjusted so that the pose of the at least one mechanical arm 111 matches the pose of the corresponding cannula. Specifically, after the pose of the at least one cannula 105 is determined, the pose may be sent to a driving system of the surgical robot 109, which plans the motion path of the mechanical arm 111. The driving system may then send the planned motion path to the driven system where the mechanical arm 111 is located, and the driven system adjusts the pose of the mechanical arm 111 according to the motion path so that the final pose of the mechanical arm 111 matches the pose of the cannula 105. Of course, in the subsequent process, the coupling between the mechanical arm 111 and the cannula 105 may be completed automatically or manually, which is not limited herein. Note that the matching between the pose of the mechanical arm 111 and the pose of the cannula 105 may include position matching and orientation matching. The position matching may include that the distance between the cannula connection port provided at the end of the mechanical arm 111 and the cannula port satisfies a preset condition; the preset condition includes, for example, that the distance between the center point of the cannula connection port and the center point of the cannula port is smaller than a preset threshold, which may be set to 5 cm, 3 cm, and so on.
The orientation matching may include that the included angle between the orientation of the cannula connection port and the principal axis direction of the cannula is within a preset angle range; the orientation of the cannula connection port may include the outward direction of the principal axis of the cannula connection port, and the preset angle range may be set to [-5°, 5°], for example.
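The two matching criteria above (centre distance below a threshold, axis angle within a preset range) can be checked with a few lines of Python; the function name, argument layout, and sample values are illustrative, not from the patent:

```python
import math

def pose_matched(port_center, connector_center, cannula_axis, connector_axis,
                 max_dist_cm=5.0, max_angle_deg=5.0):
    # Position matching: centre-to-centre distance below the threshold.
    dist = math.dist(port_center, connector_center)
    # Orientation matching: angle between the connector's outward axis and
    # the cannula's principal axis inside [-max_angle_deg, max_angle_deg].
    dot = sum(a * b for a, b in zip(cannula_axis, connector_axis))
    na = math.hypot(*cannula_axis)
    nb = math.hypot(*connector_axis)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return dist <= max_dist_cm and angle <= max_angle_deg

tilt = math.radians(2.0)  # connector axis tilted 2 degrees off the cannula axis
near = pose_matched((0, 0, 0), (0, 3, 0), (0, 0, 1),
                    (math.sin(tilt), 0, math.cos(tilt)))  # 3 cm, 2 deg: matched
far = pose_matched((0, 0, 0), (0, 9, 0), (0, 0, 1), (0, 0, 1))  # 9 cm: too far
```

Such a predicate would be evaluated by the driven system after moving the arm along the planned path.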
As shown in fig. 1, the surgical instrument 113 is attached to the end of the mechanical arm 111. During the surgical procedure, the surgical robot 109 needs to control the mechanical arm 111 so that the position of the surgical instrument 113 at the surgical port remains unchanged, preventing tearing of the surgical port. Based on this, in one embodiment of the present application, the position of the surgical port through which each of the at least one cannula 105 is inserted may be determined relative to the corresponding cannula, based on the pose of the at least one cannula 105. The determined position of the surgical port includes, for example, the position of the surgical port on the principal axis of the cannula 105. Specifically, the center of the surgical port may include the center of the cross-section where the surgical site and the cannula 105 intersect, which may be expressed mathematically as the intersection of the principal axis vector of the cannula 105 and the curved surface of the surgical site. Based on this, in one embodiment of the present application, the determining, respectively, the position of the surgical port through which the at least one cannula is inserted relative to the corresponding cannula may include:
S701: respectively determining the curved surface of the surgical port through which the at least one cannula is inserted;
S703: respectively determining the intersection point of the principal axis of the at least one cannula and the corresponding curved surface.
In the embodiment of the present application, the curved surface on which the surgical port through which the cannula 105 is inserted lies is first determined. In one example, the surface equation of the curved surface may be fitted using a least squares method or the like. The intersection between the surface equation and the principal axis vector of the cannula 105 may then be determined; this intersection is the center position of the surgical port.
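As an illustration of the least-squares fit and intersection computation, the sketch below fits the simplest possible surface model, a plane z = a·x + b·y + c, via the normal equations and intersects it with the cannula's principal axis. The patent allows general curved surfaces, so the planar model, the sample points, and all names here are illustrative assumptions:

```python
def det3(m):
    # Determinant of a 3x3 matrix (used to solve the normal equations).
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane(points):
    # Least-squares fit of z = a*x + b*y + c to surface sample points,
    # solved with Cramer's rule on the 3x3 normal equations.
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points); n = float(len(points))
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    d = det3(m)
    def col(k):
        return det3([[r[i] if j == k else m[i][j] for j in range(3)]
                     for i in range(3)])
    return col(0) / d, col(1) / d, col(2) / d

def port_center(p0, direction, plane):
    # Intersection of the principal axis p(t) = p0 + t*direction with the
    # fitted plane z = a*x + b*y + c: solve for the line parameter t.
    a, b, c = plane
    t = (a * p0[0] + b * p0[1] + c - p0[2]) / (
        direction[2] - a * direction[0] - b * direction[1])
    return tuple(p0[i] + t * direction[i] for i in range(3))

# Samples lying exactly on z = 0.1x + 0.2y + 1; the axis points into the body.
samples = [(0, 0, 1.0), (1, 0, 1.1), (0, 1, 1.2), (1, 1, 1.3), (2, 1, 1.4)]
plane = fit_plane(samples)
center = port_center((0.0, 0.0, 5.0), (0.0, 0.0, -1.0), plane)
```

A real surgical scene would replace the plane with a higher-order surface or spline patch fitted to the body surface around the port.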
In a practical scenario, due to the acquisition angle, occlusion, etc., the three-dimensional data acquired at the surgical site around the cannula 105 may be incomplete. Based on this, the three-dimensional data at the surgical site may be completed, for example by interpolating the three-dimensional data at the missing part with a surface interpolation method, so as to obtain a more accurate surface equation.
Of course, in other embodiments, information about the cannula 105 itself may also be used to determine the location of the surgical port. In practical applications, the depth to which the cannula is inserted into the human body can be set to a fixed value; for example, an annular mark can be provided on the cannula, and the position of the annular mark is the position of the surgical port. Since the location of the annular mark on the cannula 105 is known, the location of the surgical port can be determined from the port location of the cannula 105 once the port location of the cannula 105 is obtained. The embodiment of the application does not limit the way in which the surgical-port position is determined.
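The annular-marker variant reduces to walking a known, fixed insertion depth down the cannula's principal axis from the detected cannula port (a one-line computation; the function name, coordinates, and the 5 cm depth below are illustrative assumptions):

```python
def port_from_marker(cannula_port, axis_unit, insertion_depth_cm):
    # The surgical port sits a known fixed distance below the cannula's
    # outer port, measured along the unit principal axis pointing into
    # the body (the distance is fixed by the annular mark's position).
    return tuple(p + insertion_depth_cm * a
                 for p, a in zip(cannula_port, axis_unit))

# Cannula port detected at (10, 0, 20) cm, axis pointing straight down,
# annular mark (and thus the surgical port) 5 cm along the axis.
surgical_port = port_from_marker((10.0, 0.0, 20.0), (0.0, 0.0, -1.0), 5.0)
```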
The method for determining the cannula pose of a surgical robot system provided by the present application has been described in detail above with reference to figs. 1 to 3. The cannula pose determining apparatus 103 of the surgical robot system provided by the present application will now be described with reference to fig. 4; the apparatus includes:
a data acquisition module 401 for acquiring three-dimensional data of at least one cannula;
the data registration module 403 is configured to register the three-dimensional point cloud data of the at least one sleeve with the three-dimensional point cloud data of the reference sleeve, and determine the pose of the at least one sleeve respectively.
Optionally, in one embodiment of the present application, the acquiring three-dimensional data of at least one cannula includes:
acquiring three-dimensional data of a surgical environment, wherein the surgical environment comprises at least one sleeve;
three-dimensional data of the at least one cannula is extracted from the three-dimensional data of the surgical environment.
Optionally, in an embodiment of the present application, the registering the three-dimensional point cloud data of the at least one cannula with the three-dimensional point cloud data of the reference cannula, respectively determining the pose of the at least one cannula includes:
registering the three-dimensional data of the at least one sleeve with the three-dimensional data of the reference sleeve to obtain registration results respectively;
Determining a registration error of the registration result;
under the condition that the registration error is larger than a preset threshold value, carrying out self-adaptive adjustment on the three-dimensional data of the reference sleeve, so that the three-dimensional data of the reference sleeve is matched with morphological parameters of the three-dimensional data of the sleeve;
registering the three-dimensional data of the sleeve and the three-dimensional data of the reference sleeve after self-adaptive adjustment until the registration error is smaller than or equal to the preset threshold value, and respectively determining the pose of the at least one sleeve.
Optionally, in an embodiment of the present application, the registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula, respectively determining the pose of the at least one cannula includes:
performing self-adaptive adjustment on the three-dimensional data of the reference sleeve, so that the three-dimensional data of the reference sleeve is matched with morphological parameters of the three-dimensional data of the sleeve;
registering the three-dimensional data of the sleeve and the three-dimensional data of the reference sleeve after self-adaptive adjustment, and respectively determining the pose of the at least one sleeve.
Optionally, in an embodiment of the present application, the registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula, respectively determining the pose of the at least one cannula includes:
Performing rough registration on the three-dimensional data of the at least one sleeve and the three-dimensional data of the reference sleeve, and determining a rough registration conversion relation;
and carrying out fine registration on the three-dimensional data of the at least one sleeve and the three-dimensional data of the reference sleeve based on the coarse registration conversion relation, and determining a fine registration conversion relation.
Optionally, in an embodiment of the present application, after the determining the pose of the at least one cannula, the method further includes:
and adjusting the pose of at least one mechanical arm of the surgical robot so that the pose of the at least one mechanical arm is matched with the pose of the corresponding sleeve.
Optionally, in an embodiment of the present application, after the determining the pose of the at least one cannula, the method further includes:
and respectively determining the positions of the surgical ports penetrated by the at least one sleeve relative to the corresponding sleeves.
Optionally, in an embodiment of the present application, the determining the position of the surgical port through which the at least one cannula is inserted relative to the corresponding cannula includes:
respectively determining the curved surface of the surgical port penetrated by the at least one sleeve;
And respectively determining the intersection point of the main shaft of the at least one sleeve and the corresponding curved surface.
The embodiment of the application also provides a processing device for implementing the functions of the cannula pose determining device of the surgical robot system in the system architecture diagram shown in fig. 1. The processing device 500 may be a physical device or a cluster of physical devices, or may be a virtualized cloud device, such as at least one cloud computing device in a cloud computing cluster. For ease of understanding, the present application illustrates the structure of the processing device 500 as a stand-alone physical device.
As shown in fig. 5, the processing apparatus 500 includes: a processor and a memory for storing processor-executable instructions, wherein the processor is configured to implement the above method when executing the instructions. The processing device 500 includes a memory 801, a processor 802, a bus 803, and a communication interface 804. The memory 801, the processor 802, and the communication interface 804 communicate via the bus 803. The bus 803 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or one type of bus. The communication interface 804 is used for communication with the outside.
The processor 802 may be a central processing unit (CPU). The memory 801 may include volatile memory, such as random access memory (RAM). The memory 801 may also include non-volatile memory, such as read-only memory (ROM), flash memory, an HDD, or an SSD.
The memory 801 stores executable code, and the processor 802 executes the executable code to perform the aforementioned cannula pose determination method.
Embodiments of the present application provide a surgical robotic system comprising at least one surgical instrument 113, at least one cannula 105, at least one mechanical arm 111, and the processing device 500, wherein,
the cannula 105 is inserted through the surgical port and is used for sleeving the surgical instrument 113;
the surgical instrument 113 is connected to an end of the mechanical arm 111 for performing a surgery;
the robotic arm 111 is coupled to the cannula 105 for controlling movement of the surgical instrument 113.
Optionally, in one embodiment of the present application, the surgical robot system further includes:
the driving system is used for planning a motion path of the mechanical arm according to the pose of the sleeve and the initial pose of the mechanical arm, so that the pose of the at least one mechanical arm is matched with the pose of the corresponding sleeve;
Correspondingly, the mechanical arm is also used for moving according to the movement path.
Embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
Embodiments of the present application provide a computer program product comprising a computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
In some embodiments, the disclosed methods may be implemented as computer program instructions encoded on a computer-readable storage medium in a machine-readable format or encoded on other non-transitory media or articles of manufacture. Fig. 6 schematically illustrates a conceptual partial view of an example computer program product comprising a computer program for executing a computer process on a computing device, arranged in accordance with at least some embodiments presented herein. In one embodiment, the example computer program product 600 is provided using a signal bearing medium 601. The signal bearing medium 601 may include one or more program instructions 602 that when executed by one or more processors may provide the functionality or portions of the functionality described above with respect to fig. 2. Further, the program instructions 602 in fig. 6 also describe example instructions.
In some examples, the signal bearing medium 601 may comprise a computer readable medium 603, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, memory, read-only memory (ROM), or random access memory (RAM), among others. In some implementations, the signal bearing medium 601 may contain a computer recordable medium 604, such as, but not limited to, memory, a read/write (R/W) CD, an R/W DVD, and the like. In some implementations, the signal bearing medium 601 may include a communication medium 605, such as, but not limited to, a digital and/or analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communications link, etc.). Thus, for example, the signal bearing medium 601 may be conveyed by a communication medium 605 in wireless form (e.g., a wireless communication medium that complies with the IEEE 802.11 standard or another transmission protocol). The one or more program instructions 602 may be, for example, computer-executable instructions or logic-implemented instructions. In some examples, a computing device, such as the computing device described with respect to fig. 2, may be configured to provide various operations, functions, or actions in response to program instructions 602 communicated to the computing device through one or more of the computer readable medium 603, the computer recordable medium 604, and/or the communication medium 605. It should be understood that the arrangement described herein is for illustrative purposes only. Thus, those skilled in the art will appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead, and some elements may be omitted altogether depending on the desired results.
In addition, many of the elements described are functional entities that may be implemented as discrete or distributed components, or in any suitable combination and location in conjunction with other components.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware (e.g., circuits or ASICs (Application Specific Integrated Circuit, application specific integrated circuits)) which perform the corresponding functions or acts, or combinations of hardware and software, such as firmware, etc.
Although the application is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the drawings, the disclosure, and the appended claims. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A cannula pose determination method of a surgical robot system, comprising:
acquiring three-dimensional data of at least one sleeve;
registering the three-dimensional data of the at least one sleeve with the three-dimensional data of the reference sleeve, and respectively determining the pose of the at least one sleeve; registering the three-dimensional data of the at least one sleeve with the three-dimensional data of the reference sleeve, and respectively determining the pose of the at least one sleeve, wherein the registering comprises the following steps:
carrying out self-adaptive adjustment on the three-dimensional data of the reference sleeve before registration, so that the three-dimensional data of the reference sleeve is matched with morphological parameters of the three-dimensional data of the sleeve;
Registering the three-dimensional data of the sleeve and the three-dimensional data of the reference sleeve after self-adaptive adjustment, and respectively determining the pose of the at least one sleeve.
2. The method of claim 1, wherein the acquiring three-dimensional data of at least one cannula comprises:
acquiring three-dimensional data of a surgical environment, wherein the surgical environment comprises at least one sleeve;
three-dimensional data of the at least one cannula is extracted from the three-dimensional data of the surgical environment.
3. The method of claim 2, wherein registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula, respectively, determines the pose of the at least one cannula, comprising:
registering the three-dimensional data of the at least one sleeve with the three-dimensional data of the reference sleeve to obtain registration results respectively;
determining a registration error of the registration result;
under the condition that the registration error is larger than a preset threshold value, carrying out self-adaptive adjustment on the three-dimensional data of the reference sleeve, so that the three-dimensional data of the reference sleeve is matched with morphological parameters of the three-dimensional data of the sleeve;
registering the three-dimensional data of the sleeve and the three-dimensional data of the reference sleeve after self-adaptive adjustment until the registration error is smaller than or equal to the preset threshold value, and respectively determining the pose of the at least one sleeve.
4. The method of claim 1, wherein registering the three-dimensional data of the at least one cannula with the three-dimensional data of the reference cannula, respectively, determines a pose of the at least one cannula, comprising:
performing rough registration on the three-dimensional data of the at least one sleeve and the three-dimensional data of the reference sleeve, and determining a rough registration conversion relation;
and carrying out fine registration on the three-dimensional data of the at least one sleeve and the three-dimensional data of the reference sleeve based on the coarse registration conversion relation, and determining a fine registration conversion relation.
5. The method according to any one of claims 1-4, wherein after said determining the pose of the at least one cannula, respectively, the method further comprises:
and adjusting the pose of at least one mechanical arm of the surgical robot so that the pose of the at least one mechanical arm is matched with the pose of the corresponding sleeve.
6. The method of claim 1, wherein after the determining the pose of the at least one cannula, respectively, the method further comprises:
and determining the position of the surgical port penetrated by the at least one sleeve respectively.
7. The method of claim 6, wherein the determining the location of the surgical port through which the at least one cannula is threaded, respectively, comprises:
Respectively determining the curved surface of the surgical port penetrated by the at least one sleeve;
and respectively determining the intersection point of the main shaft of the at least one sleeve and the corresponding curved surface.
8. A cannula pose determination device of a surgical robot system, comprising:
the data acquisition module is used for acquiring three-dimensional data of at least one sleeve;
the data registration module is used for registering the three-dimensional data of the at least one sleeve and the three-dimensional data of the reference sleeve, and determining the pose of the at least one sleeve respectively; registering the three-dimensional data of the at least one sleeve with the three-dimensional data of the reference sleeve, and respectively determining the pose of the at least one sleeve, wherein the registering comprises the following steps: carrying out self-adaptive adjustment on the three-dimensional data of the reference sleeve before registration, so that the three-dimensional data of the reference sleeve is matched with morphological parameters of the three-dimensional data of the sleeve;
registering the three-dimensional data of the sleeve and the three-dimensional data of the reference sleeve after self-adaptive adjustment, and respectively determining the pose of the at least one sleeve.
9. A processing device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the cannula pose determination method of the surgical robot system according to any of claims 1 to 7.
10. A surgical robotic system comprising at least one surgical instrument, at least one cannula, at least one robotic arm, and the processing device of claim 9, wherein,
the sleeve is used for sleeving the surgical instrument;
the surgical instrument is connected to the end part of the mechanical arm and is used for performing surgery;
the mechanical arm is connected with the sleeve and used for controlling the movement of the surgical instrument.
11. The surgical robotic system of claim 10, further comprising:
the driving system is used for planning a motion path of the mechanical arm according to the pose of the sleeve and the initial pose of the mechanical arm, so that the pose of the at least one mechanical arm is matched with the pose of the corresponding sleeve;
correspondingly, the mechanical arm is also used for moving according to the movement path.
12. A computer storage medium, characterized in that the storage medium has stored therein a computer program, wherein the computer program is arranged to perform the cannula pose determination method of the surgical robot system according to any of claims 1 to 7 at run-time.
CN202311099892.0A 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system Pending CN117017493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311099892.0A CN117017493A (en) 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111083761.4A CN113974834B (en) 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system
CN202311099892.0A CN117017493A (en) 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202111083761.4A Division CN113974834B (en) 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system

Publications (1)

Publication Number Publication Date
CN117017493A true CN117017493A (en) 2023-11-10

Family

ID=79735917

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111083761.4A Active CN113974834B (en) 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system
CN202311099892.0A Pending CN117017493A (en) 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111083761.4A Active CN113974834B (en) 2021-09-14 2021-09-14 Method and device for determining sleeve pose of surgical robot system

Country Status (1)

Country Link
CN (2) CN113974834B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022117010A1 (en) * 2022-07-07 2024-01-18 Karl Storz Se & Co. Kg Medical system and method for operating a medical system for determining the location of an access facility

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7607440B2 (en) * 2001-06-07 2009-10-27 Intuitive Surgical, Inc. Methods and apparatus for surgical planning
DE102010040987A1 (en) * 2010-09-17 2012-03-22 Siemens Aktiengesellschaft Method for placing a laparoscopic robot in a predeterminable relative position to a trocar
US11412951B2 (en) * 2013-03-15 2022-08-16 Synaptive Medical Inc. Systems and methods for navigation and simulation of minimally invasive therapy
US9489738B2 (en) * 2013-04-26 2016-11-08 Navigate Surgical Technologies, Inc. System and method for tracking non-visible structure of a body with multi-element fiducial
KR102536576B1 (en) * 2014-03-17 2023-05-26 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 Surgical cannulas and related systems and methods of identifying surgical cannulas
WO2016109876A1 (en) * 2015-01-07 2016-07-14 Synaptive Medical (Barbados) Inc. Method, system and apparatus for adaptive image acquisition
CN107049380B (en) * 2017-06-03 2023-05-26 成都五义医疗科技有限公司 Standardized puncture outfit series products and use method thereof
CN109544599B (en) * 2018-11-22 2020-06-23 四川大学 Three-dimensional point cloud registration method based on camera pose estimation
WO2020105049A1 (en) * 2018-11-22 2020-05-28 Vuze Medical Ltd. Apparatus and methods for use with image-guided skeletal procedures
US11166774B2 (en) * 2019-04-17 2021-11-09 Cilag Gmbh International Robotic procedure trocar placement visualization
WO2020263520A1 (en) * 2019-06-26 2020-12-30 Auris Health, Inc. Systems and methods for robotic arm alignment and docking
CN112161619B (en) * 2020-09-16 2022-11-15 思看科技(杭州)股份有限公司 Pose detection method, three-dimensional scanning path planning method and detection system
CN112880562A (en) * 2021-01-19 2021-06-01 佛山职业技术学院 Method and system for measuring pose error of tail end of mechanical arm

Also Published As

Publication number Publication date
CN113974834A (en) 2022-01-28
CN113974834B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
KR102013866B1 (en) Method and apparatus for calculating camera location using surgical video
US10653485B2 (en) System and method of intraluminal navigation using a 3D model
US9990744B2 (en) Image registration device, image registration method, and image registration program
EP3007635B1 (en) Computer-implemented technique for determining a coordinate transformation for surgical navigation
EP2573735B1 (en) Endoscopic image processing device, method and program
US20070018975A1 (en) Methods and systems for mapping a virtual model of an object to the object
EP3362990A1 (en) Method and system for calculating resected tissue volume from 2d/2.5d intraoperative image data
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
CN112382359A (en) Patient registration method and device, electronic equipment and computer readable medium
US20220415006A1 (en) Robotic surgical safety via video processing
CN113974834B (en) Method and device for determining sleeve pose of surgical robot system
EP2031559A1 (en) Augmented visualization in two-dimensional images
US10102638B2 (en) Device and method for image registration, and a nontransitory recording medium
CN113591977B (en) Point-to-point matching method, device, electronic equipment and storage medium
US20220020160A1 (en) User interface elements for orientation of remote camera during surgery
CN111743628A (en) Automatic puncture mechanical arm path planning method based on computer vision
US10832422B2 (en) Alignment system for liver surgery
WO2017017498A1 (en) Method, system and apparatus for adjusting image data to compensate for modality-induced distortion
CN116459013B (en) Collaborative robot based on 3D visual recognition
US20230210627A1 (en) Three-dimensional instrument pose estimation
EP4137033A1 (en) System and method for view restoration
US20230252681A1 (en) Method of medical calibration
CN115120345A (en) Navigation positioning method, device, computer equipment and storage medium
CN118203418A (en) Positioning method and device of interventional instrument, readable storage medium and electronic equipment
CN118212172A (en) Object three-dimensional positioning method and device, readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination