WO2023116333A1 - Robot-assisted trocar automatic docking method and device - Google Patents

Robot-assisted trocar automatic docking method and device

Info

Publication number
WO2023116333A1
WO2023116333A1 (PCT/CN2022/134016; CN2022134016W)
Authority
WO
WIPO (PCT)
Prior art keywords
trocar
rotation matrix
position information
robot
orientation
Prior art date
Application number
PCT/CN2022/134016
Other languages
English (en)
French (fr)
Inventor
林生智
晏丕松
Original Assignee
广州市微眸医疗器械有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州市微眸医疗器械有限公司
Publication of WO2023116333A1 publication Critical patent/WO2023116333A1/zh

Links

Images

Classifications

    • G06T 7/77: Determining position or orientation of objects or cameras using statistical methods
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • A61B 34/30: Surgical robots
    • A61B 17/34: Trocars; Puncturing needles
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G06N 3/045: Combinations of networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 2034/301: Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
    • A61B 90/361: Image-producing devices, e.g. surgical cameras
    • G06T 2207/10024: Color image
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30021: Catheter; Guide wire

Definitions

  • the invention relates to the field of robot control, and in particular to a robot-assisted trocar automatic docking method and device.
  • Autonomous navigation is a vital part of robot-assisted surgery.
  • Current automatic navigation technology is mainly based on medical imaging data such as magnetic resonance and CT.
  • Through image processing, a visualized three-dimensional model is generated to provide a predetermined route for the movement of the robot.
  • However, this type of offline modeling is poorly adaptable: it must be repeated for different samples and completed before surgery. Such methods are therefore only suitable for surgical procedures with poor visibility, and they require complete imaging data, which places high demands on the image data.
  • the invention provides a robot-assisted trocar automatic docking method and device to solve the technical problem of how to automatically judge the trocar pose.
  • an embodiment of the present invention provides a robot-assisted trocar automatic docking method, including:
  • the orientation of the trocar is obtained, and according to the orientation of the trocar, the instrument at the end of the robot is controlled to dock with the trocar.
  • the outputting of the position information of the trocar satisfying the preset condition is specifically: outputting the position information of the trocar that satisfies pred(x, y) ≥ 0.8*max(pred), where
  • (x, y) is the pixel position of the trocar,
  • pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and
  • max(pred) is the overall maximum value of the output image of the U-Net network.
  • before the rotation angle of the trocar is parameterized, the method further includes further processing the position information of the trocar satisfying the preset condition, specifically: the median of the position information is computed over every seven consecutive image frames, the Euclidean distance between the median and each position in those frames is computed, and the positions whose Euclidean distance is at most one quarter of the standard deviation are averaged to obtain the final position information of the trocar.
  • the obtaining of the rotation matrix of the trocar is specifically: obtain the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and compute the six-dimensional rotation matrix R of the trocar as R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1) and R3 = R1 × R2;
  • R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
  • the orientation of the trocar is obtained according to the rotation matrix of the trocar, specifically: the ground-truth rotation matrix of the trocar is obtained from the data set, and the orientation of the trocar is computed from its rotation matrix;
  • the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
  • the present invention also provides a robot-assisted trocar automatic docking device, including a training module, a detection module, a rotation matrix module and a docking module; wherein,
  • the training module is used to obtain a trocar data set, and conduct training through a preset U-Net network to obtain a first model;
  • the detection module is used to detect the position of the trocar in the image to be tested through the first model, and output position information of the trocar satisfying preset conditions;
  • the rotation matrix module is used to parameterize the rotation angle of the trocar according to the position information of the trocar satisfying the preset condition, and obtain the rotation matrix of the trocar;
  • the docking module is used to obtain the orientation of the trocar according to the rotation matrix of the trocar, and control the instrument at the end of the robot to dock with the trocar according to the orientation of the trocar.
  • the detection module outputs the position information of the trocar satisfying the preset condition, specifically:
  • the detection module outputs the position information of the trocar that satisfies pred(x, y) ≥ 0.8*max(pred), where
  • (x, y) is the pixel position of the trocar,
  • pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and
  • max(pred) is the overall maximum value of the output image of the U-Net network.
  • the trocar automatic docking device further includes a screening module, which further processes the position information of the trocar satisfying the preset condition before the rotation matrix module parameterizes the rotation angle of the trocar, specifically:
  • the screening module computes the median of the position information in every seven consecutive image frames, computes the Euclidean distance between the median and each position in those seven frames, and averages the positions whose Euclidean distance is less than or equal to one quarter of the standard deviation, to obtain the final position information of the trocar.
  • the rotation matrix module obtains the rotation matrix of the trocar, specifically:
  • the rotation matrix module obtains the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and computes the six-dimensional rotation matrix R of the trocar as R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1) and R3 = R1 × R2;
  • R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
  • the docking module obtains the orientation of the trocar according to the rotation matrix of the trocar, specifically:
  • the docking module obtains the ground-truth rotation matrix of the trocar in the data set and computes the orientation of the trocar according to the rotation matrix of the trocar;
  • the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
  • An embodiment of the present invention provides a robot-assisted trocar automatic docking method and device, the method comprising: acquiring a trocar data set, and training a preset U-Net network to obtain a first model;
  • detecting, with the first model, the position of the trocar in the image under test, and outputting the position information of the trocar satisfying the preset condition; parameterizing the rotation angle of the trocar according to that position information, to obtain the rotation matrix of the trocar; and obtaining the orientation of the trocar according to its rotation matrix and, according to the orientation of the trocar, controlling the instrument at the end of the robot to dock with the trocar.
  • Compared with the prior art, the invention detects the position of the trocar with the U-Net network and obtains the orientation of the trocar, so the docking of the instrument at the robot end with the trocar can be controlled accurately, the trocar pose is judged automatically, and in-vivo surgical procedures with poor visibility can be accommodated.
  • Fig. 1: schematic flowchart of an embodiment of the robot-assisted trocar automatic docking method provided by the present invention.
  • Fig. 2: schematic structural diagram of an embodiment of the robot-assisted trocar automatic docking device provided by the present invention.
  • Figure 1 shows a robot-assisted trocar automatic docking method provided by an embodiment of the present invention, comprising steps S1 to S4; this embodiment uses a five-degree-of-freedom series-parallel ophthalmic surgical robot and a multifunctional welding-magnifier camera.
  • the series-parallel ophthalmic surgical robot consists of a first joint and a second joint for two-axis translation and rotation, and a slide-rail joint for Z-axis movement of the end effector.
  • a first linear motor and a second linear motor are arranged on the first joint, a third linear motor and a fourth linear motor on the second joint, and a fifth linear motor on the end slide-rail joint.
  • the multifunctional welding-magnifier camera is rigidly mounted, via a 3D-printed bracket, in a preset orientation on the syringe at the end of the robot.
  • step S1: the trocar data set is acquired, and training is performed with the preset U-Net network to obtain the first model.
  • image frames are captured by the multifunctional welding-magnifier camera installed on the robot, RGB images of the trocar are obtained, and a trocar data set is generated.
  • the data set includes no fewer than 2000 images, each annotated with the ground-truth position of the trocar in image coordinates and the 3D pose of the trocar relative to the camera in the virtual scene.
  • the U-Net network with Resnet34 as the core feature extractor is selected and trained to form the optimal model.
  • the U-Net network is first pre-trained on the trocar data set, and then fine-tuned on the trocar-annotated data set to obtain the first model.
  • the last network layer of the U-Net network uses a sigmoid activation function and a binary cross-entropy loss function.
  • Step S2: using the first model, detect the position of the trocar in the image under test, and output the position information of the trocar satisfying the preset condition pred(x, y) ≥ 0.8*max(pred);
  • (x, y) is the pixel position of the trocar,
  • pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and
  • max(pred) is the overall maximum value of the output image of the U-Net network.
  • the position information of the trocar satisfying the preset condition is then processed further: the median of the positions in every seven consecutive image frames is computed, the Euclidean distance between the median and each position is computed, and the positions within one quarter of the standard deviation are averaged to obtain the final position of the trocar.
  • Step S3: parameterize the rotation angle of the trocar according to the position information of the trocar satisfying the preset condition, and obtain the rotation matrix of the trocar.
  • the obtaining of the rotation matrix of the trocar is specifically:
  • a trocar coordinate system is established with the center of the trocar cross-section as the origin, the cross-section of the trocar as the XY plane of the coordinate system, and the normal vector of the cross-section as the z axis.
  • the six-dimensional rotation representation of the trocar can then be defined as:
  • R_6d = [R_Z | R_Y];
  • R_z and R_y are the rotation matrix of the trocar in the z direction and the rotation matrix in the y direction, respectively.
  • the six-dimensional rotation matrix R of the trocar is computed as R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1) and R3 = R1 × R2, where R1, R2 and R3 are the columns of R and φ denotes a vector normalization operation.
  • step S4: the orientation of the trocar is obtained according to the rotation matrix of the trocar, and the instrument at the end of the robot is controlled to dock with the trocar according to the orientation of the trocar.
  • the normal vector of the trocar cross-section closest to the current image plane is estimated to determine a suitable pose for the target trocar.
  • the input image, an ROI centered on the trocar, is converted into features using Resnet34 as the feature extractor, and re-expressed as a six-dimensional rotation representation through a fully connected layer.
  • the orientation of the trocar is expressed as the angle Δθ between the predicted rotation matrix of the trocar and the ground truth.
  • the loss function L is designed to avoid penalizing the network for irrelevant rotations around the trocar z-axis; the MSE is therefore proportional to the cosine distance of Δθ, giving the loss function L_rotation.
  • for a determined trocar pose, with the end of the robot placed within reach of the trocar, a two-stage step-by-step alignment is first used: the direction of the instrument mounted at the robot end is aligned with the direction of the trocar, and a translation then aligns the XY of the instrument end with the trocar, compensating for small intraoperative movements of the trocar. Docking is completed by approaching the trocar at an adaptive speed while keeping the instrument tip always on the line to the trocar.
  • this embodiment preferably uses a trocar with an infrared reflector, and the miniature camera is also equipped with an infrared detector, which assists trocar position detection and helps lock the position range of the trocar.
  • although the robot-assisted trocar automatic docking method described in this embodiment is shown only in an ophthalmic-surgery application, this is merely an example; the method can also be applied to other types of minimally invasive robotic surgery.
  • the present invention also provides a robot-assisted trocar automatic docking device, including a training module 101, a detection module 102, a rotation matrix module 103, and a docking module 104; wherein,
  • the training module 101 is used to obtain a trocar data set, and perform training through a preset U-Net network to obtain a first model;
  • the detection module 102 is configured to detect the position of the trocar in the image to be tested through the first model, and output position information of the trocar satisfying preset conditions;
  • the rotation matrix module 103 is used to parameterize the rotation angle of the trocar according to the position information of the trocar satisfying the preset condition, and obtain the rotation matrix of the trocar;
  • the docking module 104 is used to obtain the orientation of the trocar according to the rotation matrix of the trocar, and control the instrument at the end of the robot to dock with the trocar according to the orientation of the trocar.
  • the detection module 102 outputs the position information of the trocar satisfying the preset condition, specifically:
  • the detection module 102 outputs the position information of the trocar that satisfies pred(x, y) ≥ 0.8*max(pred), where
  • (x, y) is the pixel position of the trocar,
  • pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and
  • max(pred) is the overall maximum value of the output image of the U-Net network.
  • the trocar automatic docking device further includes a screening module, which further processes the position information of the trocar satisfying the preset condition before the rotation matrix module 103 parameterizes the rotation angle of the trocar, specifically:
  • the screening module computes the median of the position information in every seven consecutive image frames, computes the Euclidean distance between the median and each position in those seven frames, and averages the positions whose Euclidean distance is less than or equal to one quarter of the standard deviation, to obtain the final position information of the trocar.
  • the rotation matrix module 103 obtains the rotation matrix of the trocar, specifically:
  • the rotation matrix module 103 obtains the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and computes the six-dimensional rotation matrix R of the trocar as R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1) and R3 = R1 × R2;
  • R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
  • the docking module 104 obtains the orientation of the trocar according to the rotation matrix of the trocar, specifically:
  • the docking module 104 acquires the ground-truth rotation matrix of the trocar in the data set and computes the orientation of the trocar according to the rotation matrix of the trocar;
  • the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
  • An embodiment of the present invention provides a robot-assisted trocar automatic docking method and device, the method comprising: acquiring a trocar data set, and training a preset U-Net network to obtain a first model;
  • detecting, with the first model, the position of the trocar in the image under test, and outputting the position information of the trocar satisfying the preset condition; parameterizing the rotation angle of the trocar according to that position information, to obtain the rotation matrix of the trocar; and obtaining the orientation of the trocar according to its rotation matrix and, according to the orientation of the trocar, controlling the instrument at the end of the robot to dock with the trocar.
  • Compared with the prior art, the invention detects the position of the trocar with the U-Net network and obtains the orientation of the trocar, so the docking of the instrument at the robot end with the trocar can be controlled accurately, the trocar pose is judged automatically, and in-vivo surgical procedures with poor visibility can be accommodated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Robotics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pathology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Manipulator (AREA)
  • Surgical Instruments (AREA)

Abstract

A robot-assisted trocar automatic docking method and device. The method includes: acquiring a trocar data set, and training a preset U-Net network to obtain a first model; detecting, with the first model, the position of the trocar in the image under test, and outputting the position information of the trocar that satisfies a preset condition; parameterizing the rotation angle of the trocar according to that position information, to obtain the rotation matrix of the trocar; and obtaining the orientation of the trocar according to its rotation matrix and controlling, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar. Compared with the prior art, the method and device detect the position of the trocar with the U-Net network and obtain the orientation of the trocar, so the docking of the instrument at the robot end with the trocar can be controlled accurately, the trocar pose is judged automatically, and in-vivo surgical procedures with poor visibility can be accommodated.

Description

Robot-assisted trocar automatic docking method and device

Technical field

The present invention relates to the field of robot control, and in particular to a robot-assisted trocar automatic docking method and device.

Background

Autonomous navigation is a vital part of robot-assisted surgery. Current automatic navigation technology is mainly based on medical imaging data such as magnetic resonance and CT: image processing is used to generate a visualized three-dimensional model, which provides a predetermined route for the movement of the robot. However, this type of offline modeling is poorly adaptable: it must be repeated for different samples and completed before surgery. Such methods are therefore only suitable for surgical procedures with poor visibility, and they require complete imaging data, which places high demands on the image data.

Summary

The present invention provides a robot-assisted trocar automatic docking method and device to solve the technical problem of how to judge the trocar pose automatically.
To solve the above technical problem, an embodiment of the present invention provides a robot-assisted trocar automatic docking method, comprising:

acquiring a trocar data set, and training a preset U-Net network to obtain a first model;

detecting, with the first model, the position of the trocar in the image under test, and outputting the position information of the trocar that satisfies a preset condition;

parameterizing the rotation angle of the trocar according to the position information of the trocar that satisfies the preset condition, to obtain the rotation matrix of the trocar;

obtaining the orientation of the trocar according to the rotation matrix of the trocar, and controlling, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar.

As a preferred scheme, the outputting of the position information of the trocar that satisfies the preset condition is specifically:

outputting the position information of the trocar that satisfies the following condition:

pred(x, y) ≥ 0.8*max(pred);

where (x, y) is the pixel position of the trocar, pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and max(pred) is the overall maximum value of the output image of the U-Net network.

As a preferred scheme, before the parameterizing of the rotation angle of the trocar, the method further comprises further processing the position information of the trocar that satisfies the preset condition, specifically:

computing the median of the position information in every seven consecutive image frames, computing the Euclidean distance between the median and each piece of position information in the seven consecutive image frames, and averaging the position information whose Euclidean distance is less than or equal to one quarter of the standard deviation, to obtain the final position information of the trocar.
As a preferred scheme, the obtaining of the rotation matrix of the trocar is specifically:

obtaining the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and computing the six-dimensional rotation matrix R of the trocar:

R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1), R3 = R1 × R2;

where R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
As a preferred scheme, the obtaining of the orientation of the trocar according to the rotation matrix of the trocar is specifically:

obtaining the ground-truth rotation matrix of the trocar in the data set, and computing the orientation of the trocar according to the rotation matrix of the trocar;

where the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
Correspondingly, the present invention also provides a robot-assisted trocar automatic docking device, comprising a training module, a detection module, a rotation matrix module and a docking module, wherein:

the training module is used to acquire a trocar data set, and train a preset U-Net network to obtain a first model;

the detection module is used to detect, with the first model, the position of the trocar in the image under test, and output the position information of the trocar that satisfies a preset condition;

the rotation matrix module is used to parameterize the rotation angle of the trocar according to the position information of the trocar that satisfies the preset condition, and obtain the rotation matrix of the trocar;

the docking module is used to obtain the orientation of the trocar according to the rotation matrix of the trocar, and control, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar.

As a preferred scheme, the detection module outputs the position information of the trocar that satisfies the preset condition, specifically:

the detection module outputs the position information of the trocar that satisfies the following condition:

pred(x, y) ≥ 0.8*max(pred);

where (x, y) is the pixel position of the trocar, pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and max(pred) is the overall maximum value of the output image of the U-Net network.

As a preferred scheme, the trocar automatic docking device further comprises a screening module, which, before the rotation matrix module parameterizes the rotation angle of the trocar, further processes the position information of the trocar that satisfies the preset condition, specifically:

the screening module computes the median of the position information in every seven consecutive image frames, computes the Euclidean distance between the median and each piece of position information in the seven consecutive image frames, and averages the position information whose Euclidean distance is less than or equal to one quarter of the standard deviation, to obtain the final position information of the trocar.
As a preferred scheme, the rotation matrix module obtains the rotation matrix of the trocar, specifically:

the rotation matrix module obtains the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and computes the six-dimensional rotation matrix R of the trocar:

R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1), R3 = R1 × R2;

where R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
As a preferred scheme, the docking module obtains the orientation of the trocar according to the rotation matrix of the trocar, specifically:

the docking module obtains the ground-truth rotation matrix of the trocar in the data set, and computes the orientation of the trocar according to the rotation matrix of the trocar;

where the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:

An embodiment of the present invention provides a robot-assisted trocar automatic docking method and device, the method comprising: acquiring a trocar data set, and training a preset U-Net network to obtain a first model; detecting, with the first model, the position of the trocar in the image under test, and outputting the position information of the trocar that satisfies the preset condition; parameterizing the rotation angle of the trocar according to that position information, to obtain the rotation matrix of the trocar; and obtaining the orientation of the trocar according to its rotation matrix and controlling, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar. Compared with the prior art, the present invention detects the position of the trocar with the U-Net network and obtains the orientation of the trocar, so the docking of the instrument at the robot end with the trocar can be controlled accurately, the trocar pose is judged automatically, and in-vivo surgical procedures with poor visibility can be accommodated.
Description of the drawings

Fig. 1: schematic flowchart of an embodiment of the robot-assisted trocar automatic docking method provided by the present invention.

Fig. 2: schematic structural diagram of an embodiment of the robot-assisted trocar automatic docking device provided by the present invention.

Detailed description of the embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment 1:

Referring to Fig. 1, Fig. 1 shows a robot-assisted trocar automatic docking method provided by an embodiment of the present invention, comprising steps S1 to S4. This embodiment uses a five-degree-of-freedom series-parallel ophthalmic surgical robot together with a multifunctional welding-magnifier camera. The series-parallel ophthalmic surgical robot consists of a first joint and a second joint for two-axis translation and rotation, and a slide-rail joint for Z-axis movement of the end effector; a first linear motor and a second linear motor are arranged on the first joint, a third linear motor and a fourth linear motor on the second joint, and a fifth linear motor on the end slide-rail joint. The multifunctional welding-magnifier camera is rigidly mounted, via a 3D-printed bracket, in a preset orientation on the syringe at the end of the robot.
Step S1: acquire the trocar data set, and train the preset U-Net network to obtain the first model.

In this embodiment, image frames are captured by the multifunctional welding-magnifier camera installed on the robot, RGB images of the trocar are obtained, and a trocar data set is generated. The data set contains no fewer than 2000 images, each annotated with the ground-truth position of the trocar in image coordinates and the 3D pose of the trocar relative to the camera in the virtual scene. A U-Net network with Resnet34 as the core feature extractor is selected and trained to form the optimal model. The U-Net network is first pre-trained on the trocar data set, and then fine-tuned on the trocar-annotated data set to obtain the first model. The last network layer of the U-Net network uses a sigmoid activation function and a binary cross-entropy loss function.
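To make this training setup concrete, the following sketch assembles such a model; it is only an illustration, assuming the third-party segmentation_models_pytorch package, and the shapes and hyperparameters are placeholders rather than values from the patent:

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a Resnet34 encoder and a single-channel output (per-pixel
# trocar confidence). Sigmoid + binary cross-entropy are fused into
# BCEWithLogitsLoss, which is numerically more stable than applying them
# separately.
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=3, classes=1)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """images: (B, 3, H, W) RGB frames; masks: (B, 1, H, W) trocar labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```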
Step S2: using the first model, detect the position of the trocar in the image under test, and output the position information of the trocar that satisfies the preset condition.

In this embodiment, to obtain the final image coordinates of the processed trocar position for each frame, all pixel positions (x, y) in the U-Net output that satisfy the condition are taken as candidate trocar positions; that is, the outputting of the position information of the trocar that satisfies the preset condition is specifically:

outputting the position information of the trocar that satisfies the following condition:

pred(x, y) ≥ 0.8*max(pred);

where (x, y) is the pixel position of the trocar, pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and max(pred) is the overall maximum value of the U-Net output image.
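A minimal sketch of this candidate-selection rule follows; reducing the candidates to a single coordinate with a confidence-weighted mean is an assumption added for illustration, since the text only specifies the threshold:

```python
import numpy as np

def trocar_position(pred, tau=0.8):
    """pred: (H, W) U-Net confidence map.
    Keep every pixel with pred(x, y) >= tau * max(pred) as a candidate,
    then reduce the candidates with a confidence-weighted mean."""
    ys, xs = np.nonzero(pred >= tau * pred.max())
    w = pred[ys, xs]                      # confidences of the candidates
    return np.average(xs, weights=w), np.average(ys, weights=w)
```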
Further, the position information of the trocar that satisfies the preset condition is processed further, specifically:

the median of the position information is computed over every seven consecutive image frames, the Euclidean distance between the median and each piece of position information in the seven consecutive image frames is computed, and the position information whose Euclidean distance is less than or equal to one quarter of the standard deviation is averaged to obtain the final position information of the trocar. This makes the result more robust.
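A sketch of this seven-frame filter; interpreting "one quarter of the standard deviation" as the standard deviation of the seven distances, and falling back to the median when no detection passes the gate, are assumptions:

```python
import numpy as np

def fuse_window(window):
    """window: seven consecutive per-frame (x, y) detections."""
    pts = np.asarray(window, dtype=float)      # shape (7, 2)
    med = np.median(pts, axis=0)               # per-coordinate median
    d = np.linalg.norm(pts - med, axis=1)      # Euclidean distances to it
    keep = d <= d.std() / 4.0                  # quarter-of-std gate
    return pts[keep].mean(axis=0) if keep.any() else med
```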
Step S3: parameterize the rotation angle of the trocar according to the position information of the trocar that satisfies the preset condition, and obtain the rotation matrix of the trocar.

Specifically, in this embodiment, the obtaining of the rotation matrix of the trocar is as follows:

the rotation angle of the trocar is parameterized. A trocar coordinate system is established with the center of the trocar cross-section as the origin, the cross-section of the trocar as the XY plane of the coordinate system, and the normal vector of the cross-section as the z axis.

The six-dimensional rotation representation R_6d of the trocar can then be defined as:

R_6d = [R_Z | R_Y];

where R_z and R_y are the rotation matrix of the trocar in the z direction and the rotation matrix in the y direction, respectively.

The rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y in the y direction are obtained, and the six-dimensional rotation matrix R of the trocar, which is normalized and orthogonal, is computed:

R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1), R3 = R1 × R2;

where R1, R2 and R3 are the columns of the six-dimensional rotation matrix R, and φ denotes a vector normalization operation.
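This recovery of R from [R_z | R_y] matches the widely used six-dimensional rotation parameterization, a Gram-Schmidt orthogonalization of the two predicted axis vectors; a sketch under that reading, with r_z and r_y standing for the network's R_z and R_y outputs:

```python
import numpy as np

def rotation_from_6d(r_z, r_y):
    """Map the 6D representation [R_z | R_y] to R = [R1 R2 R3]."""
    phi = lambda v: v / np.linalg.norm(v)      # vector normalization
    r1 = phi(np.asarray(r_z, dtype=float))
    r_y = np.asarray(r_y, dtype=float)
    r2 = phi(r_y - np.dot(r1, r_y) * r1)       # strip the component along R1
    r3 = np.cross(r1, r2)                      # right-handed third axis
    return np.column_stack([r1, r2, r3])       # orthonormal 3x3 matrix
```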
Step S4: obtain the orientation of the trocar according to the rotation matrix of the trocar, and control, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar.

Specifically, from the image frame, the normal vector of the trocar cross-section closest to the current image plane is estimated to determine a suitable pose for the target trocar. The input image, an ROI extracted around the trocar, is converted into features using Resnet34 as the feature extractor, and re-expressed as a six-dimensional rotation representation through a fully connected layer. The ground-truth rotation matrix of the trocar in the data set is obtained, and the orientation of the trocar is computed according to the rotation matrix of the trocar;

where the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth.

Since the trocar is symmetric about its z axis, the loss function L is designed so as to avoid penalizing the network for irrelevant rotations around the trocar z axis. The MSE is therefore proportional to the cosine distance of Δθ, and the loss function L_rotation is proportional to 1 − cos(Δθ).
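A sketch of this symmetry-aware objective; taking the trocar axis to be the first column of R (the normalized R_z) and the loss to be exactly 1 − cos(Δθ) are assumptions consistent with the text, not formulas quoted from the patent:

```python
import numpy as np

def rotation_loss(R_pred, R_true):
    """Return (loss, delta_theta): cosine-distance loss between predicted
    and ground-truth trocar axes, insensitive to spins about the z axis."""
    cos_dt = float(np.clip(np.dot(R_pred[:, 0], R_true[:, 0]), -1.0, 1.0))
    return 1.0 - cos_dt, float(np.arccos(cos_dt))   # loss, angle in radians
```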
For a determined trocar pose, with the end of the robot placed within reach of the trocar (so that the robot can complete the operation within its maximum working range), a two-stage step-by-step alignment method is first used: the direction of the instrument mounted at the end of the robot is aligned with the direction of the trocar, and a translation is then performed to align the XY of the instrument end with the trocar, which compensates for small intraoperative movements of the trocar. Docking is completed by approaching the trocar at an adaptive speed while keeping the instrument tip always on the line to the trocar.
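To make the two-stage procedure concrete, a control-loop sketch follows; every robot method used here is a hypothetical placeholder for the actual controller interface, and the speed law is only one way to realize an "adaptive" approach:

```python
def dock(robot, trocar, v_max=1.0, tol=1e-3):
    """Two-stage docking: align orientation, align XY, then advance along
    the common axis at a speed that shrinks with the remaining distance."""
    robot.align_orientation(trocar.axis)       # stage 1: match direction
    robot.translate_xy(trocar.xy)              # stage 2: remove lateral offset
    while (d := robot.distance_to(trocar)) > tol:
        robot.advance_along_axis(min(v_max, d))  # slow down near the trocar
```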
This embodiment preferably uses a trocar with an infrared reflector, and the miniature camera also carries an infrared detector, which assists trocar position detection and helps lock the position range of the trocar. In addition, it should be noted that although the robot-assisted trocar automatic docking method described in this embodiment is shown only in an ophthalmic-surgery application, this is merely an example; it can also be applied to other types of minimally invasive robotic surgery.
Correspondingly, referring to Fig. 2, the present invention also provides a robot-assisted trocar automatic docking device, comprising a training module 101, a detection module 102, a rotation matrix module 103 and a docking module 104, wherein:

the training module 101 is used to acquire a trocar data set, and train a preset U-Net network to obtain a first model;

the detection module 102 is used to detect, with the first model, the position of the trocar in the image under test, and output the position information of the trocar that satisfies a preset condition;

the rotation matrix module 103 is used to parameterize the rotation angle of the trocar according to the position information of the trocar that satisfies the preset condition, and obtain the rotation matrix of the trocar;

the docking module 104 is used to obtain the orientation of the trocar according to the rotation matrix of the trocar, and control, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar.
In this embodiment, the detection module 102 outputs the position information of the trocar that satisfies the preset condition, specifically:

the detection module 102 outputs the position information of the trocar that satisfies the following condition:

pred(x, y) ≥ 0.8*max(pred);

where (x, y) is the pixel position of the trocar, pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and max(pred) is the overall maximum value of the U-Net output image.
In this embodiment, the trocar automatic docking device further comprises a screening module, which, before the rotation matrix module 103 parameterizes the rotation angle of the trocar, further processes the position information of the trocar that satisfies the preset condition, specifically:

the screening module computes the median of the position information in every seven consecutive image frames, computes the Euclidean distance between the median and each piece of position information in the seven consecutive image frames, and averages the position information whose Euclidean distance is less than or equal to one quarter of the standard deviation, to obtain the final position information of the trocar.
In this embodiment, the rotation matrix module 103 obtains the rotation matrix of the trocar, specifically:

the rotation matrix module 103 obtains the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and computes the six-dimensional rotation matrix R of the trocar:

R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1), R3 = R1 × R2;

where R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
In this embodiment, the docking module 104 obtains the orientation of the trocar according to the rotation matrix of the trocar, specifically:

the docking module 104 obtains the ground-truth rotation matrix of the trocar in the data set, and computes the orientation of the trocar according to the rotation matrix of the trocar;

where the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:

An embodiment of the present invention provides a robot-assisted trocar automatic docking method and device, the method comprising: acquiring a trocar data set, and training a preset U-Net network to obtain a first model; detecting, with the first model, the position of the trocar in the image under test, and outputting the position information of the trocar that satisfies the preset condition; parameterizing the rotation angle of the trocar according to that position information, to obtain the rotation matrix of the trocar; and obtaining the orientation of the trocar according to its rotation matrix and controlling, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar. Compared with the prior art, the present invention detects the position of the trocar with the U-Net network and obtains the orientation of the trocar, so the docking of the instrument at the robot end with the trocar can be controlled accurately, the trocar pose is judged automatically, and in-vivo surgical procedures with poor visibility can be accommodated.
The specific embodiments described above further explain in detail the objectives, technical solutions and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit its scope of protection. In particular, for those skilled in the art, any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (10)

  1. A robot-assisted trocar automatic docking method, characterized by comprising:
    acquiring a trocar data set, and training a preset U-Net network to obtain a first model;
    detecting, with the first model, the position of the trocar in the image under test, and outputting the position information of the trocar that satisfies a preset condition;
    parameterizing the rotation angle of the trocar according to the position information of the trocar that satisfies the preset condition, to obtain the rotation matrix of the trocar;
    obtaining the orientation of the trocar according to the rotation matrix of the trocar, and controlling, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar.
  2. The robot-assisted trocar automatic docking method according to claim 1, characterized in that the outputting of the position information of the trocar that satisfies the preset condition is specifically:
    outputting the position information of the trocar that satisfies the following condition:
    pred(x, y) ≥ 0.8*max(pred);
    where (x, y) is the pixel position of the trocar, pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and max(pred) is the overall maximum value of the output image of the U-Net network.
  3. The robot-assisted trocar automatic docking method according to claim 1, characterized in that, before the parameterizing of the rotation angle of the trocar, the method further comprises further processing the position information of the trocar that satisfies the preset condition, specifically:
    computing the median of the position information in every seven consecutive image frames, computing the Euclidean distance between the median and each piece of position information in the seven consecutive image frames, and averaging the position information whose Euclidean distance is less than or equal to one quarter of the standard deviation, to obtain the final position information of the trocar.
  4. The robot-assisted trocar automatic docking method according to any one of claims 1 to 3, characterized in that the obtaining of the rotation matrix of the trocar is specifically:
    obtaining the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and computing the six-dimensional rotation matrix R of the trocar:
    R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1), R3 = R1 × R2;
    where R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
  5. The robot-assisted trocar automatic docking method according to claim 4, characterized in that the obtaining of the orientation of the trocar according to the rotation matrix of the trocar is specifically:
    obtaining the ground-truth rotation matrix of the trocar in the data set, and computing the orientation of the trocar according to the rotation matrix of the trocar;
    where the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
  6. A robot-assisted trocar automatic docking device, characterized by comprising a training module, a detection module, a rotation matrix module and a docking module, wherein:
    the training module is used to acquire a trocar data set, and train a preset U-Net network to obtain a first model;
    the detection module is used to detect, with the first model, the position of the trocar in the image under test, and output the position information of the trocar that satisfies a preset condition;
    the rotation matrix module is used to parameterize the rotation angle of the trocar according to the position information of the trocar that satisfies the preset condition, and obtain the rotation matrix of the trocar;
    the docking module is used to obtain the orientation of the trocar according to the rotation matrix of the trocar, and control, according to the orientation of the trocar, the instrument at the end of the robot to dock with the trocar.
  7. The robot-assisted trocar automatic docking device according to claim 6, characterized in that the detection module outputs the position information of the trocar that satisfies the preset condition, specifically:
    the detection module outputs the position information of the trocar that satisfies the following condition:
    pred(x, y) ≥ 0.8*max(pred);
    where (x, y) is the pixel position of the trocar, pred(x, y) is the confidence that the pixel position (x, y) is classified as a trocar, and max(pred) is the overall maximum value of the output image of the U-Net network.
  8. The robot-assisted trocar automatic docking device according to claim 6, characterized in that the trocar automatic docking device further comprises a screening module, which, before the rotation matrix module parameterizes the rotation angle of the trocar, further processes the position information of the trocar that satisfies the preset condition, specifically:
    the screening module computes the median of the position information in every seven consecutive image frames, computes the Euclidean distance between the median and each piece of position information in the seven consecutive image frames, and averages the position information whose Euclidean distance is less than or equal to one quarter of the standard deviation, to obtain the final position information of the trocar.
  9. The robot-assisted trocar automatic docking device according to any one of claims 6 to 8, characterized in that the rotation matrix module obtains the rotation matrix of the trocar, specifically:
    the rotation matrix module obtains the rotation matrix R_z of the trocar in the z direction and the rotation matrix R_y of the trocar in the y direction, and computes the six-dimensional rotation matrix R of the trocar:
    R = [R1 R2 R3], with R1 = φ(R_z), R2 = φ(R_y − (R1·R_y)·R1), R3 = R1 × R2;
    where R1, R2 and R3 are the columns of the six-dimensional rotation matrix R.
  10. The robot-assisted trocar automatic docking device according to claim 9, characterized in that the docking module obtains the orientation of the trocar according to the rotation matrix of the trocar, specifically:
    the docking module obtains the ground-truth rotation matrix of the trocar in the data set, and computes the orientation of the trocar according to the rotation matrix of the trocar;
    where the orientation of the trocar is expressed as the angle Δθ between the rotation matrix of the trocar and the ground truth of the trocar.
PCT/CN2022/134016 2021-12-21 2022-11-24 Robot-assisted trocar automatic docking method and device WO2023116333A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111577523.9 2021-12-21
CN202111577523.9A CN114159166B (zh) 2021-12-21 2021-12-21 Robot-assisted trocar automatic docking method and device

Publications (1)

Publication Number Publication Date
WO2023116333A1 true WO2023116333A1 (zh) 2023-06-29

Family

ID=80487687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/134016 WO2023116333A1 (zh) 2021-12-21 2022-11-24 Robot-assisted trocar automatic docking method and device

Country Status (3)

Country Link
CN (1) CN114159166B (zh)
LU (1) LU504661B1 (zh)
WO (1) WO2023116333A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114159166B (zh) 2021-12-21 2024-02-27 广州市微眸医疗器械有限公司 Robot-assisted trocar automatic docking method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012034886A1 (de) * 2010-09-17 2012-03-22 Siemens Aktiengesellschaft Method for placing a laparoscopy robot in a predefinable position relative to a trocar
CN102905642A (zh) * 2010-05-25 2013-01-30 西门子公司 Method for moving the instrument arm of a laparoscopy robot into a predetermined position relative to a trocar
US20210059857A1 (en) * 2019-09-04 2021-03-04 Carl Zeiss Meditec Ag Eye surgery surgical system and computer implemented method for providing the position of at least one trocar point
US20210128260A1 (en) * 2019-10-31 2021-05-06 Verb Surgical Inc. Systems and methods for visual sensing of and docking with a trocar
US20210290311A1 (en) * 2020-03-19 2021-09-23 Verb Surgical Inc. Trocar pose estimation using machine learning for docking surgical robotic arm to trocar
CN113538522A (zh) * 2021-08-12 2021-10-22 广东工业大学 Instrument visual tracking method for minimally invasive laparoscopic surgery
CN114159166A (zh) * 2021-12-21 2022-03-11 广州市微眸医疗器械有限公司 Robot-assisted trocar automatic docking method and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9414776B2 (en) * 2013-03-06 2016-08-16 Navigated Technologies, LLC Patient permission-based mobile health-linked information collection and exchange systems and methods
US20200297444A1 (en) * 2019-03-21 2020-09-24 The Board Of Trustees Of The Leland Stanford Junior University Systems and methods for localization based on machine learning
US11170526B2 (en) * 2019-03-26 2021-11-09 Samsung Electronics Co., Ltd. Method and apparatus for estimating tool trajectories
US11547495B2 (en) * 2019-04-26 2023-01-10 Globus Medical, Inc. System and method for reducing interference in positional sensors for robotic surgery
CN110559075B (zh) * 2019-08-05 2021-09-24 常州锦瑟医疗信息科技有限公司 Intraoperative augmented reality registration method and device
WO2021030536A1 (en) * 2019-08-13 2021-02-18 Duluth Medical Technologies Inc. Robotic surgical methods and apparatuses
WO2021214754A1 (en) * 2020-04-19 2021-10-28 Xact Robotics Ltd. Optimizing checkpoint locations along an insertion trajectory of a medical instrument using data analysis
AU2021283341A1 (en) * 2020-06-03 2022-12-22 Noah Medical Corporation Systems and methods for hybrid imaging and navigation
CN112370161B (zh) * 2020-10-12 2022-07-26 珠海横乐医学科技有限公司 Surgical navigation method and medium based on ultrasound image feature plane detection


Also Published As

Publication number Publication date
LU504661B1 (en) 2023-11-07
CN114159166B (zh) 2024-02-27
CN114159166A (zh) 2022-03-11

Similar Documents

Publication Publication Date Title
CN109308693B (zh) Monocular and binocular vision system for target detection and pose measurement built from a single PTZ camera
Jiang et al. An overview of hand-eye calibration
CN111801198B (zh) Hand-eye calibration method, system and computer storage medium
US11580724B2 (en) Virtual teach and repeat mobile manipulation system
Taylor et al. Robust vision-based pose control
CN112634318B (zh) Teleoperation system and method for an underwater maintenance robot
WO2016193781A1 (en) Motion control system for a direct drive robot through visual servoing
CN106625673A (zh) Narrow-space assembly system and assembly method
WO2023116333A1 (zh) Robot-assisted trocar automatic docking method and device
CN110555426A (zh) Line-of-sight detection method, apparatus, device and storage medium
CN113103235B (zh) Method for performing vertical operations on cabinet-surface equipment based on RGB-D images
CN113525631A (zh) Underwater terminal docking system and method based on optical vision guidance
CN111216109A (zh) Visual following device for clinical treatment and detection, and method thereof
Yang et al. Visual servoing of humanoid dual-arm robot with neural learning enhanced skill transferring control
JP2920352B2 (ja) Travel control method and device for a mobile body
Han et al. Grasping control method of manipulator based on binocular vision combining target detection and trajectory planning
Eslamian et al. Towards the implementation of an autonomous camera algorithm on the da vinci platform
WO2022012337A1 (zh) Moving arm system and control method
EP4058975A1 (en) Scene perception systems and methods
Gans et al. Visual servoing to an arbitrary pose with respect to an object given a single known length
CN113974834B (zh) Method and device for determining the cannula pose of a surgical robot system
Nguyen et al. Real-time obstacle detection for an autonomous wheelchair using stereoscopic cameras
Debarba et al. Tracking a consumer HMD with a third party motion capture system
Zhang et al. Triangle codes and tracer lights based absolute positioning method for terminal visual docking of autonomous underwater vehicles
Mangipudi et al. Vision based Passive Arm Localization Approach for Underwater ROVs Using a Least Squares on SO(3) Gradient Algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909648

Country of ref document: EP

Kind code of ref document: A1