WO2019080358A1 - 3d影像手术导航机器人及其控制方法 - Google Patents

3D影像手术导航机器人及其控制方法 (3D imaging surgical navigation robot and control method therefor)

Info

Publication number
WO2019080358A1
WO2019080358A1 · PCT/CN2017/120169 · CN2017120169W
Authority
WO
WIPO (PCT)
Prior art keywords
eye
doctor
display
image
robot
Prior art date
Application number
PCT/CN2017/120169
Other languages
English (en)
French (fr)
Inventor
张贯京
葛新科
王海荣
高伟明
张红治
周亮
Original Assignee
深圳市前海安测信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市前海安测信息技术有限公司 filed Critical 深圳市前海安测信息技术有限公司
Publication of WO2019080358A1 publication Critical patent/WO2019080358A1/zh

Links

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 - Tracking techniques
    • A61B 2034/2063 - Acoustic tracking systems, e.g. using ultrasound
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 - Tracking techniques
    • A61B 2034/2065 - Tracking using image or pattern recognition

Definitions

  • the invention relates to the technical field of medical instruments, in particular to a 3D imaging surgical navigation robot and a control method thereof.
  • Surgery refers to the process of using a medical device to cut, incise, or otherwise manipulate a patient's skin, mucous membranes, or other tissue in order to treat a pathological condition.
  • Surgical procedures such as laparotomy, in which the skin is cut open and internal organs are treated, reconstructed, or removed, may involve blood loss, side effects, pain, and scarring, so the use of surgical robots is currently regarded as a popular alternative.
  • At present, surgical robots cannot automatically identify the lesion area of a tissue or organ, nor can they automatically guide the doctor to the lesion area, so the accuracy and safety of the operation cannot be guaranteed.
  • In addition, when a surgical robot is used for robotic surgery, the doctor performs the operation inside the patient's body while viewing the surgical image shown on a display.
  • However, the position of that display is generally fixed; for example, while the doctor sits or stands during robotic surgery, the display cannot move to follow the doctor's eyes, which makes it inconvenient for the doctor to view the surgical image on the display during the operation.
  • The main object of the present invention is to provide a 3D imaging surgical navigation robot and a control method thereof, which aim to solve the technical problem that existing surgical robots cannot automatically recognize the lesion area of a tissue or organ and cannot automatically display the acquired surgical image in front of the doctor's eyes according to the doctor's eye position.
  • To achieve the above object, the present invention provides a 3D imaging surgical navigation robot that includes a first robot arm, a second robot arm, and a robot body. An ultrasonic probe is mounted on the first robot arm and a 3D display is mounted on the second robot arm; an infrared locator is provided on the outer surface of the ultrasonic probe, a sensor unit is provided on the outer surface of the robot body, and the robot body houses a microcontroller adapted to execute program instructions and a memory storing those instructions. When loaded by the microcontroller, the program instructions perform the following steps: controlling the ultrasonic probe of the first robot arm to capture ultrasound images of the target tissue or organ in real time during the patient's operation; comparing the ultrasound image of the target tissue or organ with a reference image of the normal tissue or organ to locate the lesion area; generating a surgical navigation instruction for illuminating the lesion area and controlling the infrared locator to produce an infrared light guiding point; driving the first robot arm according to the surgical navigation instruction so that the infrared light guiding point illuminates the lesion area, and controlling the ultrasonic probe to capture an image of the lesion area; turning on the sensor unit to acquire an image of the doctor's eyes and calculating the doctor's eye position coordinates from that image using a human eye pupil analysis method; generating, from the eye position coordinates, a drive signal that moves the 3D display on the second robot arm in front of the doctor's eyes; driving the second robot arm according to the drive signal so that the 3D display moves in front of the doctor's eyes; and displaying the ultrasound image of the target tissue or organ and the image of the lesion area on the 3D display.
  • Preferably, the step of comparing the ultrasound image of the target tissue or organ with the reference image of the normal tissue or organ to locate the lesion area includes the steps of: reading the reference image of the normal tissue or organ from the memory; comparing the ultrasound image of the target tissue or organ with the reference image to determine the difference in texture distribution between the two; and locating the lesion area of the target tissue or organ according to that difference, where the texture distribution difference includes the differences in tissue structure, size, and contour that a lesion produces in human tissues and organs.
  • Preferably, the step of generating, from the doctor's eye position coordinates, a drive signal that moves the 3D display on the second robot arm in front of the doctor's eyes includes the steps of: calculating, from the eye position coordinates, the final position to which the 3D display is to be moved, where the final position is obtained by identifying the pupil position in the eye image with the human eye pupil analysis method, connecting the pupil position and the lesion position with a line segment, taking the pupil position and the lesion position as two vertices of a triangle and the segment between them as one side so as to form an equilateral triangle, and taking the third vertex of that equilateral triangle as the final position of the 3D display; calculating, from the final position of the 3D display, the angle through which the second robot arm is to be driven; and generating and transmitting a drive signal that drives the second robot arm to move by the calculated angle.
  • Preferably, the step of generating a surgical navigation instruction for illuminating the lesion area includes the steps of: establishing a spatial coordinate system, taking the operating table on which the patient lies as the horizontal plane and using the position and orientation of the ultrasonic probe; calculating the position coordinates of the lesion area in that coordinate system; and generating the surgical navigation instruction from those position coordinates.
  • Preferably, the ultrasonic probe, the infrared locator, the sensor unit, the 3D display, and the memory are all electrically connected to the microcontroller.
  • The present invention also provides a control method for a 3D imaging surgical navigation robot, the robot comprising a first robot arm, a second robot arm, and a robot body, wherein an ultrasonic probe is mounted on the first robot arm, a 3D display is mounted on the second robot arm, an infrared locator is provided on the outer surface of the ultrasonic probe, and a sensor unit is provided on the outer surface of the robot body. The control method includes the steps of: controlling the ultrasonic probe of the first robot arm to capture ultrasound images of the target tissue or organ in real time during the patient's operation; comparing the ultrasound image of the target tissue or organ with a reference image of the normal tissue or organ to locate the lesion area; generating a surgical navigation instruction for illuminating the lesion area and controlling the infrared locator to produce an infrared light guiding point; driving the first robot arm according to the surgical navigation instruction so that the infrared light guiding point illuminates the lesion area, and controlling the ultrasonic probe to capture an image of the lesion area; turning on the sensor unit to acquire an image of the doctor's eyes and calculating the doctor's eye position coordinates from that image; generating, from the eye position coordinates, a drive signal that moves the 3D display on the second robot arm in front of the doctor's eyes; driving the second robot arm according to the drive signal so that the 3D display moves in front of the doctor's eyes; and displaying the ultrasound image of the target tissue or organ and the image of the lesion area on the 3D display.
  • Preferably, the step of comparing the ultrasound image of the target tissue or organ with the reference image of the normal tissue or organ to locate the lesion area includes the steps of: reading the reference image of the normal tissue or organ from the memory; comparing the ultrasound image of the target tissue or organ with the reference image to determine the difference in texture distribution between the two; and locating the lesion area according to that difference, where the texture distribution difference includes the differences in tissue structure, size, and contour that a lesion produces in human tissues and organs.
  • Preferably, the step of generating, from the doctor's eye position coordinates, a drive signal that moves the 3D display on the second robot arm in front of the doctor's eyes includes the steps of: calculating, from the eye position coordinates, the final position to which the 3D display is to be moved, where the final position is obtained by identifying the pupil position in the eye image with the human eye pupil analysis method, connecting the pupil position and the lesion position with a line segment, taking those two points as two vertices of a triangle and the segment between them as one side so as to form an equilateral triangle, and taking the third vertex of that equilateral triangle as the final position of the 3D display; calculating, from the final position of the 3D display, the angle through which the second robot arm is to be driven; and generating and transmitting a drive signal that drives the second robot arm to move by the calculated angle.
  • Preferably, the step of generating a surgical navigation instruction for illuminating the lesion area includes the steps of: establishing a spatial coordinate system, taking the operating table on which the patient lies as the horizontal plane and using the position and orientation of the ultrasonic probe; calculating the position coordinates of the lesion area in that coordinate system; and generating the surgical navigation instruction from those position coordinates.
  • Preferably, the surgical navigation instruction includes the distance and direction information between the first robot arm and the lesion area of the target tissue or organ, and the infrared light guiding point is a visible infrared dot that guides the doctor to the location of the tissue or organ lesion during the patient's operation.
  • Compared with the prior art, the 3D imaging surgical navigation robot and its control method adopt the above technical solution and achieve the following technical effects: the ultrasonic probe on the first robot arm acquires ultrasound images of the target tissue or organ during the operation, and the infrared locator tracks the lesion area of the target tissue or organ in real time, so that the position of the lesion area is clearly visible and the robot automatically navigates to it to guide the doctor, which facilitates the operation and improves its efficiency. In addition, the sensor unit freely tracks the doctor's eye position, the second robot arm is driven to automatically move the 3D display to the position the doctor requires, and the ultrasound image is shown on the 3D display as a surgical reference during the operation, which improves the accuracy and safety of the operation.
  • FIG. 1 is a schematic diagram of an application environment of a preferred embodiment of a 3D imaging surgical navigation robot of the present invention
  • FIG. 2 is a schematic diagram showing the internal circuit connection of the 3D imaging surgical navigation robot of the present invention.
  • FIG. 3 is a flow chart of a preferred embodiment of a control method for a 3D imaging surgical navigation robot of the present invention.
  • FIG. 1 is a schematic diagram of an application environment of a preferred embodiment of a 3D imaging surgical navigation robot according to the present invention.
  • In this embodiment, the 3D imaging surgical navigation robot 01 can be placed in an operating room, in which an operating table 02 is also placed for the patient to lie on during surgery.
  • the medical robot 01 includes, but is not limited to, a first robot arm 1, a second robot arm 2, and a robot body 3.
  • the robot body 3 is provided with a microcontroller 30, a memory 31 and a charging device 33.
  • The outer surface of the robot body 3 is provided with a sensor unit 32, and the memory 31, the sensor unit 32, and the charging device 33 are each electrically connected to the microcontroller 30.
  • In this embodiment, the first robot arm 1 is provided with an ultrasonic probe 11 that is electrically connected to the microcontroller 30 and captures ultrasound images of the patient's tissues and organs in real time during the operation.
  • The ultrasound image is a three-dimensional (3D) ultrasound image of the tissue or organ.
  • the outer surface of the ultrasonic probe 11 is provided with an infrared locator 12 for generating an infrared light guiding point for guiding the doctor to find the location of the lesion of the tissue during the patient's operation.
  • the second robot arm 2 is provided with a 3D display 21, which is electrically connected to the microcontroller 30.
  • the embodiment of the present invention sets the 3D display 21 on the second robot arm 2 such that the 3D display 21 can freely move with the second robot arm 2.
  • As known in the prior art, the 3D display 21 is a display system that gives the doctor a stereoscopic, virtual visual impression by presenting stereoscopic 3D images, thereby providing a 3D display effect.
  • the 3D display 21 can be implemented as a small, lightweight 3D display module.
  • the 3D display 21 of the present invention is coupled to a second robot arm 2 having a certain degree of freedom of movement so that the doctor can move the 3D display 21 as desired.
  • For example, while the doctor sits or stands during robotic surgery, the 3D display 21 can move automatically according to the doctor's posture and position, and the second robot arm 2 can move the 3D display 21 in front of the doctor's eyes, providing the most convenient position for the doctor to view the image on the 3D display 21.
  • the microcontroller 30 of the present embodiment can drive the first robot arm 1 and the second robot arm 2 to move freely at a certain angle and direction.
  • The microcontroller 30 may be a microprocessor, a microcontroller unit (MCU), or the like embedded in the robot body 3. It can receive, from the sensor unit 32, position information obtained by sensing the doctor's eyes and determine from that information the final position to which the 3D display 21 is to be moved. To move the 3D display 21 to that final position, the microcontroller 30 calculates the angle through which the second robot arm 2 must be driven according to the eye position information and then generates and transmits a drive signal that moves the second robot arm 2 by the calculated angle.
  • In this embodiment, because the 3D display 21 is coupled to the second robot arm 2, it can be moved to whatever position the doctor requires, so the doctor can adopt an arbitrary posture during surgery while still viewing the 3D ultrasound images needed during the operation, which helps the doctor complete the procedure accurately.
  • The sensor unit 32 senses the position of the doctor's eyes and transmits that position information to the microcontroller 30.
  • the microcontroller 30 generates a drive signal for driving the movement of the second robot arm 2 based on the position information of the doctor's eyes, and the second robot arm 2 automatically moves the 3D display 21 to the front of the doctor's eyes.
  • In this case, the sensor unit 32 that detects the doctor's eyes can be connected to the microcontroller 30; since the sensor unit 32 senses the position of the doctor's eyes and transmits a corresponding signal, the microcontroller 30 can receive that signal and drive the second robot arm 2.
  • The sensor unit 32 may be any sensor capable of detecting the position of the doctor's eyes. For example, if an image sensor is used, the microcontroller 30 analyzes the image obtained by the image sensor to calculate the position coordinates of the doctor's eyes and then drives the second robot arm 2 to move the 3D display 21 toward that position.
  • In addition, the sensor unit 32 of this embodiment does not have to sense the doctor's eyes directly; it can also determine the eye position by detecting the doctor's face or other body parts, or, if the doctor wears a special device, by detecting that device and transmitting a corresponding signal to the microcontroller 30.
  • a physician may perform a robotic procedure while wearing a marker at a particular location on his or her body, the marker being dedicated to detection by the sensor unit 32.
  • The sensor unit 32 can then detect the marker, and the microcontroller 30 calculates the position of the doctor's eyes from the marker position and then generates and transmits a drive signal.
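  • As a concrete illustration of the marker-based variant described above, the following sketch estimates the eye position by adding a fixed, pre-measured offset to the detected marker position; the offset value and the function names are assumptions made purely for illustration and are not specified by the patent.
```python
import numpy as np

# Hypothetical, pre-measured offset from the worn marker to the doctor's eyes,
# expressed in the robot's coordinate frame (metres). Illustration only.
MARKER_TO_EYE_OFFSET = np.array([0.0, 0.05, 0.12])

def eye_position_from_marker(marker_xyz):
    """Estimate the doctor's eye position from a detected marker position.

    marker_xyz: (3,) array with the marker position reported by the sensor unit.
    Returns a (3,) array with the estimated eye position.
    """
    return np.asarray(marker_xyz, dtype=float) + MARKER_TO_EYE_OFFSET

# Example: the sensor unit reports the marker at x=0.4 m, y=1.5 m, z=1.1 m.
print(eye_position_from_marker([0.4, 1.5, 1.1]))  # -> [0.4, 1.55, 1.22]
```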
  • the sensor unit 32 of the present embodiment can detect the position or movement of the doctor, and the microcontroller 30 automatically moves the 3D display 21 within a specific range while tracking the doctor's eyes according to the position of the doctor.
  • FIG. 2 is a schematic diagram showing the internal circuit connection of the 3D imaging surgical navigation robot of the present invention.
  • the ultrasonic probe 11, the infrared locator 12, the 3D display 21, the memory 31, the sensor unit 32, and the charging device 33 are all electrically connected to the microcontroller 30.
  • The electrical connection in this embodiment means that each electrical component is connected to the microcontroller 30 through one or more of a conductive line, a signal line, and a control line, so that the microcontroller 30 can control each of these components to perform its corresponding function.
  • the microcontroller 30 can be a central processing unit (CPU), a microprocessor, a micro control unit chip (MCU), a data processing chip, or a control unit having a data processing function.
  • The memory 31 can be a read-only memory (ROM), an electrically erasable programmable memory (EEPROM), or a flash memory (FLASH) unit.
  • The memory 31 stores reference images of normal human tissues and organs as well as pre-programmed computer program instructions. The microcontroller 30 can read and execute these instructions from the memory 31 so that the 3D imaging surgical navigation robot 01 can provide surgical guidance during a patient's operation.
  • In this embodiment, the charging device (power supply device) 33 includes a rechargeable lithium battery 331 and a charging base 332; the lithium battery 331 is electrically connected to the microcontroller 30 and provides operating power to the robot 01. The charging base 332 is electrically connected to the lithium battery 331 and accepts an external power source to charge the lithium battery 331.
  • FIG. 3 is a flow chart of a preferred embodiment of a control method for a 3D imaging surgical navigation robot of the present invention.
  • In this embodiment, the steps of the control method are implemented by a computer software program in the form of computer program instructions stored in a computer-readable storage medium (for example, the memory 31); the storage medium may include a read-only memory, a random access memory, a magnetic disk, or an optical disk. The computer program instructions can be loaded by the processor to perform the following steps S31 to S41.
  • Step S31: the ultrasonic probe of the first robot arm is controlled to capture ultrasound images of the target tissue or organ in real time during the patient's operation. Specifically, the microcontroller 30 controls the first robot arm 1 to move near the patient's operating table 02 and activates the ultrasonic probe 11, which then captures ultrasound images of the target tissue or organ in real time while the patient on the operating table 02 is undergoing surgery.
  • the ultrasonic probe 11 may employ a three-dimensional ultrasonic probe that acquires a three-dimensional ultrasonic image of a target tissue organ in real time by transmitting a pyramid-shaped volume ultrasonic beam, and transmits the three-dimensional ultrasonic image to the microcontroller 30.
  • Step S32: the ultrasound image of the target tissue or organ is compared with the reference image of the normal tissue or organ to locate the lesion area. Specifically, the microcontroller 30 reads the reference image of the normal tissue or organ from the memory 31 and compares it with the ultrasound image of the target tissue or organ to locate the lesion area. In this embodiment, the microcontroller 30 determines the difference in texture distribution between the ultrasound image of the target tissue or organ and the reference image of the normal tissue or organ and locates the lesion area according to that difference; the texture distribution difference includes the differences in tissue structure, size, and contour that a lesion produces in human tissues and organs.
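  • The patent does not name a specific texture measure for this comparison. The sketch below shows one minimal, non-authoritative way such a step could work, using local standard deviation as a stand-in texture statistic and flagging image blocks whose texture differs strongly from an already registered reference image; the block size and threshold are illustrative assumptions.
```python
import numpy as np

def block_texture(img, block=16):
    """Per-block standard deviation as a simple texture statistic."""
    h, w = img.shape
    h, w = h - h % block, w - w % block               # crop to whole blocks
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))

def locate_lesion(target, reference, block=16, threshold=12.0):
    """Return the bounding box (row0, col0, row1, col1) of the region whose
    texture differs most from the (already registered) reference image,
    or None if no block exceeds the threshold."""
    diff = np.abs(block_texture(target, block) - block_texture(reference, block))
    mask = diff > threshold
    if not mask.any():
        return None
    rows, cols = np.where(mask)
    return (rows.min() * block, cols.min() * block,
            (rows.max() + 1) * block, (cols.max() + 1) * block)

# Synthetic demo: the "lesion" is a patch with much noisier texture.
rng = np.random.default_rng(0)
reference = rng.normal(100, 5, (128, 128))
target = reference.copy()
target[40:80, 48:96] += rng.normal(0, 40, (40, 48))   # lesion-like texture change
print(locate_lesion(target, reference))                # ~ (32, 48, 80, 96)
```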
  • Step S33: a spatial coordinate system is established, taking the operating table on which the patient lies as the horizontal plane and using the position and orientation of the ultrasonic probe. Specifically, the microcontroller 30 establishes the spatial coordinate system with the patient's operating table 02 as the horizontal plane and with the position and orientation of the ultrasonic probe 11. In this embodiment, the position coordinates of the lesion area include the position and direction of the lesion area relative to the ultrasonic probe 11. Referring to FIG. 1, the pathological positioning module 102 establishes the spatial coordinate system XYZ with the operating table 02 on which the patient lies as the horizontal plane and according to the position and orientation of the ultrasonic probe 11.
  • Step S34: the position coordinates of the lesion area are calculated in the spatial coordinate system. Specifically, from the position and orientation of the ultrasonic probe 11 in the coordinate system XYZ, the microcontroller 30 calculates the coordinates of any point of the ultrasound image in XYZ, from which the position and orientation of the target tissue or organ relative to the ultrasonic probe 11 can be determined.
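  • The text defines the coordinate system through the table plane and the probe's position and orientation but does not spell out the transform. The sketch below shows the standard rigid transform such a step implies, mapping a point from the probe frame into the table-based XYZ frame; the example pose values are assumptions chosen only for illustration.
```python
import numpy as np

def probe_pose(position, yaw_deg):
    """Rigid transform (R, t) of the probe in the table (XYZ) frame.
    Only a rotation about the vertical axis is used here for simplicity."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return R, np.asarray(position, dtype=float)

def to_table_frame(point_in_probe, R, t):
    """Map a point given in the probe frame (e.g. a voxel of the 3D ultrasound
    volume converted to metres) into the table-based spatial coordinate system."""
    return R @ np.asarray(point_in_probe, dtype=float) + t

# Example: probe 0.30 m above the table origin, rotated 30 degrees about Z;
# a lesion point 5 cm in front of the probe face.
R, t = probe_pose(position=[0.10, 0.20, 0.30], yaw_deg=30.0)
print(to_table_frame([0.0, 0.0, -0.05], R, t))   # lesion coordinates in XYZ
```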
  • Step S35 generating a surgical navigation instruction according to the position coordinates of the lesion area; specifically, the microcontroller 30 generates a surgical navigation instruction according to the position coordinates of the lesion area.
  • the surgical navigation command includes distance and direction information between the first robot arm 1 and the lesion area of the target tissue organ.
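  • A minimal sketch of how a navigation instruction carrying distance and direction information could be assembled from the lesion coordinates and the current position of the first robot arm; the data structure and field names are illustrative assumptions, not the patent's actual format.
```python
import numpy as np
from dataclasses import dataclass

@dataclass
class NavigationCommand:
    distance_m: float          # distance from the first arm to the lesion area
    direction: np.ndarray      # unit vector pointing from the arm to the lesion

def make_navigation_command(arm_tip_xyz, lesion_xyz):
    """Build a navigation command from the arm tip and lesion positions (metres)."""
    v = np.asarray(lesion_xyz, dtype=float) - np.asarray(arm_tip_xyz, dtype=float)
    d = float(np.linalg.norm(v))
    return NavigationCommand(distance_m=d, direction=v / d if d > 0 else v)

cmd = make_navigation_command(arm_tip_xyz=[0.0, 0.0, 0.5], lesion_xyz=[0.1, 0.2, 0.25])
print(cmd.distance_m, cmd.direction)
```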
  • Step S36 controlling the infrared locator in the ultrasonic probe to generate an infrared light guiding point; specifically, the microcontroller 30 controls the infrared locator 12 in the ultrasonic probe 11 to generate an infrared light guiding point.
  • the infrared light guiding point is a visible infrared dot for guiding a doctor to find a location of a tissue organ lesion during a patient's surgery.
  • Step S37: the first robot arm is driven according to the surgical navigation instruction so that the infrared light guiding point illuminates the lesion area, and the ultrasonic probe is controlled to capture an image of the lesion area. Specifically, the microcontroller 30 controls the direction of movement of the first robot arm 1 according to the surgical navigation instruction so that the infrared light guiding point falls on the lesion area of the target tissue or organ, enabling the doctor to find the lesion quickly and accurately, which facilitates the operation and improves its efficiency.
  • Step S38: the sensor unit 32 is turned on to acquire an image of the doctor's eyes, and the doctor's eye position coordinates are calculated from that image. In this embodiment, the doctor first looks directly at the sensor unit 32 so that it captures an image of the doctor's eyes; the microcontroller 30 then analyzes the eye image obtained by the sensor unit 32 using a prior-art human eye pupil analysis method, identifies the pupil position in the image, and calculates the position coordinates of the doctor's eyes.
  • the human eye pupil analysis method is written into the memory 31 in the form of program instructions, which are read and executed by the microcontroller 30 to calculate the position coordinates of the doctor's eyes.
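  • The patent relies on a prior-art "human eye pupil analysis method" without specifying one. The sketch below uses a deliberately simple stand-in, taking the pupil as the centroid of the darkest pixels of a grayscale eye image, to show the kind of output such a step produces; a real system would use a dedicated eye-tracking algorithm.
```python
import numpy as np

def pupil_center(eye_image, dark_level=0.25):
    """Estimate the pupil centre (row, col) as the centroid of the darkest
    pixels of a grayscale eye image; a crude stand-in for a real pupil
    analysis method."""
    img = np.asarray(eye_image, dtype=float)
    threshold = img.min() + dark_level * (img.max() - img.min())
    rows, cols = np.where(img <= threshold)
    return rows.mean(), cols.mean()

# Synthetic demo: a bright image with a dark disc (the "pupil") at (60, 90).
img = np.full((120, 160), 200.0)
yy, xx = np.mgrid[:120, :160]
img[(yy - 60) ** 2 + (xx - 90) ** 2 < 10 ** 2] = 20.0
print(pupil_center(img))   # approximately (60.0, 90.0)
```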
  • Step S39: a drive signal that moves the 3D display on the second robot arm in front of the doctor's eyes is generated according to the doctor's eye position coordinates. In this embodiment, the microcontroller 30 generates this drive signal for the second robot arm 2 from the eye position coordinates. Specifically, the microcontroller 30 calculates the final position to which the 3D display 21 is to be moved from the eye position coordinates, calculates from that final position the angle through which the second robot arm 2 must be driven, and then generates and transmits a drive signal that moves the second robot arm 2 by the calculated angle.
  • Further, the final position of the 3D display 21 is calculated as follows: the human eye pupil analysis method identifies the pupil position in the eye image; the pupil position and the lesion position are connected by a line segment; the pupil position and the lesion position are taken as two vertices of a triangle and the segment between them as one side, forming an equilateral triangle; and the third vertex of that equilateral triangle is taken as the final position of the 3D display.
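  • Given the pupil position and the lesion position as two vertices, the third vertex of the equilateral triangle built on that segment can be computed directly. The sketch below does this in a 2D plane and, of the two geometrically possible vertices, returns the higher one; that choice is an illustrative convention, since the patent does not state which of the two vertices is used.
```python
import numpy as np

def display_final_position(pupil_xy, lesion_xy):
    """Third vertex of the equilateral triangle whose other two vertices are
    the eye pupil position and the lesion position (2D, same units)."""
    p = np.asarray(pupil_xy, dtype=float)
    q = np.asarray(lesion_xy, dtype=float)
    mid = (p + q) / 2.0
    d = q - p
    # Unit normal to the pupil-lesion segment.
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    h = np.sqrt(3.0) / 2.0 * np.linalg.norm(d)        # height of the triangle
    c1, c2 = mid + h * n, mid - h * n
    return c1 if c1[1] >= c2[1] else c2               # illustrative choice

# Example: pupil at (0, 1.5) m, lesion at (1.0, 0.9) m.
print(display_final_position([0.0, 1.5], [1.0, 0.9]))
```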
  • Step S40: the second robot arm 2 is driven according to the drive signal so that the 3D display 21 moves in front of the doctor's eyes. Specifically, the microcontroller 30 drives the second robot arm 2 according to the drive signal, moving the 3D display 21 mounted on it in front of the doctor's eyes; this provides the most convenient position for the doctor to view the ultrasound image on the 3D display 21 and allows the doctor to perform the surgical operation with reference to that image.
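  • The patent states only that the angle through which the second arm must be driven is computed from the final display position; it does not describe the arm's kinematics. Purely for illustration, the sketch below assumes a planar two-link arm and applies standard two-link inverse kinematics to turn a target display position into joint angles.
```python
import numpy as np

def two_link_ik(target_xy, l1=0.6, l2=0.5):
    """Joint angles (radians) of a planar two-link arm whose end effector
    (the 3D display mount) should reach target_xy. Elbow-down solution.
    Raises ValueError if the target is out of reach."""
    x, y = target_xy
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = np.arccos(c2)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Example: drive the display mount to a final position in the arm's plane.
print(np.degrees(two_link_ik([0.8, 0.6])))
```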
  • Step S41: the ultrasound image of the target tissue or organ and the image of the lesion area are displayed on the 3D display 21. Because the 3D display 21 can be moved to whatever position the doctor requires, the doctor can adopt an arbitrary posture during surgery while still viewing the ultrasound images needed as a surgical reference, which improves the accuracy and safety of the operation.
  • The 3D imaging surgical navigation robot of the present invention acquires ultrasound images of the target tissue or organ during surgery through the ultrasonic probe 11 on the first robot arm 1 and tracks the lesion area in real time with the infrared locator 12, so that the position of the lesion area is clearly visible and the robot automatically navigates to it to guide the doctor, which facilitates the operation and improves its efficiency.
  • In addition, the 3D imaging surgical navigation robot of the present invention continuously tracks the position of the doctor's eyes through the sensor unit 32 and, via the second robot arm 2, automatically moves the 3D display 21 to the position the doctor requires; the ultrasound image of the target tissue or organ and the lesion area are displayed on the 3D display 21 as a surgical reference during the operation, which improves the accuracy and safety of the operation.
  • Compared with the prior art, the 3D imaging surgical navigation robot and its control method adopt the above technical solution and achieve the following technical effects: the ultrasonic probe on the first robot arm acquires ultrasound images of the target tissue or organ during the operation, and the infrared locator tracks the lesion area of the target tissue or organ in real time, so that the position of the lesion area is clearly visible and the robot automatically navigates to it to guide the doctor, which facilitates the operation and improves its efficiency. In addition, the sensor unit freely tracks the doctor's eye position, the second robot arm is driven to automatically move the 3D display to the position the doctor requires, and the ultrasound image is shown on the 3D display as a surgical reference during the operation, which improves the accuracy and safety of the operation.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a 3D imaging surgical navigation robot and a control method therefor. The 3D imaging surgical navigation robot includes a first robot arm, a second robot arm, and a robot body; an ultrasonic probe is arranged on the first robot arm, a 3D display is arranged on the second robot arm, an infrared locator is arranged on the outer surface of the ultrasonic probe, a sensor unit is arranged on the outer surface of the robot body, and a microcontroller and a memory are arranged inside the robot body. The robot of the invention acquires ultrasound images of the target tissue or organ during surgery through the ultrasonic probe and tracks and locates the lesion area of the target tissue or organ through the infrared locator, which facilitates the doctor's operation and improves surgical efficiency; it freely tracks the doctor's eye position through the sensor unit, automatically moves the 3D display to the position the doctor requires, and displays the ultrasound image on the 3D display as a surgical reference for the doctor, improving the accuracy and safety of the operation.

Description

3D影像手术导航机器人及其控制方法 技术领域
本发明涉及医疗器械技术领域,尤其涉及一种3D影像手术导航机器人及其控制方法。
背景技术
手术是指使用医疗设备进行切割或切入或其它方式操作患者的皮肤、黏膜或其它组织,以处理病理状况的过程。诸如切开皮肤并且处理、重构或切除内部器官等的剖腹手术等的手术过程可能会存在失血、副作用、疼痛以及疤痕的问题,因此,手术机器人的使用目前被认为是一种受欢迎的替代方式。
目前,手术机器人不能自动识别出组织器官的病变区域,也不能自动指引医生找到病变区域,从而无法保证手术的准确性和安全性。此外,当使用手术机器人进行机器人手术时,医生可以观看显示器上显示的手术图像的同时为患者身体内部进行手术操作。然而,由于显示器的位置一般是固定的,例如在医生坐着或站立进行机器人手术的过程中,因此显示器不能根据医生的眼睛移动位置而移动,从而为医生在手术过程中观看显示器上的手术图像带来不便。
技术问题
本发明的主要目的在于提供一种3D影像手术导航机器人及其控制方法,旨在解决现有手术机器人不能自动识别组织器官的病变区域,且获取的手术图像不能根据医生的眼睛移动位置自动显示在医生眼睛前面的技术问题。
技术解决方案
为实现上述目的,本发明提供了一种3D影像手术导航机器人,包括第一机械手臂、第二机械手臂以及机器人本体,所述第一机械手臂上设置有超声波探头,所述第二机械手臂上设置有3D显示器,所述超声波探头的外表面设置有红外定位器,所述机器人本体的外表面设置有传感器单元,所述机器人本体内设置有适于实现各种程序指令的微控制器以及适于存储多条程序指令的存储器,所述程序指令由微控制器加载并执行如下步骤:控制所述第一机械手臂的超声波探头实时摄取病人手术过程中目标组织器官的超声波图像;将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域;产生一个照射在所述病变区域的手术导航指令,并控制所述红外定位器产生红外导光点;根据所述手术导航指令驱动第一机械手臂移动方向使所述红外导光点照射在所述病变区域,并控制超声波探头摄取病变区域图像;开启所述传感器单元获取医生的眼睛图像,并基于眼睛图像利用人体眼睛瞳孔分析法计算出医生的眼睛位置坐标;根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号;根据驱动信号驱动所述第二机械手臂移动使3D显示器移动至医生的眼睛前方;将目标组织器官的超声波图像和病变区域图像显示显示在3D显示器上。
优选的,所述将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域的步骤包括如下步骤:从所述存储器读取正常组织器官的参考图像;比较目标组织器官的超声波图像与正常组织器官的参考图像确定两者的纹理分布差异;根据两者的纹理分布差异定位出目标组织器官的病变区域,所述纹理分布差异包括人体组织器官发生病变的组织结构差异、尺寸大小差异以及外形轮廓差异。
优选的,所述根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号的步骤包括如下步骤:根据医生的眼睛位置坐标计算出3D显示器将要移动到的最终位置,所述3D显示器的最终位置通过如下步骤计算得到:利用人体眼睛瞳孔分析法从眼睛图像中识别出眼睛瞳孔位置,并将眼睛瞳孔位置与病变区域位置的两点连接成一条线段;以眼睛瞳孔位置和病变区域位置作为三角形的两个顶点,并以眼睛瞳孔位置与病变区域位置两点之间连接成的线段作为三角形的一条边,形成一个正三角;将该正三角形的第三个顶点作为3D显示器的最终位置;通过3D显示器的最终位置计算第二机械手臂将要被驱动的角度,产生并传输以所计算的角度驱动第二机械手臂移动的驱动信号。
优选的,所述产生一个照射在所述病变区域的手术导航指令的步骤包括如下步骤:以病人平躺的手术台为水平面并以所述超声波探头的位置和方向建立空间坐标系;基于该空间坐标系计算病变区域的位置坐标,并根据所述病变区域的位置坐标产生所述手术导航指令。
优选的,所述超声波探头、红外定位器、传感器单元、3D显示器和存储器均电连接至微控制器上。
本发明还提供一种3D影像手术导航机器人的控制方法,该3D影像手术导航机器人包括第一机械手臂、第二机械手臂以及机器人本体,所述第一机械手臂上设置有超声波探头,所述第二机械手臂上设置有3D显示器,所述超声波探头的外表面设置有红外定位器,所述机器人本体的外表面设置有传感器单元,所述3D影像手术导航机器人的控制方法包括步骤:控制所述第一机械手臂的超声波探头实时摄取病人手术过程中目标组织器官的超声波图像;将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域;产生一个照射在所述病变区域的手术导航指令,并控制所述红外定位器产生红外导光点;根据所述手术导航指令驱动第一机械手臂移动方向使所述红外导光点照射在所述病变区域,并控制超声波探头摄取病变区域图像;开启所述传感器单元获取医生的眼睛图像,并根据眼睛图像计算出医生的眼睛位置坐标;根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号;根据驱动信号驱动所述第二机械手臂移动使3D显示器移动至医生的眼睛前方;将目标组织器官的超声波图像和病变区域图像显示显示在3D显示器上。
优选的,所述将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域的步骤包括如下步骤:从存储器读取正常组织器官的参考图像;比较目标组织器官的超声波图像与正常组织器官的参考图像确定两者的纹理分布差异;根据所述纹理分布差异定位出目标组织器官的病变区域,所述纹理分布差异包括人体组织器官发生病变的组织结构差异、尺寸大小差异以及外形轮廓差异。
优选的,所述根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号的步骤包括如下步骤:根据医生的眼睛位置坐标计算出3D显示器将要移动到的最终位置,所述3D显示器的最终位置通过如下步骤计算得到:利用人体眼睛瞳孔分析法从眼睛图像中识别出眼睛瞳孔位置,并将眼睛瞳孔位置与病变区域位置的两点连接成一条线段;以眼睛瞳孔位置和病变区域位置作为三角形的两个顶点,并以眼睛瞳孔位置与病变区域位置两点之间连接成的线段作为三角形的一条边,形成一个正三角;将该正三角形的第三个顶点作为3D显示器的最终位置;通过3D显示器的最终位置计算第二机械手臂将要被驱动的角度,产生并传输以所计算的角度驱动第二机械手臂移动的驱动信号。
优选的,所述产生一个照射在所述病变区域的手术导航指令的步骤包括如下步骤:以病人平躺的手术台为水平面并以所述超声波探头的位置和方向建立空间坐标系;基于该空间坐标系计算病变区域的位置坐标,并根据所述病变区域的位置坐标产生所述手术导航指令。
优选的,所述手术导航指令包括所述第一机械手臂与目标组织器官的病变区域之间的距离与方向信息,所述红外导光点为一种在病人手术过程中指引医生找出组织器官病变位置的可视红外圆点。
有益效果
相较于现有技术,本发明所述3D影像手术导航机器人及其控制方法采用上述技术方案,达到了如下技术效果:通过设置在第一机械手臂上的超声波探头获取手术过程中目标组织器官的超声波图像,通过红外定位器实时跟踪定位目标组织器官的病变区域,使病变区域的位置清晰可见并自动导航到病变区域指引医生手术,从而方便医生手术并提高了手术的效率。此外,通过传感器单元自由追踪医生的眼睛位置,根据医生的眼睛位置驱动第二机械手臂自动地将3D显示器移动到医生所需的位置,并将超声波图像显示在3D显示器上供医生在手术过程作手术参考,从而提高了手术的准确性和安全性。
附图说明
图1是本发明3D影像手术导航机器人优选实施例的应用环境示意图;
图2是本发明3D影像手术导航机器人的内部电路连接示意图;
图3是本发明3D影像手术导航机器人的控制方法优选实施例的流程图。
本发明目的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
本发明的最佳实施方式
为更进一步阐述本发明为达成上述目的所采取的技术手段及功效,以下结合附图及较佳实施例,对本发明的具体实施方式、结构、特征及其功效进行详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
参照图1所示,图1是本发明3D影像手术导航机器人优选实施例的应用环境示意图。在本实施例中,所述3D影像手术导航机器人01可以放置在手术室内,所述手术室内还放置有供病人平躺进行手术的手术台02。所述医疗机器人01包括,但不仅限于,第一机械手臂1、第二机械手臂2以及机器人本体3。所述机器人本体3内设置有微控制器30、存储器31以及充电装置33,所述机器人本体01的外表面设置有传感器单元32,所述存储器31、传感器单元32以及充电装置33分别电连接至微控制器30上。
在本实施例中,所述第一机械手臂1上设置有超声波探头11,该超声波探头11电连接至微控制器30,用于在病人手术过程中通过超声波实时摄取病人组织器官的超声波图像,该超声波图像为组织器官的三维(3D)超声波图像。所述超声波探头11的外表面设置有红外定位器12,用于在病人手术过程中产生一个指引医生找出组织器官病变位置的红外导光点。
在本实施例中,所述第二机械手臂2上设置有3D显示器21,该3D显示器21电连接至微控制器30。本发明实施例将3D显示器21设置在第二机械手臂2上,从而使得3D显示器21可以随着第二机械手臂2自由地移动。在现有技术中,3D显示器21是一种能够使医生感受到立体虚拟感觉的3D显示系统,向医生提供立体观感的3D显示图像,从而提供3D显示效果。在本实施例中,所述3D显示器21可以实现为小型、轻型的3D显示模块。本发明所述的3D显示器21与具有一定移动自由度的第二机械手臂2联结,从而使得医生可以按所需移动3D显示器21。例如,在医生坐着或站立进行机器人手术的过程中,3D显示器21可以根据医生的姿势和位置需要而自动移动,第二机械手臂2可以将3D显示器21移动到医生的眼睛前方,从而为医生观看3D显示器21的图像提供最大便利的位置。
本实施例的微控制器30可以驱动第一机械手臂1和第二机械手臂2按照一定的角度和方向自由移动。所述微控制器30可以为嵌入机器人本体3中的微处理器、微控制器单元(MCU)等,该微控制器30可以接收来自传感器单元32感测到医生眼睛的位置信息,根据医生眼睛的位置信息确定3D显示器21将要移动到的最终位置。如果3D显示器21将要移动到的最终位置,则微控制器30可以根据医生眼睛的位置信息计算第二机械手臂2将要被驱动的角度,然后可以产生并传输以所计算的角度驱动第二机械手臂2移动的驱动信号。在本实施例中,通过3D显示器21与第二机械手臂2联结,3D显示器21可以移动到医生所需的位置,从而使医生可以采取任意的姿势进行手术并能观看到手术过程中所需的3D超声波图像,以观看3D超声波图像来辅助医生准确完成手术。
所述传感器单元32以感测医生眼睛的位置信息,并将医生眼睛的位置信息发送至微控制器30。所述微控制器30根据医生眼睛的位置信息产生驱动第二机械手臂2移动的驱动信号,第二机械手臂2自动将3D显示器21移动到医生眼睛的前方。在这种情形中,检测医生眼睛的传感器单元32可以连接到微控制器30,并且由于传感器单元32感测医生眼睛的位置并且传输相应的驱动信号,因此微控制器30可以接收所述驱动信号并驱动机第二机械手臂2。
所述传感器单元32可以为用于检测医生眼睛位置的传感器,例如,如果图像传感器用于确定医生眼睛的位置,则微控制器30可以分析图像传感器所获得的图像以计算医生眼睛所处的位置坐标,而后微控制器30可以驱动第二机械手臂2将3D显示器21移动到医生眼睛的位置。
此外,本实施例的传感器单元32不需要必须感测医生的眼睛,也可以通过检测医生的面部或不同身体部分来确定医生眼睛的位置,或者如果医生佩戴特殊的装置,则可以通过检测所述装置来确定医生眼睛的位置,然后将相应的信号传输到微控制器30。例如,医生可以在其身体的特定位置佩戴标记的同时进行机器人手术,所述标记专门用于被传感器单元32检测。然后,传感器单元32可以检测所述标记,微控制器30根据标记位置计算医生眼睛的位置,然后产生和传输的驱动信号。本实施例所述传感器单元32可以检测医生的位置或移动,微控制器30根据医生的位置追踪医生眼睛的同时在特定的范围内自动地移动3D显示器21。
参照图2所示,图2是本发明3D影像手术导航机器人的内部电路连接示意图。在本实施例中,所述超声波探头11、红外定位器12、3D显示器21、存储器31、传感器单元32和充电装置33均电连接至微控制器30。本实施例所述电连接是指各个电气元器件通过导电线、信号线、控制线的一种或多种连接至微控制器30,从而使得微控制器30能够控制上述各个电气元器件能够完成相应的功能。
在本实施例中,所述微控制器30可以为一种中央处理器(CPU)、微处理器、微控制单元芯片(MCU)、数据处理芯片、或者具有数据处理功能的控制单元。所述存储器31可以为一种只读存储单元ROM,电可擦写存储单元EEPROM或快闪存储单元FLASH等存储器。所述存储器31存储有人体正常组织器官的参考图像,以及存储预先编制的计算机程序指令,微控制器30能够从存储器31读取加载计算机程序指令并执行,以便3D影像手术导航机器人01能够为病人手术过程中提供手术指引。
在本实施例中,所述电源装置33包括可充电的锂电池331以及充电座332,所述锂电池331电连接至所述微控制器30上,用于为所述机器人01提供工作电源。所述充电座332电连接至所述锂电池331上,用于接插外部电源为所述锂电池331进行充电。
本发明还提供了一种基于3D显示功能的3D影像手术导航机器人的控制方法，应用于3D影像手术导航机器人01中。参考图3所示，图3是本发明3D影像手术导航机器人的控制方法优选实施例的流程图。在本实施例中，所述控制方法的各种方法步骤通过计算机软件程序来实现，该计算机软件程序以计算机程序指令的形式并存储于计算机可读存储介质(例如存储器31)中，存储介质可以包括：只读存储器、随机存储器、磁盘或光盘等，所述计算机程序指令能够被处理器加载并执行如下步骤S31至步骤S41。
步骤S31,控制第一机械手臂的超声波探头实时摄取病人手术过程中目标组织器官的超声波图像;具体地,微控制器30控制第一机械手臂1移动到病人的手术台02附近,并启动超声波探头11实时摄取手术台02上的病人手术过程中目标组织器官的超声波图像。在本实施例中,所述超声波探头11可以采用三维超声波探头,其通过发射金字塔型容积超声束,实时获取目标组织器官的三维超声波图像,并将三维超声波图像发送至微控制器30上。
步骤S32,将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域;具体地,微控制器30从存储器31读取正常组织器官的参考图像,并将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域。在本实施例中,所述微控制器30通过比较目标组织器官的超声波图像与正常组织器官的参考图像确定两者的纹理分布差异,以根据两者的纹理分布差异定位出目标组织器官的病变区域,所述纹理分布差异包括人体组织器官发生病变的组织结构差异、尺寸大小差异及外形轮廓差异。
步骤S33,以患者平躺的手术台为水平面并以超声波探头的位置和方向建立空间坐标系;具体地,微控制器30以病人平躺的手术台02为水平面并以超声波探头11的位置和方向建立空间坐标系。在本实施例中,所述病变区域的位置坐标包括病变区域相对于超声波探头11的位置和方向。参考图1所示,所述病理定位模块102根据病人平躺的手术台02为水平面并以超声波探头11的位置和方向建立空间坐标系XYZ。
步骤S34,基于空间坐标系计算出病变区域的位置坐标;具体地,微控制器30通过超声波探头11在空间坐标系XYZ下的位置和方向计算出超声波图像中任意一点在空间坐标系XYZ下的位置坐标,进而可以知道目标组织器官相对于超声波探头11的位置和方向。
步骤S35,根据病变区域的位置坐标产生手术导航指令;具体地,微控制器30根据所述病变区域的位置坐标产生手术导航指令。在本实施例中,所述手术导航指令包括第一机械手臂1与目标组织器官的病变区域之间的距离与方向信息。
步骤S36,控制超声波探头内的红外定位器产生红外导光点;具体地,微控制器30控制超声波探头11内的红外定位器12产生红外导光点。在本实施例中,所述红外导光点为一种用于在病人手术过程中指引医生找出组织器官病变位置的可视红外圆点。
步骤S37,根据手术导航指令驱动第一机械手臂移动方向使红外导光点照射在病变区域,并控制超声波探头摄取病变区域图像;具体地,微控制器30根据所述手术导航指令控制第一机械手臂1移动方向使红外导光点照射在目标组织器官的病变区域,进而能够辅助医生快速准确地找到目标组织器官的病变位置,从而方便医生手术并提高了手术的效率。
步骤S38,开启传感器单元32获取医生的眼睛图像,并根据眼睛图像计算出医生的眼睛位置坐标。本实施例中,医生首先用眼睛正视一次传感器单元32,以便传感器单元32感测医生的眼睛图像,微控制器30根据现有技术中的人体眼睛瞳孔分析法分析传感器单元32所获得的眼睛图像,从眼睛图像中识别出眼睛瞳孔位置并计算医生眼睛所处的位置坐标。所述人体眼睛瞳孔分析法以程序指令的形式写入存储器31中,由微控制器30读取并执行以计算出医生眼睛所处的位置坐标。
步骤S39,根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号。在本实施例中,微控制器30根据眼睛位置坐标产生驱动第二机械手臂2移动到医生眼睛前方的驱动信号。具体地,微控制器30根据医生的眼睛位置坐标计算出3D显示器21将要移动到的最终位置,通过3D显示器21的最终位置计算第二机械手臂2将要被驱动的角度,然后产生并传输以所计算的角度驱动第二机械手臂2移动的驱动信号。进一步地,所述3D显示器21的最终位置通过如下步骤计算得到:利用人体眼睛瞳孔分析法从眼睛图像中识别出眼睛瞳孔位置,将眼睛瞳孔位置与病变区域位置的两点连接成一条线段,分别以眼睛瞳孔位置和病变区域的位置作为三角形的两个顶点,并以眼睛瞳孔位置与病变区域位置两点之间连接成的线段作为三角形的一条边,形成一个正三角,将该正三角形的第三个顶点作为3D显示器的最终位置。
步骤S40,根据驱动信号驱动第二机械手臂2移动使3D显示器21移动至医生的眼睛前方。具体地,微控制器30根据驱动信号驱动第二机械手臂2移动,带动设置在第二机械手臂2的3D显示器21移动至医生的眼睛前方,从而为医生观看3D显示器21的超声波图像提供最大便利的位置,方便医生参照3D显示器21上显示的超声波图像进行手术作业。
步骤S41，将目标组织器官的超声波图像和病变区域图像显示在3D显示器21上；由于3D显示器21可以移动到医生所需的位置，从而使医生可以采取任意的姿势进行手术并能观看到手术过程中所需的超声波图像，以供医生在手术过程中作手术参考，从而提高手术的准确性和安全性。因此，在利用3D影像手术导航机器人01辅助手术时，可以根据医生的姿势和位置需要而移动3D显示器21的位置，以方便医生一边手术一边观看目标组织器官的超声波图像，从而辅助医生准确完成手术。
本发明所述3D影像手术导航机器人通过设置在第一机械手臂1上的超声波探头11获取手术过程中目标组织器官的超声波图像,通过红外定位器12实时跟踪定位目标组织器官的病变区域,使病变区域的位置清晰可见并自动导航到病变区域指引医生手术,从而方便医生手术并提高了手术的效率。此外,本发明所述3D影像手术导航机器人能够通过传感器单元32终追踪医生的眼睛位置,根据医生的眼睛位置通过设置在第二机械手臂2上可以自动地将3D显示器21移动到医生所需的位置,并将目标组织器官的超声波图像和病变区域显示在3D显示器21上供医生在手术过程作手术参考,从而提高了手术的准确性和安全性。
以上仅为本发明的优选实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效功能变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。
工业实用性
相较于现有技术,本发明所述3D影像手术导航机器人及其控制方法采用上述技术方案,达到了如下技术效果:通过设置在第一机械手臂上的超声波探头获取手术过程中目标组织器官的超声波图像,通过红外定位器实时跟踪定位目标组织器官的病变区域,使病变区域的位置清晰可见并自动导航到病变区域指引医生手术,从而方便医生手术并提高了手术的效率。此外,通过传感器单元自由追踪医生的眼睛位置,根据医生的眼睛位置驱动第二机械手臂自动地将3D显示器移动到医生所需的位置,并将超声波图像显示在3D显示器上供医生在手术过程作手术参考,从而提高了手术的准确性和安全性。

Claims (10)

  1. 一种3D影像手术导航机器人,包括第一机械手臂、第二机械手臂以及机器人本体,其特征在于,所述第一机械手臂上设置有超声波探头,所述第二机械手臂上设置有3D显示器,所述超声波探头的外表面设置有红外定位器,所述机器人本体的外表面设置有传感器单元,所述机器人本体内设置有适于实现各种程序指令的微控制器以及适于存储多条程序指令的存储器,所述程序指令由微控制器加载并执行如下步骤:控制所述第一机械手臂的超声波探头实时摄取病人手术过程中目标组织器官的超声波图像;将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域;产生一个照射在所述病变区域的手术导航指令,并控制所述红外定位器产生红外导光点;根据所述手术导航指令驱动第一机械手臂移动方向使所述红外导光点照射在所述病变区域,并控制超声波探头摄取病变区域图像;开启所述传感器单元获取医生的眼睛图像,并基于眼睛图像利用人体眼睛瞳孔分析法计算出医生的眼睛位置坐标;根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号;根据驱动信号驱动所述第二机械手臂移动使3D显示器移动至医生的眼睛前方;将目标组织器官的超声波图像和病变区域图像显示显示在3D显示器上。
  2. 如权利要求1所述的3D影像手术导航机器人,其特征在于,所述将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域的步骤包括如下步骤:从所述存储器读取正常组织器官的参考图像;比较目标组织器官的超声波图像与正常组织器官的参考图像确定两者的纹理分布差异;根据两者的纹理分布差异定位出目标组织器官的病变区域,所述纹理分布差异包括人体组织器官发生病变的组织结构差异、尺寸大小差异以及外形轮廓差异。
  3. 如权利要求1所述的3D影像手术导航机器人,其特征在于,所述根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号的步骤包括如下步骤:根据医生的眼睛位置坐标计算出3D显示器将要移动到的最终位置,所述3D显示器的最终位置通过如下步骤计算得到:利用人体眼睛瞳孔分析法从眼睛图像中识别出眼睛瞳孔位置,并将眼睛瞳孔位置与病变区域位置的两点连接成一条线段;以眼睛瞳孔位置和病变区域位置作为三角形的两个顶点,并以眼睛瞳孔位置与病变区域位置两点之间连接成的线段作为三角形的一条边,形成一个正三角;将正三角形的第三个顶点作为3D显示器的最终位置;通过3D显示器的最终位置计算第二机械手臂将要被驱动的角度,产生并传输以所计算的角度驱动第二机械手臂移动的驱动信号。
  4. 如权利要求1项所述的3D影像手术导航机器人,其特征在于,所述产生一个照射在所述病变区域的手术导航指令的步骤包括如下步骤:以病人平躺的手术台为水平面并以所述超声波探头的位置和方向建立空间坐标系;基于该空间坐标系计算病变区域的位置坐标,并根据所述病变区域的位置坐标产生所述手术导航指令。
  5. 如权利要求1至4任一项所述的3D影像手术导航机器人,其特征在于,所述超声波探头、红外定位器、传感器单元、3D显示器和存储器均电连接至微控制器上。
  6. 一种3D影像手术导航机器人的控制方法,该3D影像手术导航机器人包括第一机械手臂、第二机械手臂以及机器人本体,其特征在于,所述第一机械手臂上设置有超声波探头,所述第二机械手臂上设置有3D显示器,所述超声波探头的外表面设置有红外定位器,所述机器人本体的外表面设置有传感器单元,其中,所述3D影像手术导航机器人的控制方法包括步骤:控制所述第一机械手臂的超声波探头实时摄取病人手术过程中目标组织器官的超声波图像;将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域;产生一个照射在所述病变区域的手术导航指令,并控制所述红外定位器产生红外导光点;根据所述手术导航指令驱动第一机械手臂移动方向使所述红外导光点照射在所述病变区域,并控制超声波探头摄取病变区域图像;开启所述传感器单元获取医生的眼睛图像,并基于眼睛图像利用人体眼睛瞳孔分析法计算出医生的眼睛位置坐标;根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号;根据驱动信号驱动所述第二机械手臂移动使3D显示器移动至医生的眼睛前方;将目标组织器官的超声波图像和病变区域图像显示显示在3D显示器上。
  7. 如权利要求6所述的3D影像手术导航机器人的控制方法,其特征在于,所述将目标组织器官的超声波图像与正常组织器官的参考图像作比较定位出目标组织器官的病变区域的步骤包括如下步骤:从存储器读取正常组织器官的参考图像;比较目标组织器官的超声波图像与正常组织器官的参考图像确定两者的纹理分布差异;根据所述纹理分布差异定位出目标组织器官的病变区域,所述纹理分布差异包括人体组织器官发生病变的组织结构差异、尺寸大小差异以及外形轮廓差异。
  8. 如权利要求6所述的3D影像手术导航机器人的控制方法,其特征在于,所述根据医生的眼睛位置坐标产生驱动第二机械手臂上的3D显示器移动到医生眼睛前方的驱动信号的步骤包括如下步骤:根据医生的眼睛位置坐标计算出3D显示器将要移动到的最终位置,所述3D显示器的最终位置通过如下步骤计算得到:利用人体眼睛瞳孔分析法从眼睛图像中识别出眼睛瞳孔位置,并将眼睛瞳孔位置与病变区域位置的两点连接成一条线段;以眼睛瞳孔位置和病变区域位置作为三角形的两个顶点,并以眼睛瞳孔位置与病变区域位置两点之间连接成的线段作为三角形的一条边,形成一个正三角;将正三角形的第三个顶点作为3D显示器的最终位置;通过3D显示器的最终位置计算第二机械手臂将要被驱动的角度,产生并传输以所计算的角度驱动第二机械手臂移动的驱动信号。
  9. 如权利要求6所述的3D影像手术导航机器人的控制方法,其特征在于,所述产生一个照射在所述病变区域的手术导航指令的步骤包括如下步骤:以病人平躺的手术台为水平面并以所述超声波探头的位置和方向建立空间坐标系;基于该空间坐标系计算病变区域的位置坐标,并根据所述病变区域的位置坐标产生所述手术导航指令。
  10. 如权利要求6至9任一项所述的3D影像手术导航机器人的控制方法,其特征在于,所述手术导航指令包括所述第一机械手臂与目标组织器官的病变区域之间的距离与方向信息,所述红外导光点为一种在病人手术过程中指引医生找出组织器官病变位置的可视红外圆点。
PCT/CN2017/120169 2017-10-28 2017-12-29 3d影像手术导航机器人及其控制方法 WO2019080358A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711027808.9A CN107669340A (zh) 2017-10-28 2017-10-28 3d影像手术导航机器人及其控制方法
CN201711027808.9 2017-10-28

Publications (1)

Publication Number Publication Date
WO2019080358A1 true WO2019080358A1 (zh) 2019-05-02

Family

ID=61141975

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120169 WO2019080358A1 (zh) 2017-10-28 2017-12-29 3d影像手术导航机器人及其控制方法

Country Status (2)

Country Link
CN (1) CN107669340A (zh)
WO (1) WO2019080358A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021134160A1 (en) * 2019-12-30 2021-07-08 Fresenius Medical Care Deutschland Gmbh Method for driving a display, tracking monitor and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109758233B (zh) * 2019-01-21 2024-02-02 上海益超医疗器械有限公司 一种诊疗一体化手术机器人系统及其导航定位方法
CN109925057A (zh) * 2019-04-29 2019-06-25 苏州大学 一种基于增强现实的脊柱微创手术导航方法及系统
CN114067361B (zh) * 2021-11-16 2022-08-23 西北民族大学 一种spect成像的非病变热区切分方法与系统
CN114098818B (zh) * 2021-11-22 2024-03-26 邵靓 一种超声原始影像数据的模拟成像方法
CN116196111B (zh) * 2023-05-05 2023-10-31 北京衔微医疗科技有限公司 一种眼科手术机器人系统及其控制方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004029786A1 (en) * 2002-09-25 2004-04-08 Imperial College Innovations Limited Control of robotic manipulation
CN1720008A (zh) * 2002-12-06 2006-01-11 皇家飞利浦电子股份有限公司 用于设备的自动定位的装置和方法
CN102186434A (zh) * 2008-08-21 2011-09-14 韩商未来股份有限公司 手术机器人的3d显示系统及其控制方法
CN105943161A (zh) * 2016-06-04 2016-09-21 深圳市前海康启源科技有限公司 基于医疗机器人的手术导航系统及方法
CN106921857A (zh) * 2015-12-25 2017-07-04 珠海明医医疗科技有限公司 立体显示系统及立体显示方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004029786A1 (en) * 2002-09-25 2004-04-08 Imperial College Innovations Limited Control of robotic manipulation
CN1720008A (zh) * 2002-12-06 2006-01-11 皇家飞利浦电子股份有限公司 用于设备的自动定位的装置和方法
CN102186434A (zh) * 2008-08-21 2011-09-14 韩商未来股份有限公司 手术机器人的3d显示系统及其控制方法
CN106921857A (zh) * 2015-12-25 2017-07-04 珠海明医医疗科技有限公司 立体显示系统及立体显示方法
CN105943161A (zh) * 2016-06-04 2016-09-21 深圳市前海康启源科技有限公司 基于医疗机器人的手术导航系统及方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021134160A1 (en) * 2019-12-30 2021-07-08 Fresenius Medical Care Deutschland Gmbh Method for driving a display, tracking monitor and storage medium

Also Published As

Publication number Publication date
CN107669340A (zh) 2018-02-09

Similar Documents

Publication Publication Date Title
WO2019080358A1 (zh) 3d影像手术导航机器人及其控制方法
ES2907252T3 (es) Sistema para realizar procedimientos quirúrgicos y de intervención automatizados
US20170296292A1 (en) Systems and Methods for Surgical Imaging
AU2022206752A1 (en) Method for augmenting a representation of a surgical site
JP7469120B2 (ja) ロボット手術支援システム、ロボット手術支援システムの作動方法、及びプログラム
US11602403B2 (en) Robotic tool control
WO2017206519A1 (zh) 基于医疗机器人的手术导航系统及方法
JP2010200894A (ja) 手術支援システム及び手術ロボットシステム
JP2015528713A (ja) 手術ロボットプラットフォーム
JP2000308646A (ja) 患者の器官または治療範囲の運動を検知するための方法およびシステム
KR20140112207A (ko) 증강현실 영상 표시 시스템 및 이를 포함하는 수술 로봇 시스템
US10799146B2 (en) Interactive systems and methods for real-time laparoscopic navigation
CN111867438A (zh) 手术辅助设备、手术方法、非暂时性计算机可读介质和手术辅助系统
WO2019080317A1 (zh) 手术导航定位机器人及其控制方法
US20200113636A1 (en) Robotically-assisted surgical device, robotically-assisted surgery method, and system
Adebar et al. Registration of 3D ultrasound through an air–tissue boundary
Penza et al. Vision-guided autonomous robotic electrical bio-impedance scanning system for abnormal tissue detection
US20210298848A1 (en) Robotically-assisted surgical device, surgical robot, robotically-assisted surgical method, and system
Looi et al. KidsArm—An image-guided pediatric anastomosis robot
Escoto et al. A multi-sensory mechatronic device for localizing tumors in minimally invasive interventions
US11771508B2 (en) Robotically-assisted surgical device, robotically-assisted surgery method, and system
Adebar et al. Instrument-based calibration and remote control of intraoperative ultrasound for robot-assisted surgery
Bichlmeier et al. Evaluation of the virtual mirror as a navigational aid for augmented reality driven minimally invasive procedures
Yang et al. Enhancement of spatial orientation and haptic perception for master-slave robotic Natural Orifice Transluminal Endoscopic Surgery
Mansouri et al. Feasibility of infrared tracking of beating heart motion for robotic assisted beating heart surgery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17929463

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17929463

Country of ref document: EP

Kind code of ref document: A1