WO2023274270A1 - Robot preoperative navigation method and system, storage medium, and computer device - Google Patents


Info

Publication number
WO2023274270A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
information
surgical
surgical robot
virtual map
Prior art date
Application number
PCT/CN2022/102141
Other languages
French (fr)
Chinese (zh)
Inventor
姬亚楠
程陈
何超
Original Assignee
上海微觅医疗器械有限公司
Priority date
Filing date
Publication date
Application filed by 上海微觅医疗器械有限公司
Publication of WO2023274270A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A61B34/32 Surgical robots operating autonomously

Definitions

  • the present application relates to the technical field of medical robots, in particular to a robot preoperative navigation method, a robot preoperative navigation system, storage media and computer equipment.
  • In the related art, robot navigation commonly relies on the Global Positioning System (GPS), which plans the navigation path of the robot by matching latitude and longitude coordinates.
  • However, this method is not suitable for navigating a surgical robot inside an operating room, so it cannot achieve effective obstacle avoidance as the surgical robot moves from its initial position to the surgical operation position in the operating room.
  • a preoperative navigation method for a robot, comprising: obtaining environmental information in an operating room and displaying a virtual map based on the environmental information, wherein a mark of a starting position of a surgical robot and a mark of a surgical operation position are formed in the virtual map; and acquiring first interaction information and generating a planned path of the surgical robot in the virtual map based on the first interaction information, the planned path being used for the surgical robot to move in the operating room from the starting position to the surgical operation position.
  • the virtual map is superimposed and displayed in the real scene.
  • the virtual map includes a grid mark, and the mark of the starting position and the mark of the surgical operation position are located at intersections of the grid mark; the planned path connects the mark of the starting position and the mark of the surgical operation position through the intersections marked as obstacle-free in the grid mark.
  • the acquiring of the environmental information in the operating room and the displaying of the virtual map based on the environmental information includes: obtaining the environmental information from multiple pixel images taken at different positions in the operating room; and constructing and displaying a virtual map with a grid mark according to the environmental information, wherein some intersection points in the grid mark display the environmental information.
  • the environmental information includes position coordinate information of obstacles in the operating room and of the starting position; the constructing and displaying of a virtual map with a grid mark according to the environmental information includes: based on the position coordinate information of the surgical operation position, of the obstacles, and of the starting position, expanding coordinate information of a plurality of virtual points that do not overlap with any of the position coordinate information; and using the coordinate information of the virtual points and each piece of position coordinate information as intersection points of the grid mark, generating and displaying the virtual map with the grid mark.
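  • As a rough illustration of the virtual-point expansion described above (the patent does not give a concrete algorithm; the function name, lattice spacing, and overlap threshold below are all assumptions), one might sample a regular lattice and keep only points that do not coincide with a known position:

```python
import math

def build_grid_points(known_points, x_range, y_range, step=0.5, min_sep=0.25):
    """Expand virtual points on a regular lattice, skipping any lattice point
    that would overlap a known position (start, surgical operation position,
    or obstacle), so every known position keeps its own grid intersection."""
    points = list(known_points)
    x = x_range[0]
    while x <= x_range[1]:
        y = y_range[0]
        while y <= y_range[1]:
            if all(math.hypot(x - kx, y - ky) >= min_sep for kx, ky in known_points):
                points.append((x, y))
            y += step
        x += step
    return points

# start, surgical operation position, and one obstacle (coordinates assumed)
known = [(0.0, 0.0), (2.0, 2.0), (1.0, 1.0)]
grid = build_grid_points(known, (0.0, 2.0), (0.0, 2.0), step=1.0)
```

  • The resulting point set would then be handed to the triangulation step to form the grid mark.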
  • the shape of the grid in the virtual map with the grid mark includes a triangle.
  • the generating of the planned path of the surgical robot in the virtual map based on the first interaction information includes: determining, based on the first interaction information, at least one piece of pose change information relative to the starting position to obtain the planned path of the surgical robot; and displaying the planned path on the virtual map.
  • the method further includes: acquiring second interaction information, and based on the second interaction information, adjusting the motion state of the surgical robot and/or giving prompt information.
  • the robot preoperative navigation method further includes: projecting the planned path in the operating room between the starting position and the surgical operation position according to a corresponding scale.
  • the situations in which the motion state needs to be adjusted during the movement of the surgical robot include: the distance between the surgical robot and an obstacle is less than a preset safety distance, and/or the trajectory of the surgical robot deviates from the planned path.
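  • The two adjustment conditions above can be checked with a simple predicate. This is only a sketch under assumed names and thresholds; in particular, deviation from the planned path is approximated here by the distance to the nearest recorded waypoint rather than to the path segments themselves:

```python
import math

def needs_adjustment(robot_pos, obstacles, waypoints,
                     safety_dist=0.5, max_deviation=0.2):
    """True when the motion state should be adjusted: an obstacle is within
    the preset safety distance, or the robot has drifted from the planned
    path (approximated by the distance to the nearest recorded waypoint)."""
    too_close = any(math.dist(robot_pos, ob) < safety_dist for ob in obstacles)
    drift = min(math.dist(robot_pos, wp) for wp in waypoints)
    return too_close or drift > max_deviation
```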
  • the method further includes: controlling the movement of the surgical robot and/or controlling the position of the mechanical arm of the surgical robot based on the planned path.
  • a robot preoperative navigation system comprising: a control processing device and a human-computer interaction device connected in communication; the human-computer interaction device is used to display the virtual map and to obtain the user's first interaction information; the control processing device is used to execute the robot preoperative navigation method described in any one of the above.
  • the human-computer interaction device includes an AR device, and the AR device is configured to superimpose and display the virtual map in a real scene.
  • the control processing device includes a position adjustment unit and a camera; the camera is arranged on the position adjustment unit, and the position adjustment unit is used to adjust the shooting angle of the camera; the camera collects multiple pixel images taken at different positions in the operating room to obtain the environmental information.
  • the human-computer interaction device projects the planned path in the operating room between the starting position and the surgical operation position according to a corresponding scale.
  • the control processing device obtains the second interaction information based on the gesture information, and based on the second interaction information adjusts the motion state of the surgical robot and/or gives prompt information.
  • the human-computer interaction device acquires the first interaction information by collecting gesture images.
  • a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any one of the above methods are implemented.
  • a computer device including a memory and a processor; the memory stores a computer program that can run on the processor, and the processor implements the steps of any one of the above methods when executing the computer program.
  • the above robot preoperative navigation method, robot preoperative navigation system, storage medium, and computer equipment obtain environmental information in the operating room to display a virtual map in the operating room.
  • the operator uses gestures to plan the navigation path in advance on the virtual map, and the surgical robot, moving along the planned path, can automatically and quickly move into place while avoiding indoor obstacles, thereby reducing robot collisions and the robot damage rate; this method can be applied to indoor navigation.
  • Fig. 1 is a flowchart of the robot preoperative navigation method provided in an embodiment
  • Fig. 2 is a schematic diagram of a robot system provided in an embodiment
  • Fig. 3 is a schematic diagram of the preoperative movement process of the surgical robot provided in an embodiment
  • Fig. 4 is a flowchart of the specific steps of step S11 provided in an embodiment
  • Fig. 5 is a schematic structural diagram of a control processing device provided in an embodiment
  • Fig. 6 is a flowchart of the specific steps of step S112 provided in an embodiment
  • Figs. 7a to 7d are schematic diagrams of the principles of the gridded navigation path planning method provided in an embodiment
  • Fig. 8 is a flowchart of the specific steps of step S12 provided in an embodiment
  • Fig. 9a is a schematic structural diagram of AR glasses with a binocular camera provided in an embodiment
  • Fig. 9b is a schematic diagram of the relative relationship between the coordinate systems of the AR glasses and the binocular camera provided in an embodiment
  • Fig. 9c is a schematic diagram of the principle of binocular vision provided in an embodiment
  • Fig. 10 is a flowchart of the specific steps of step S122 provided in an embodiment
  • Fig. 11a is a schematic diagram of an implementation of histogram-based segmentation provided in an embodiment
  • Fig. 11b is a schematic diagram of an implementation of segmentation based on local area information provided in an embodiment
  • Fig. 12 is a flowchart of further steps included in the robot preoperative navigation method provided in an embodiment
  • Fig. 13 is a schematic diagram of an operator gesture adjusting the motion state of the robot provided in an embodiment
  • Fig. 14 is a schematic diagram of the principle of driving forward provided in an embodiment
  • Fig. 15 is a structural block diagram of a robot preoperative navigation system provided in an embodiment.
  • Fig. 1 is a flow chart of a robot preoperative navigation method in an embodiment.
  • the robotic preoperative navigation method consists of the following steps:
  • Step S11 acquiring environmental information in the operating room, and displaying a virtual map based on the environmental information; the virtual map is formed with a mark of the starting position of the surgical robot and a mark of the surgical operation position.
  • the robot preoperative navigation method can be implemented by using a robot preoperative navigation system, which is used to navigate the preoperative movement of the surgical robot in the robot system in the operating room.
  • the robotic system is located in the operating room.
  • the robot system may include a doctor operating terminal 12, a patient operating terminal (ie, a surgical robot) 11, a vision platform 13, and surgical instruments 14, among others.
  • the operating room may also include a patient table 15 and a patient on the patient table 15 .
  • the mechanical arm of the surgical robot 11 can be used to connect the surgical instrument 14, so that the surgical robot 11 can be used to assist the doctor in the operation.
  • the environment information in the operating room may include position coordinate information of obstacles in the operating room and starting positions.
  • objects and people other than the surgical robot 11 can be referred to as obstacles in the preoperative movement of the surgical robot 11.
  • the starting position may be the current position of the surgical robot 11 .
  • the surgical operation location may be the location of the patient's lesion.
  • the robot preoperative navigation system may include a control processing device and a human-computer interaction device.
  • the control and processing device is used to obtain environmental information in the operating room, and reconstruct a virtual map in the operating room based on the environmental information.
  • the virtual map can be a planar image or a stereoscopic image.
  • the obstacles and the surgical robot 11 in the virtual map may be similar to the corresponding real objects, or may be replaced by corresponding symbols.
  • the obstacles in the virtual map may be called obstacle marks, and the obstacle marks correspond to the obstacles in the operating room in the real scene.
  • the surgical robot 11 in the virtual map may be called a surgical robot mark, and the surgical robot mark corresponds to the surgical robot 11 in the operating room of the real scene.
  • the distance between each obstacle in the virtual map and the surgical robot 11 can be reduced in direct proportion to the corresponding actual distance.
  • the mark of the starting position of the surgical robot 11 and the mark of the surgical operation position are formed in the virtual map.
  • the starting position in the operating room corresponds to the mark of the starting position on the virtual map
  • the operation operation position in the operating room corresponds to the mark of the operation operation position on the virtual map.
  • the human-computer interaction device may be connected in communication with the control processing device, and the human-computer interaction device acquires and displays a virtual map from the control processing device, and detects the first interaction information input by the user.
  • the human-computer interaction device is configured with, for example, a monocular/binocular camera, a touch screen, or a keyboard and mouse, and the first interaction information is obtained by using the human-computer interaction device.
  • the first interaction information includes setting at least one of the following information on the virtual map: a mark of the surgical operation location, a planned route, a posture at the surgical operation location, and the like.
  • the first interaction information is determined based on at least one image containing human body gestures, or determined by detecting at least one of the user's swipe operation, click operation, and press operation on the human-computer interaction device .
  • the human body gestures include: hand gestures, eyeball gestures, or lower limb gestures.
  • the human-computer interaction device detects at least one image containing the posture of the human body to obtain the mark of the surgical operation position corresponding to the posture of the human body and the posture after reaching the surgical operation position.
  • the human-computer interaction device determines the planned route and the surgical operation position by detecting a track drawn from the starting position mark on the touch screen and the end point of that track, and determines the posture of the surgical robot 11 at the surgical operation position by detecting the gesture option selected at the end position on the touch screen.
  • Step S12 acquiring the first interaction information, and generating a planned path of the surgical robot in the virtual map based on the first interaction information; the planned path is used for the surgical robot to move from the initial position to the surgical operation position in the operating room.
  • before the operation, because of instrument preparation and patient transfer, the surgical robot 11 is kept relatively far from the patient table 15. After this preparatory work is completed, the surgical robot 11 needs to move to a position relatively close to the patient table 15 and use its robotic arm to control the surgical instrument 14.
  • the robotic arm can also control a laparoscope and other medical imaging equipment to help the doctor complete the operation smoothly. Therefore, before the operation, the surgical robot 11 needs to be moved to a suitable position around the patient so that the mechanical arm can be positioned.
  • the navigation path is planned in advance before the surgical robot 11 is moved, and the surgical robot 11 can be controlled to move automatically according to the planned path to place the robotic arm at the correct surgical position and posture.
  • the operator can use gestures to connect the mark of the starting position and the mark of the surgical operation position, and avoid obstacles on the virtual map during the connection process.
  • the control processing device generates the planned path of the surgical robot 11 in the virtual map based on the first interaction information.
  • the human-computer interaction device can display the planned path on the virtual map, and the planned path connects the mark of the starting position and the mark of the surgical operation position on the virtual map.
  • a driver of the robot may be integrated inside the surgical robot 11 .
  • the control processing device can transmit the planned path to the driver of the surgical robot 11 through wireless communication technologies such as Bluetooth or Wi-Fi, so that the driver can control the surgical robot 11 to move autonomously in the real scene from the starting position to the surgical operation position according to the planned path.
  • the above robot preoperative navigation method obtains the environmental information in the operating room to display the virtual map of the operating room.
  • the operator uses gestures to plan the navigation path in advance on the virtual map, and the surgical robot 11 can automatically and quickly move into place along the planned path while avoiding obstacles in the operating room, thereby reducing collisions of the surgical robot 11 and its damage rate.
  • This method can be applied to navigation in the operating room.
  • the patient operation end is integrated on the surgical robot 11, and the planned path is used to control the surgical robot 11 to move automatically from the starting position in the real scene to the surgical operation position without manually pushing the patient operation end, thereby reducing the cost of manual operation and saving time and effort.
  • the virtual map can be overlaid and displayed in the real scene, so that it is convenient for the operator to plan a navigation route on the virtual map through gestures.
  • a virtual map may also be displayed through a display screen or the like.
  • the human-computer interaction device may include an Augmented Reality (AR) device.
  • the operator can wear AR devices such as AR glasses.
  • the AR device can communicate with the control processing device, and the control processing device can transmit the virtual map to the AR device, so that the virtual map can be superimposed and displayed in the real scene through the AR device.
  • the virtual map includes grid markers.
  • the mark of the starting position, the mark of the surgical operation position, and each obstacle can be located at different intersections of the grid mark; or it can be configured such that the marks of the starting position and the surgical operation position are located at different intersections of the grid mark while each obstacle is distributed on the grid.
  • the operator can connect the mark of the starting position and the mark of the surgical operation position through the intersections marked as obstacle-free in the grid mark on the virtual map to form a planned path, so that the surgical robot 11 can avoid obstacles in the operating room while moving along the planned path.
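  • Connecting the two marks through obstacle-free intersections is, in effect, a graph search over the grid. A minimal sketch (the node labels, adjacency structure, and use of breadth-first search are assumptions; the patent leaves the route choice to the operator's gesture or to any planner):

```python
from collections import deque

def plan_path(start, goal, free_nodes, edges):
    """Breadth-first search from the starting-position mark to the surgical
    operation mark, expanding only intersections not marked as obstacles.
    Returns the node sequence, or None if no obstacle-free route exists."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in edges.get(node, ()):
            if nxt in free_nodes and nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# toy 2x2 grid: intersections A..D, with B occupied by an obstacle
edges = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
route = plan_path('A', 'D', free_nodes={'A', 'C', 'D'}, edges=edges)  # ['A', 'C', 'D']
```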
  • step S11 may specifically include steps S111 to S112.
  • Step S111 using multiple pixel images taken at different positions in the operating room to obtain environmental information.
  • the control processing device may include a camera 311 and a position adjustment unit 312 .
  • the camera 311 may be disposed on a position adjustment unit 312 , and the position adjustment unit 312 is used to adjust the shooting angle of the camera 311 .
  • the camera 311 can realize 360-degree rotation and scan the environment in the operating room so as to collect more complete pixel images in the operating room.
  • the control processing device obtains multiple pixel images taken at different positions in the operating room through the camera 311, and environmental information can be obtained according to these pixel images.
  • the control processing device can directly identify the surgical robot 11, the patient's lesion, and other obstacles in the multiple pixel images, so as to obtain the position coordinate information of the starting position, the surgical operation position, and the obstacles respectively. It can also be configured so that the operator determines the coordinate information of the surgical operation position from these pixel images and inputs it to the control processing device through an input device.
  • the control processing device may include a depth data measuring device that directly measures the position coordinates of each obstacle in the operating room, the current position coordinates of the surgical robot, and the position coordinates of the patient's lesion, so that the environmental information can be obtained from the measured data.
  • Step S112 constructing a virtual map with grid marks according to the environment information; wherein, some intersections in the grid marks display the environment information.
  • the shape of the grid in the grid identifier can be set according to actual requirements.
  • the shape of the grid in the virtual map with the grid mark includes a triangle.
  • step S112 may specifically include steps S1121 to S1122.
  • Step S1121 based on the position coordinate information of the surgical operation position and the position coordinate information of the obstacle and the initial position, expand the coordinate information of multiple virtual points that do not overlap with each position coordinate information.
  • Step S1122 using the coordinate information of each virtual point and each location coordinate information as intersection points marked by grids to generate a virtual map with grid marks.
  • the control processing device may establish an initial virtual map of the operating room according to the environment information.
  • the initial virtual map may include images reduced in proportion to the surgical robot 11 and each obstacle, and the distance between the surgical robot 11 and each obstacle may also be reduced in proportion.
  • the surgical robot 11 and various obstacles in the initial virtual map can also be replaced by corresponding symbols.
  • the initial virtual map does not contain grid marks.
  • the control processing device can establish a grid mark according to the environmental information and then integrate the grid mark into the initial virtual map, so that the mark of the starting position, the mark of the surgical operation position, and each obstacle on the initial virtual map each coincide with an intersection of the grid mark, forming a virtual map with a grid mark.
  • the method of forming the grid mark is described by taking the shape of the grid in the grid mark as a triangle as an example.
  • the coordinate information of the extended virtual point can be calculated based on the position coordinate information of the surgical operation position and the position coordinate information of the obstacle and the starting position and using the triangulation (Delaunay) algorithm of the point set.
  • a discrete point set is generated according to the coordinate information of the virtual point and the coordinate information of each position.
  • Fig. 7b shows the Delaunay algorithm generating a triangular mesh image, that is, a grid mark, from the set of discrete points.
  • the grid mark can also be simplified to reduce the amount of calculation and improve the efficiency of generating the subsequent planned path. It should be noted that, both before and after simplification, the discrete points corresponding to the starting position and the surgical operation position remain at intersections of the grid.
  • A. Determining point p3: suppose there are two points p1 and p2. We call p3 a visible point of the line segment p1p2 when the following three conditions hold: (1) p3 is on the right side of edge p1p2 (vertex order is clockwise); (2) p3 and p1 are mutually visible, that is, edge p1p3 does not intersect any constraint edge; (3) p3 and p2 are mutually visible.
  • Step1: construct the circumcircle C(p1, p2, p3) of △p1p2p3 and its grid bounding box B(C(p1, p2, p3));
  • Step3: if all grid units in the current grid bounding box have been marked as visited, that is, there is no visible point inside C(p1, p2, p3), then p3 is the DT point of p1p2.
  • Step1: take any outer boundary edge p1p2.
  • Step2: calculate the DT point p3 to form a constrained Delaunay triangle △p1p2p3.
  • Step3: if the newly generated edge p1p3 is not a constraint edge, delete it from the stack if it is already there; otherwise, push it onto the stack; edge p3p2 is processed in the same way.
  • Step4: if the stack is not empty, take an edge from it and go to Step2; otherwise, the algorithm stops.
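  • The core geometric test in the steps above is whether the circumcircle C(p1, p2, p3) contains any other visible point. A self-contained sketch of that test (the grid-bounding-box acceleration from Step1 is omitted here for brevity; function names are assumptions):

```python
import math

def circumcircle(p1, p2, p3):
    """Center and radius of the circumcircle C(p1, p2, p3) of triangle p1p2p3."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def is_dt_point(p1, p2, p3, other_points):
    """p3 is the DT point of edge p1p2 when no other point falls strictly
    inside the circumcircle of triangle p1p2p3 (empty-circle property)."""
    (ux, uy), r = circumcircle(p1, p2, p3)
    return all(math.hypot(px - ux, py - uy) >= r - 1e-9 for px, py in other_points)
```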
  • the grid identifier may also be directly constructed according to each location coordinate in the environment information.
  • the grid spacing can be adjusted according to the interval between the coordinates of each position. In this way, a virtual map with grid marks can also be obtained without expanding the virtual points.
  • step S12 specifically includes steps S121 to S124.
  • Step S121 acquiring gesture images.
  • Step S122 obtaining first interaction information based on the gesture image.
  • Step S123 based on the first interaction information, determine at least one pose change information relative to the starting position, so as to obtain a planned path of the surgical robot.
  • Step S124 displaying the planned route on the virtual map.
  • Fig. 7d: after the grid mark is integrated into the initial virtual map and displayed, the operator can use gestures to start from the starting position on the virtual map and connect the desired points in turn (the bold line in Fig. 7d is the planned route formed by the operator connecting the desired points through gestures; point A corresponds to the starting position and point B corresponds to the surgical operation position).
  • the first interaction information is information about the operator's hand posture.
  • the upper end of the AR device 20 can be connected to the embedded binocular camera 21 through a printed circuit board (PCB) to collect the operator's gesture images and transmit them to the control processing device.
  • the control processing device obtains the first interaction information from the gesture image and determines at least one pose change information relative to the starting position, so as to obtain the planned path of the surgical robot. For example, when the planned path of the surgical robot 11 is a straight line, only one pose change information relative to the starting position needs to be determined; when the planned path is not a straight line, multiple pose change information items relative to the starting position are needed.
  • the control processing device can record all coordinates and routes on the planned route, and transmit the recorded data to the human-computer interaction device, so that the human-computer interaction device can superimpose the planned route on the corresponding position on the virtual image for display.
  • the camera coordinate system (X5, Y5, Z5) and the display coordinate system (X3, Y3, Z3) can establish a mapping relationship through the mechanical mounting position.
  • the camera coordinate system (X5, Y5, Z5) and the world coordinate system (X0, Y0, Z0) can establish a mapping relationship through the rotation matrix R and the translation vector t, as shown in formula (1).
  • (xc, yc, zc) is the coordinate value of point P in the camera coordinate system;
  • (xw, yw, zw) is the coordinate value of point P in the world coordinate system.
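With R and t as defined here, formula (1) is the standard rigid-body mapping p_c = R p_w + t. A minimal numeric sketch (the rotation angle and translation are chosen purely for illustration):

```python
import math

def world_to_camera(p_w, R, t):
    # [xc, yc, zc]^T = R @ [xw, yw, zw]^T + t   (the standard form of formula (1))
    return tuple(sum(R[i][j] * p_w[j] for j in range(3)) + t[i] for i in range(3))

theta = math.pi / 2                       # illustrative: 90 degree yaw about Z
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]
t = [1.0, 0.0, 0.0]                       # illustrative translation

p_c = world_to_camera((1.0, 0.0, 0.0), R, t)   # a world point on the X axis
```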
  • the binocular camera 21 includes a left camera and a right camera; the distance between the left camera and the right camera is b, and the distance from the left and right cameras to the x axis is f. For a point P(x, y, z):
  • the distance from the intersection of the line connecting the left camera and point P with the x-axis to the z-axis is x_l;
  • the distance from that intersection point to the y-axis is y_l;
  • the distance from the point P(x, y, z) to the x-axis is y_l;
  • the distance from the intersection of the line connecting the right camera and point P with the x-axis to the line through the right camera parallel to the z-axis is x_r;
  • the distance from point P to the line through the right camera parallel to the z-axis is (x - b).
  • step S122 specifically includes steps S1221 to S1224.
  • Step S1221 preprocessing the gesture image to obtain a gesture contour image.
  • Step S1222 extracting geometric moment features of the gesture contour image.
  • Step S1223 based on the geometric moment feature of the gesture contour image, calculate the distance between the gesture images at different angles at the same moment.
  • Step S1224 based on the distance between the gesture images at different angles at the same moment, the gesture at that moment is recognized, so as to obtain first interaction information.
  • a recognition algorithm of geometric moments and edge detection may be used for preprocessing.
  • the distance between images is calculated by setting the weight of the geometric moment feature, and then the gesture is recognized.
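The weighted-distance comparison in S1223/S1224 can be sketched as follows; the moment feature vectors and weights below are hypothetical stand-ins (real features would come from the geometric moments of the two contour images):

```python
# Sketch of S1223/S1224: compare the gesture seen by the two cameras at the
# same moment via a weighted distance over geometric-moment feature vectors.
def weighted_distance(feat_a, feat_b, weights):
    return sum(w * abs(a - b) for w, a, b in zip(weights, feat_a, feat_b))

left = [0.32, 0.011, 0.0040]     # hypothetical moment features, left image
right = [0.30, 0.012, 0.0004]    # same gesture seen by the right camera
weights = [1.0, 10.0, 100.0]     # weight higher-order (smaller) moments up

d = weighted_distance(left, right, weights)
is_same_gesture = d < 0.5        # illustrative decision threshold
```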
  • This method uses two or more cameras (in this embodiment, the left and right cameras of the binocular camera 21) to acquire images simultaneously, much as humans observe the world with two eyes or insects with compound eyes. By comparing the differences between the images obtained by these cameras at the same moment, an algorithm calculates the depth information, thereby achieving multi-view 3D imaging.
  • the preprocessing method may specifically include performing any one of histogram-based segmentation, local area information-based segmentation, and physical feature-based segmentation on the gesture image.
  • the three preprocessing methods are illustrated below.
  • the peak-valley structure can be determined well through histogram preprocessing and contour tracking, so as to find a reasonable segmentation threshold. As long as the image histogram has a multi-peak structure and an ideal segmentation threshold exists, this method gives a good segmentation result.
  • contour extraction generally obtains the coordinate information of boundary points through edge detection. A typical choice is the eight-neighborhood search algorithm for extracting the coordinates of gesture boundary points: each point has eight adjacent points, and if one point is taken as the starting boundary point, the next boundary point must lie within its eight neighbors, so a closed contour can be extracted by tracking.
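The eight-neighborhood idea can be sketched in simplified form: rather than the full tracking walk, this version marks a foreground pixel as a boundary point whenever any of its eight neighbors is background, which yields the same set of points the tracker would visit on a simple shape (the test image is illustrative):

```python
# Simplified eight-neighbourhood boundary extraction: a foreground pixel is
# a boundary point if any of its eight neighbours is background (or lies
# outside the image).
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def boundary_points(image):
    h, w = len(image), len(image[0])
    pts = []
    for r in range(h):
        for c in range(w):
            if image[r][c] and any(
                not (0 <= r + dr < h and 0 <= c + dc < w) or not image[r + dr][c + dc]
                for dr, dc in NEIGHBOURS):
                pts.append((r, c))
    return pts

# a 4x4 solid square inside a 6x6 image: only its outer ring is boundary
img = [[1 if 1 <= r <= 4 and 1 <= c <= 4 else 0 for c in range(6)] for r in range(6)]
ring = boundary_points(img)
```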
  • the skin color is extracted through the YCbCr color space and skin-color modeling based on a Gaussian model, and motion information analysis is performed through an image difference operation to remove skin-like background from the image, improving the accuracy of gesture segmentation.
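The YCbCr skin test can be sketched with a simple threshold variant in place of the Gaussian model (the BT.601 full-range conversion is standard; the Cb/Cr skin ranges below are commonly used illustrative values, not taken from the patent):

```python
# Sketch of a YCbCr skin test: convert RGB to Cb/Cr with the BT.601
# full-range formulas and check against illustrative skin-colour ranges
# (a stand-in for the Gaussian skin-colour model described in the text).
def is_skin(r, g, b):
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

skin_pixel = is_skin(200, 150, 120)    # warm skin-like tone
green_pixel = is_skin(30, 180, 40)     # saturated green background
```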
  • the robot preoperative navigation method further includes steps S13 to S16.
  • Step S13 control the surgical robot to move from the initial position to the surgical operation position in the operating room based on the planned path.
  • the driver of the surgical robot 11 can, based on the planned path, control the movement of the surgical robot 11 and/or the positioning of its robotic arm, so that the control center of the robotic arm of the surgical robot 11 moves from its current position in the operating room to the surgical operation position.
  • Step S14 judging whether the movement state of the surgical robot needs to be adjusted during the movement process.
  • the robot preoperative navigation system can automatically determine whether the motion state of the surgical robot 11 needs to be adjusted, and when adjustment is needed, a prompt can be sent to the operator so that the operator performs step S15. Alternatively, the operator may judge during the motion of the surgical robot 11 whether the motion state needs to be adjusted, and step S15 is executed when adjustment is judged necessary.
  • the need to adjust the motion state during the movement of the surgical robot 11 may include the distance between the surgical robot 11 and an obstacle being less than a preset safety distance and/or the trajectory of the surgical robot 11 deviating from the planned path. If it is determined that the motion state does not need to be adjusted during the movement, step S16 may be performed: the surgical robot 11 continues to move along the planned path until reaching the surgical operation position.
  • Step S15 acquiring second interaction information, and adjusting the motion state of the surgical robot based on the second interaction information, and/or giving prompt information.
  • the second interaction information is information detected by the human-computer interaction device and used for adjusting a part of the path of the surgical robot 11 during the movement of the surgical robot 11 .
  • the second interaction information and the first interaction information correspond to different working modes of the surgical robot 11. For example, when the surgical robot 11 works in the stop mode, the first interaction information is acquired; when it works in the moving mode, the second interaction information is acquired.
  • the second interaction information and the first interaction information are information represented by different detection signals or different image features.
  • the second interaction information is at least one image containing a left-turn (or right-turn) gesture.
  • the second interaction information is information generated by clicking a turn left (or turn right) button.
  • a binocular camera 21 (i.e., a binocular vision module) can collect the adjustment gesture images.
  • the control processing device may acquire second interaction information according to the adjustment gesture image, and adjust the motion state of the surgical robot 11 based on the second interaction information.
  • control processing device may give prompt information based on the second interaction information, for example, when the distance between the surgical robot 11 and the obstacle is less than a preset safety distance, the control processing device may issue a voice prompt.
  • steps S13 and S14 may not be executed by the control processing device, and the control processing device executes step S15 during the movement of the surgical robot 11 from the initial position to the surgical operation position.
  • the robot preoperative navigation method may also include: acquiring the position information of the surgical robot 11, and judging, based on the position information of the surgical robot 11, whether the motion state of the surgical robot 11 needs to be adjusted; if the motion state needs to be adjusted during the motion, a prompt message is output.
  • the position information of the surgical robot 11 may include the real-time distance between the surgical robot 11 and the obstacle closest to it during movement.
  • a distance measuring device such as an ultrasonic distance measuring device can be installed on the surgical robot 11 to obtain the position information of the surgical robot 11 , and transmit the position information of the surgical robot 11 to the control processing device.
  • after the control processing device acquires the position information of the surgical robot 11, it judges based on that information whether the motion state of the surgical robot 11 needs to be adjusted. For example, it can be configured that when the distance between the surgical robot 11 and its nearest obstacle is less than a preset safety distance, it is determined that the motion state needs to be adjusted during motion.
  • the surgical robot 11 or the control processing device can also be provided with prompting devices such as warning lamps or buzzers.
  • when the control processing device determines that the motion state of the surgical robot 11 needs to be adjusted during movement, it can output prompt information through the prompting device, so that the operator knows that adjustment gestures need to be issued to control the surgical robot 11 to adjust its motion state and avoid the obstacle.
  • the operator can specifically control the movement direction of the surgical robot 11 when using adjustment gestures, such as controlling the robot to turn left or right, or to move backward or forward. If the operator receives no prompt, the control processing device may control the surgical robot 11 to continue moving along the planned path until reaching the surgical operation position. In other examples, the ranging device may also measure the position coordinates of the surgical robot 11 in real time and transmit them to the control processing device. The control processing device can determine whether the position coordinates of the surgical robot 11 lie on the planned path, and if they deviate from it, it can also control the prompting device to issue a prompt.
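The check of whether the robot's coordinates lie "on the planned path" can be sketched as a point-to-segment distance test (the path, position, and tolerance values below are illustrative):

```python
# Sketch: the robot is considered on the planned path if its position lies
# within a tolerance of some segment of the path (point-to-segment distance).
def point_segment_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # clamp the projection of p onto the segment to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def on_path(pos, path, tol=0.1):
    return any(point_segment_dist(pos, path[i], path[i + 1]) <= tol
               for i in range(len(path) - 1))

path = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0)]
ok = on_path((2.0, 0.05), path)    # slightly off the first segment
off = on_path((2.0, 1.0), path)    # well away from every segment
```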
  • the distance measuring device collects the position information of the robot by means of ultrasonic distance measurement.
  • the specific principles are as follows:
  • Ultrasonic distance measurement is realized by the ultrasonic pulse echo transit-time method. Assuming that the time elapsed from emission of the ultrasonic pulse by the sensor to its reception is t, and the propagation speed of the ultrasonic wave in air is c, the distance D from the sensor to the obstacle can be obtained by formula (6): D = ct/2.
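In the standard pulse-echo formulation the pulse travels out to the obstacle and back, so the one-way distance is half of speed times transit time. A minimal check (the speed-of-sound value is illustrative):

```python
# Pulse-echo transit time: round trip covers 2D, so D = c * t / 2.
def echo_distance(t_seconds, c=340.0):   # c: speed of sound in air, m/s
    return c * t_seconds / 2.0

d = echo_distance(0.01)   # a 10 ms round trip
```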
  • the prompting device will send out an alarm message.
  • subsequently, the operator can adjust the movement direction of the patient operating end through gestures. For example, the prompt distance may be set to 5 cm: when the distance between the patient operating end and an obstacle is less than 5 cm, the buzzer in the prompting device sounds automatically and the red light flashes, and the movement direction must then be adjusted through gestures.
  • the robot preoperative navigation method may also include: projecting the planned path, at the corresponding scale, between the starting position and the surgical operation position in the operating room.
  • the control processing device forms the virtual map from the real image of the operating room at a certain reduction ratio, so the planned path displayed on the virtual map is likewise reduced relative to the real path along which the surgical robot 11 actually moves between the starting position and the surgical operation position. When projecting the planned path onto the real scene, it therefore needs to be enlarged by the corresponding ratio so that the projected path connects the starting position and the surgical operation position in the operating room. In this way, the operator can observe in real time whether the surgical robot deviates from the planned path during its movement, and if it deviates, the operator can issue an adjustment gesture to control the surgical robot 11 to adjust its motion state.
  • a mobile platform 111 and driving wheels 112 may be provided at the bottom of the surgical robot 11 .
  • an example of the principle by which the driver drives the robot: given a point P midway between the two wheels (the distance from each wheel to point P is l), the wheel radius r, the angle θ between the robot heading and the X-axis direction, and the rotation speed of each wheel, the overall velocity of the surgical robot 11 in the global reference frame can be predicted by the forward kinematics model.
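The standard differential-drive forward kinematics gives one common form of this model (the patent does not reproduce its exact expression here, so this is a hedged sketch; wheel speeds, radius, and axle length are illustrative):

```python
import math

# Forward kinematics of a differential-drive base: wheel radius r, distance
# from each wheel to the midpoint P is l, wheel angular speeds phi1, phi2,
# heading theta. Standard result:
#   v     = r * (phi1 + phi2) / 2        linear speed at P
#   omega = r * (phi1 - phi2) / (2 * l)  yaw rate
def base_velocity(r, l, phi1, phi2, theta):
    v = r * (phi1 + phi2) / 2.0
    omega = r * (phi1 - phi2) / (2.0 * l)
    # velocity expressed in the global reference frame
    return (v * math.cos(theta), v * math.sin(theta), omega)

vx, vy, w = base_velocity(r=0.1, l=0.25, phi1=2.0, phi2=2.0, theta=0.0)
```

With equal wheel speeds the yaw rate is zero and the base translates straight along its heading, which matches the intuition behind the gesture commands for turning left or right (unequal wheel speeds).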
  • the present application also provides a robot preoperative navigation system. Please refer to FIG. 2 and FIG. 15 together.
  • the robot preoperative navigation system includes a control processing device 31 and a human-computer interaction device 32 .
  • the control processing device 31 is in communication connection with the human-computer interaction device 32 .
  • the control processing device 31 is used to acquire the environment information in the operating room, and to generate a virtual map based on the environment information, in which a mark of the starting position of the surgical robot 11 and a mark of the operation position are formed.
  • the human-computer interaction device 32 is used for displaying a virtual map, and for obtaining the first interaction information of the user.
  • the control processing device 31 is further configured to generate a planned path of the surgical robot 11 in the virtual map based on the first interaction information, and the planned path is used for the surgical robot 11 to move from a corresponding initial position in the operating room to a surgical operation position.
  • the human-computer interaction device 32 includes an AR device 20 , and the AR device 20 is used to overlay and display a virtual map in a real scene.
  • the human-computer interaction device 32 is used to collect gesture images to obtain first interaction information.
  • the AR device 20 is provided with a binocular camera 21, and the binocular camera 21 is used to collect gesture images to obtain the first interaction information.
  • the control processing device 31 includes a position adjustment unit 312 and a camera 311; the camera 311 is arranged on the position adjustment unit 312, the position adjustment unit 312 is used to adjust the shooting angle of the camera 311, and the camera 311 is used to collect multiple pixel images taken at different positions in the operating room to obtain the environment information.
  • the control processing device obtains the second interaction information based on the gesture information, and adjusts the motion state of the surgical robot 11 based on the second interaction information, and/or gives prompt information.
  • the control processing device 31 is also communicatively connected with the surgical robot 11 and is further used to control, based on the planned path, the movement of the surgical robot 11 from the initial position to the surgical operation position in the operating room. If the motion state needs to be adjusted while the surgical robot 11 is moving, the binocular camera 21 acquires gesture images, and the control processing device 31 obtains second interaction information based on the gesture information, adjusts the motion state of the surgical robot 11 based on the second interaction information, and/or gives prompt information.
  • the robot preoperative navigation system also includes a distance measuring device (not shown in the figure), the distance measuring device is arranged on the surgical robot 11, and the distance measuring device is used to measure the distance between the surgical robot 11 and the obstacle in real time;
  • the control processing device 31 is also communicatively connected with the ranging device and is further used to judge whether the distance between the surgical robot 11 and the obstacle is less than a preset safety distance, and to output a prompt message when it is.
  • the human-computer interaction device 32 is used to project the planned path between the starting position and the surgical operation position in the operating room according to a corresponding scale.
  • the robotic preoperative navigation system can also perform any steps in the above robotic preoperative navigation method.
  • Each module in the above robot preoperative navigation system can be fully or partially realized by software, hardware and combinations thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • the computer equipment may include the AR equipment 20, the control processing device 31, the driver of the surgical robot 11, and the like.
  • the present application also provides a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method described in any one of the above embodiments are implemented.
  • the present application also provides a computer device, including a memory and a processor; the processor stores a computer program that can run on the processor, and the processor implements any of the above embodiments when executing the computer program The steps of the method.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory can include Random Access Memory (RAM) or external cache memory.
  • RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Abstract

A surgical robot (11) preoperative navigation method, a robot preoperative navigation system, a storage medium, and a computer device. The preoperative navigation method comprises: acquiring environmental information in an operating room, and displaying a virtual map on the basis of the environmental information, a mark of the starting position of the surgical robot (11) and a mark of a surgical operation position being formed in the virtual map (S11); and acquiring first interaction information, and generating a planned path of the surgical robot (11) in the virtual map on the basis of the first interaction information, the planned path being used for the surgical robot (11) to move correspondingly in the operating room from the starting position to the surgical operation position (S12). The surgical robot (11) preoperative navigation method, the robot preoperative navigation system, the storage medium, and the computer device can be applied to navigation in the operating room.

Description

Robot preoperative navigation method, system, storage medium and computer device
This application claims priority to the Chinese patent application No. 2021107450216, filed with the China Patent Office on June 30, 2021 and entitled "Robot preoperative navigation method, system, storage medium and computer device", the entire content of which is incorporated herein by reference.
Technical Field
The present application relates to the technical field of medical robots, and in particular to a robot preoperative navigation method, a robot preoperative navigation system, a storage medium, and a computer device.
Background
In traditional technology, Global Positioning System (GPS) sensors are used to locate the starting position and target position of a robot and the path between them; this method plans the robot's navigation path by matching latitude-longitude coordinates. However, because indoor GPS signals are weak, this method is not suitable for navigating a surgical robot in an operating room, and therefore cannot achieve effective obstacle avoidance while the surgical robot moves from the starting position to the surgical operation position in the operating room.
Summary
Based on this, it is necessary to provide, for the above problems, a robot preoperative navigation method, a robot preoperative navigation system, a storage medium, and a computer device.
A robot preoperative navigation method includes: acquiring environment information in an operating room, and displaying a virtual map based on the environment information, wherein a mark of the starting position of a surgical robot and a mark of the surgical operation position are formed in the virtual map; and acquiring first interaction information, and generating a planned path of the surgical robot in the virtual map based on the first interaction information, the planned path being used for the surgical robot to move correspondingly in the operating room from the starting position to the surgical operation position.
In one of the embodiments, the virtual map is superimposed and displayed on the real scene.
In one of the embodiments, the virtual map contains a grid mark, and the mark of the starting position and the mark of the surgical operation position are both located at intersections of the grid mark; the planned path connects the mark of the starting position and the mark of the surgical operation position through intersections of the grid mark identified as obstacle-free.
In one of the embodiments, acquiring the environment information in the operating room and displaying the virtual map based on the environment information includes: obtaining the environment information from multiple pixel images taken at different positions in the operating room; and constructing and displaying a virtual map with a grid mark according to the environment information, wherein some intersections of the grid mark display the environment information.
In one of the embodiments, the environment information includes position coordinate information of obstacles in the operating room and of the starting position; constructing and displaying the virtual map with the grid mark according to the environment information includes: based on the position coordinate information of the mark of the surgical operation position and the position coordinate information of the obstacles and the starting position, expanding coordinate information of multiple virtual points that do not overlap any of the position coordinate information; and generating and displaying the virtual map with the grid mark, with the coordinate information of the virtual points and each piece of position coordinate information serving as intersections of the grid mark.
In one of the embodiments, the grid shapes in the virtual map with the grid mark include triangles.
In one of the embodiments, generating the planned path of the surgical robot in the virtual map based on the first interaction information includes: determining, based on the first interaction information, at least one pose change information relative to the starting position to obtain the planned path of the surgical robot; and displaying the planned path on the virtual map.
In one of the embodiments, during the movement of the surgical robot from the starting position to the surgical operation position, the method further includes: acquiring second interaction information, and adjusting the motion state of the surgical robot based on the second interaction information and/or giving prompt information.
In one of the embodiments, the robot preoperative navigation method further includes: projecting the planned path, at the corresponding scale, between the starting position and the surgical operation position in the operating room.
In one of the embodiments, the need to adjust the motion state during the movement of the surgical robot includes: the distance between the surgical robot and an obstacle being less than a preset safety distance and/or the trajectory of the surgical robot deviating from the planned path.
In one of the embodiments, the method further includes: controlling the movement of the surgical robot and/or the positioning of the robotic arm of the surgical robot based on the planned path.
A robot preoperative navigation system includes a control processing device and a human-computer interaction device, the control processing device being communicatively connected with the human-computer interaction device; the human-computer interaction device is used to display the virtual map and to acquire the first interaction information of the user; the control processing device is used to execute the robot preoperative navigation method of any one of the above.
In one of the embodiments, the human-computer interaction device includes an AR device, and the AR device is used to superimpose and display the virtual map on the real scene.
In one of the embodiments, the control processing device includes a position adjustment unit and a camera, the camera being arranged on the position adjustment unit; the position adjustment unit is used to adjust the shooting angle of the camera, and the camera is used to collect multiple pixel images taken at different positions in the operating room to obtain the environment information.
In one of the embodiments, the human-computer interaction device projects the planned path, at the corresponding scale, between the starting position and the surgical operation position in the operating room.
In one of the embodiments, during the movement of the surgical robot from the starting position to the surgical operation position in the operating room, the control processing device obtains second interaction information based on gesture information, and adjusts the motion state of the surgical robot based on the second interaction information and/or gives prompt information.
In one of the embodiments, the human-computer interaction device acquires the first interaction information by collecting gesture images.
A storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of any one of the above methods are implemented.
A computer device includes a memory and a processor; a computer program runnable on the processor is stored, and the processor implements the steps of any one of the above methods when executing the computer program.
The above robot preoperative navigation method, robot preoperative navigation system, storage medium, and computer device acquire environment information in the operating room to display a virtual map of the operating room; the operator uses gestures to plan the navigation path in advance on the virtual map, and by following the planned path the surgical robot can automatically and quickly move into place while avoiding indoor obstacles, thereby reducing robot collisions and the robot damage rate. Moreover, the method is applicable to indoor navigation.
Description of Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the conventional technology, the accompanying drawings required for describing the embodiments or the conventional technology are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a robot preoperative navigation method provided in an embodiment;
Fig. 2 is a schematic diagram of a robot system provided in an embodiment;
Fig. 3 is a schematic diagram of the preoperative movement of a surgical robot provided in an embodiment;
Fig. 4 is a flowchart of the sub-steps of step S11 provided in an embodiment;
Fig. 5 is a schematic structural diagram of a control processing device provided in an embodiment;
Fig. 6 is a flowchart of the sub-steps of step S112 provided in an embodiment;
Figs. 7a to 7d are schematic diagrams of the principle of the gridded navigation path planning method provided in an embodiment;
Fig. 8 is a flowchart of the sub-steps of step S12 provided in an embodiment;
Fig. 9a is a schematic structural diagram of AR glasses with a binocular camera provided in an embodiment;
Fig. 9b is a schematic diagram of the relative relationship between the coordinate systems of the AR glasses and the binocular camera provided in an embodiment;
Fig. 9c is a schematic diagram of the principle of binocular vision provided in an embodiment;
Fig. 10 is a flowchart of the sub-steps of step S122 provided in an embodiment;
Fig. 11a is a schematic diagram illustrating a histogram-based segmentation implementation provided in an embodiment;
Fig. 11b is a schematic diagram illustrating a segmentation implementation based on local area information provided in an embodiment;
Fig. 12 is a flowchart of further steps included in the robot preoperative navigation method provided in an embodiment;
Fig. 13 is a schematic diagram of an operator adjusting the motion state of the robot by gesture, provided in an embodiment;
Fig. 14 is a schematic diagram of the driving-forward principle provided in an embodiment;
Fig. 15 is a structural block diagram of a robot preoperative navigation system provided in an embodiment.
Detailed Description
To facilitate understanding of the present application, the present application is described more fully below with reference to the relevant drawings. Preferred embodiments of the application are shown in the drawings. However, the present application can be embodied in many different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that the disclosure of the present application will be more thorough and comprehensive.
Fig. 1 is a flowchart of a robot preoperative navigation method in an embodiment. Referring to Fig. 1, the robot preoperative navigation method includes the following steps:
Step S11: acquiring environmental information about the operating room, and displaying a virtual map based on the environmental information, where a mark of the starting position of the surgical robot and a mark of the surgical operation position are formed in the virtual map.
Specifically, the robot preoperative navigation method can be implemented by a robot preoperative navigation system, which is used to navigate the preoperative movement of the surgical robot of a robot system in the operating room. Referring to Fig. 2, the robot system is located in the operating room. The robot system may include a doctor operating terminal 12, a patient operating terminal (i.e., a surgical robot) 11, a vision platform 13, surgical instruments 14, and the like. In addition to the robot system, the operating room may also include a patient table 15 and a patient on the patient table 15. The mechanical arm of the surgical robot 11 can be used to connect the surgical instrument 14, so that the surgical robot 11 assists the doctor during surgery.
The environmental information about the operating room may include position coordinate information of the obstacles in the operating room and of the starting position. In the operating room, any object or person other than the surgical robot 11 can be regarded as an obstacle during the preoperative movement of the surgical robot 11. The starting position may be the current position of the surgical robot 11. The surgical operation position may be the location of the patient's lesion.
The robot preoperative navigation system may include a control processing device and a human-computer interaction device. The control processing device is used to acquire the environmental information about the operating room and reconstruct a virtual map of the operating room based on the environmental information. The virtual map may be a planar image or a stereoscopic image. The obstacles and the surgical robot 11 in the virtual map may resemble the corresponding real objects, or may be replaced by corresponding symbols. An obstacle in the virtual map may be called an obstacle mark, and the obstacle marks correspond to the obstacles in the real operating room. The surgical robot 11 in the virtual map may be called a surgical robot mark, which corresponds to the surgical robot 11 in the real operating room. Of course, the surgical robot mark may also be omitted from the virtual map and directly replaced by the mark of the starting position. The distances between the obstacles and the surgical robot 11 in the virtual map may be scaled down in direct proportion to the corresponding actual distances.
A mark of the starting position of the surgical robot 11 and a mark of the surgical operation position are formed in the virtual map. In the present application, the starting position in the operating room corresponds to the mark of the starting position on the virtual map, and the surgical operation position in the operating room corresponds to the mark of the surgical operation position on the virtual map.
The human-computer interaction device may be communicatively connected to the control processing device; it obtains the virtual map from the control processing device and displays it, and detects the first interaction information input by the user. The human-computer interaction device is configured with, for example, a monocular/binocular camera, a touch screen, or a keyboard and mouse, by which the first interaction information is obtained. The first interaction information includes setting at least one of the following on the virtual map: a mark of the surgical operation position, a planned route, a posture at the surgical operation position, and the like. For example, the first interaction information is determined from at least one image containing a human body posture, or by detecting at least one of a swipe operation, a click operation, and a press operation performed by the user on the human-computer interaction device. The human body posture may include, for example, a hand posture, an eyeball posture, or a lower-limb posture. By detecting at least one image containing a human body posture, the human-computer interaction device obtains the mark of the surgical operation position corresponding to the posture and the posture to be assumed after reaching the surgical operation position. As another example, the human-computer interaction device determines the planned route and the surgical operation position by detecting a track drawn on the touch screen starting from the mark of the starting position and the end point of that track, and determines the posture of the surgical robot 11 at the surgical operation position by detecting a posture option displayed at the end position on the touch screen.
Step S12: acquiring the first interaction information, and generating a planned path of the surgical robot in the virtual map based on the first interaction information; the planned path is used for the surgical robot to move correspondingly from the starting position to the surgical operation position in the operating room.
Specifically, referring to Fig. 3, before the operation, the surgical robot 11 is relatively far from the patient table 15 because instruments must be prepared and the patient wheeled in. After this series of preparations is completed, the surgical robot 11 needs to move to a position close to the patient table 15 and use the mechanical arm to manipulate the surgical instrument 14; the mechanical arm can also manipulate medical imaging equipment such as a laparoscope to help the doctor complete the operation smoothly. Therefore, before the operation, the surgical robot 11 needs to move to a suitable position beside the patient for positioning of the mechanical arm. In this embodiment, the navigation path is planned in advance before the surgical robot 11 is moved, and the surgical robot 11 can be controlled to move automatically along the planned path so as to place the mechanical arm in the correct surgical position and posture.
Based on the displayed virtual map, the operator can use gestures to connect the mark of the starting position and the mark of the surgical operation position while avoiding the obstacles on the virtual map.
The control processing device generates the planned path of the surgical robot 11 in the virtual map based on the first interaction information. The human-computer interaction device can display the planned path in the virtual map, where it connects the mark of the starting position and the mark of the surgical operation position.
A robot driver may be integrated inside the surgical robot 11. The control processing device can transmit the planned path to the driver of the surgical robot 11 via a wireless communication technology such as Bluetooth or a mobile hotspot (Wi-Fi), so that the driver can control the surgical robot 11 to move autonomously from the starting position in the real scene to the surgical operation position according to the planned path.
The above robot preoperative navigation method acquires environmental information about the operating room to display a virtual map of the operating room. The operator uses gestures to plan a navigation path on the virtual map in advance, and the surgical robot 11, moving along the planned path, can automatically and quickly move into position while avoiding obstacles in the operating room, thereby reducing collisions and lowering the damage rate of the surgical robot 11. The method is applicable to navigation within the operating room. Moreover, with the patient operating terminal integrated on the surgical robot 11, the planned path is used to control the surgical robot 11 to move automatically from the starting position in the real scene to the surgical operation position, so that no manpower is needed to push the patient operating terminal, which reduces manual operation cost and saves time and effort.
In some examples, the virtual map can be overlaid on the real scene, making it convenient for the operator to plan a navigation path on the virtual map through gestures. In other examples, the virtual map may also be displayed on a display screen or the like.
Specifically, the human-computer interaction device may include an augmented reality (AR) device. The operator can wear an AR device such as AR glasses. The AR device can be communicatively connected to the control processing device, and the control processing device can transmit the virtual map to the AR device, so that the AR device overlays the virtual map on the real scene.
In some examples, the virtual map contains a grid mark. The mark of the starting position, the mark of the surgical operation position, and the obstacles may all be located at different intersections of the grid mark; alternatively, the mark of the starting position and the mark of the surgical operation position may be located at different intersections of the grid mark while the obstacles are distributed on the grid. The operator can connect the mark of the starting position and the mark of the surgical operation position through the intersections of the grid mark that are identified as obstacle-free, so as to form the planned path, allowing the surgical robot 11 to avoid the obstacles in the operating room when moving along the planned path.
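Planning over grid intersections of this kind can be sketched as a breadth-first search that treats obstacle intersections as blocked. This is a minimal illustrative sketch, not the application's actual planner; the grid size, node coordinates, and obstacle set are invented for the example:

```python
from collections import deque

def plan_path(width, height, start, goal, obstacles):
    """Breadth-first search over grid intersections, skipping obstacle nodes.

    Returns a list of (x, y) intersections from start to goal, or None.
    """
    blocked = set(obstacles)
    queue = deque([start])
    parent = {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            # Reconstruct the planned path by walking back through parents.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nxt = (nx, ny)
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in blocked and nxt not in parent):
                parent[nxt] = node
                queue.append(nxt)
    return None  # no obstacle-free route exists

# A: robot starting position, B: surgical operation position,
# with a wall of obstacle intersections between them.
path = plan_path(5, 5, start=(0, 0), goal=(4, 0),
                 obstacles={(2, 0), (2, 1), (2, 2), (2, 3)})
```

The returned path steps only between adjacent obstacle-free intersections, which matches the requirement that the robot move intersection to intersection while avoiding obstacles.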
In some examples, referring to Fig. 4, step S11 may specifically include steps S111 to S112.
Step S111: obtaining the environmental information from multiple pixel images taken at different positions in the operating room.
Specifically, referring to Fig. 5, the control processing device may include a camera 311 and a position adjustment unit 312. The camera 311 may be arranged on the position adjustment unit 312, which is used to adjust the shooting angle of the camera 311. Through the multi-degree-of-freedom movement of the position adjustment unit 312, the camera 311 can rotate through 360 degrees to scan the operating room environment and thus capture more complete pixel images of the operating room. The control processing device acquires, through the camera 311, multiple pixel images taken at different positions in the operating room, and the environmental information can be obtained from these pixel images. The control processing device can directly recognize the surgical robot 11, the patient's lesion, and the other obstacles in the pixel images, thereby obtaining the position coordinate information of the starting position, the coordinate information of the surgical operation position, and the position coordinate information of the obstacles, respectively. Alternatively, the operator may determine the coordinate information of the surgical operation position from these pixel images and input it to the control processing device via an input device.
In other examples, the control processing device may include a depth data measuring device, which can directly measure the position coordinates of each obstacle in the operating room, the current position coordinates of the surgical robot, and the position coordinates of the patient's lesion, so that the environmental information can be obtained from the measured data.
Step S112: constructing a virtual map with a grid mark according to the environmental information, where some intersections of the grid mark display the environmental information.
Optionally, the shape of the cells in the grid mark can be set according to actual requirements. For example, the cell shape in the virtual map with the grid mark includes a triangle.
In some examples, referring to Fig. 6, step S112 may specifically include steps S1121 to S1122.
Step S1121: based on the position coordinate information of the surgical operation position and the position coordinate information of the obstacles and the starting position, expanding the coordinate information of multiple virtual points that do not overlap with any of the position coordinate information.
Step S1122: using the coordinate information of the virtual points and the position coordinate information as the intersections of the grid mark, generating the virtual map with the grid mark.
Specifically, the control processing device may build an initial virtual map of the operating room according to the environmental information. The initial virtual map may include images of the surgical robot 11 and the obstacles scaled down proportionally, and the distances between the surgical robot 11 and the obstacles may also be scaled down proportionally. Of course, the surgical robot 11 and the obstacles in the initial virtual map may also be replaced by corresponding symbols. The initial virtual map does not contain a grid mark. The control processing device can then build a grid mark according to the environmental information and merge it into the initial virtual map, so that the mark of the starting position, the mark of the surgical operation position, and the obstacles on the initial virtual map each coincide with the corresponding intersections of the grid mark, thereby forming the virtual map with the grid mark.
The method of forming the grid mark is described here with the triangular cell shape as an example. Referring to Fig. 7a, the coordinate information of the expanded virtual points can first be computed, based on the position coordinate information of the surgical operation position and the position coordinate information of the obstacles and the starting position, using a Delaunay triangulation algorithm on the point set. A discrete point set is generated from the coordinate information of the virtual points and the coordinate information of the positions. Referring to Fig. 7b, the Delaunay algorithm turns the discrete point set into a triangular mesh image, i.e., the grid mark. Further, referring to Fig. 7c, the grid mark can also be simplified to reduce the computational load and improve the efficiency of generating the subsequent planned path. It should be noted that, in both the grid mark before simplification and the grid mark after simplification, the discrete points corresponding to the starting position and the surgical operation position lie at intersections of the grid.
The specific implementation by which the Delaunay algorithm generates the grid mark from the discrete point set is described in detail below:
A. Determining point p3: Suppose there are two points p1 and p2. We call p3 a visible point of the line segment p1p2, determined by the following three conditions: (1) p3 is on the right side of edge p1p2 (vertices ordered clockwise); (2) p3 is visible from p1, i.e., edge p1p3 does not intersect any constraint edge; (3) p3 is visible from p2.
B. Determining the DT point: In a constrained Delaunay triangle, the vertex opposite an edge is called the DT point of that edge. The DT point is determined as follows:
Step 1. Construct the circumcircle C(p1, p2, p3) of Δp1p2p3 and its grid bounding box B(C(p1, p2, p3));
Step 2. Visit each grid cell in the grid bounding box in turn: search the unvisited grid cells and mark each as the currently visited grid cell. If a visible point p exists in some grid cell and ∠p1pp2 > ∠p1p3p2, set p3 = p and go to Step 1; otherwise, go to Step 3.
Step 3. If all grid cells in the current grid bounding box have been marked as visited, i.e., there is no visible point inside C(p1, p2, p3), then p3 is the DT point of p1p2.
C. Algorithm design:
Step 1. Take any outer boundary edge p1p2.
Step 2. Compute the DT point p3 to form the constrained Delaunay triangle Δp1p2p3.
Step 3. If the newly generated edge p1p3 is not a constraint edge: if it is already in the stack, remove it from the stack; otherwise, push it onto the stack. Edge p3p2 is processed in the same way.
Step 4. If the stack is not empty, pop an edge from it and go to Step 2; otherwise, the algorithm stops.
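The comparison in Step 2 of part B can be sketched numerically. By the inscribed-angle theorem, a visible point p lies inside the circumcircle C(p1, p2, p3) exactly when the angle ∠p1pp2 it subtends exceeds ∠p1p3p2, so the angle test doubles as an in-circumcircle test. The triangle and the candidate points below are illustrative:

```python
import math

def angle_at(vertex, a, b):
    """Angle a-vertex-b in radians, via the dot product."""
    ax, ay = a[0] - vertex[0], a[1] - vertex[1]
    bx, by = b[0] - vertex[0], b[1] - vertex[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, cos_t)))

def subtends_larger_angle(p, p1, p2, p3):
    """True if candidate p sees edge p1p2 under a larger angle than p3 does,
    i.e. p lies inside the circumcircle of triangle p1p2p3 (the Step 2 test)."""
    return angle_at(p, p1, p2) > angle_at(p3, p1, p2)

# Triangle whose circumcircle has center (1, 0) and radius 1.
p1, p2, p3 = (0.0, 0.0), (2.0, 0.0), (1.0, 1.0)
inside = subtends_larger_angle((1.0, 0.5), p1, p2, p3)   # inside -> becomes new p3
outside = subtends_larger_angle((1.0, 2.0), p1, p2, p3)  # outside -> p3 stays the DT point
```

When no visible point passes this test, the circumcircle is empty and p3 is the DT point, which is exactly the Delaunay property the algorithm maintains.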
Then, the grid mark, either before or after simplification, is merged into the initial virtual map to obtain the virtual map with the grid mark.
In other examples, the grid mark may also be constructed directly from the position coordinates in the environmental information, with the grid spacing adjusted according to the intervals between the position coordinates. In this way, a virtual map with a grid mark can be obtained without expanding virtual points.
In some examples, referring to Fig. 8, step S12 specifically includes steps S121 to S124.
Step S121: acquiring a gesture image.
Step S122: obtaining the first interaction information based on the gesture image.
Step S123: based on the first interaction information, determining at least one piece of pose change information relative to the starting position, so as to obtain the planned path of the surgical robot.
Step S124: displaying the planned path in the virtual map.
Specifically, referring to Fig. 7d, after the grid mark is merged into the initial virtual map for display, the operator can use gestures on the virtual map to connect the desired points in sequence, starting from the starting position (the bold line in Fig. 7d is the planned route formed by the operator connecting the desired points through gestures; point A corresponds to the starting position and point B to the surgical operation position).
The first interaction information is information about the operator's hand gesture. Referring to Fig. 9a, an embedded binocular camera 21 can be connected to the upper end of the AR device 20 through a printed circuit board (PCB) to capture the operator's gesture images and transmit them to the control processing device. The control processing device obtains the first interaction information from the gesture images and determines at least one piece of pose change information relative to the starting position to obtain the planned path of the surgical robot. For example, when the planned path of the surgical robot 11 is a straight line, only one piece of pose change information relative to the starting position needs to be determined; when the planned path is not a straight line, multiple pieces of pose change information along the movement need to be determined. The control processing device can record all coordinates and routes on the planned path and transmit the recorded data to the human-computer interaction device, so that the human-computer interaction device overlays the planned path on the corresponding position of the virtual image for display.
The working principle of the binocular camera in Fig. 9a is as follows:
Referring to Figs. 9b and 9c, the relative coordinate relationship between the AR device 20 and the binocular camera 21 is fixed. The camera coordinate system (X5, Y5, Z5) can be mapped to the display coordinate system (X3, Y3, Z3) through the mechanical position. The camera coordinate system (X5, Y5, Z5) and the world coordinate system (X0, Y0, Z0) can be related through a rotation matrix R and a translation vector t, as shown in equation (1).
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + t \tag{1}$$
where $(x_c, y_c, z_c)$ are the coordinates of point P in the camera coordinate system, and $(x_w, y_w, z_w)$ are the coordinates of point P in the world coordinate system.
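Equation (1) can be exercised with a small numeric sketch; the rotation (90 degrees about the z-axis) and the translation used here are illustrative, not values from the application:

```python
def world_to_camera(R, t, p_w):
    """Apply equation (1): p_c = R * p_w + t, with R a 3x3 row-major matrix."""
    return tuple(sum(R[i][j] * p_w[j] for j in range(3)) + t[i] for i in range(3))

# Illustrative extrinsics: rotate 90 degrees about z, then translate by t.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = (1.0, 2.0, 3.0)

# World point (1, 0, 0): rotation maps it to (0, 1, 0), translation to (1, 3, 3).
p_c = world_to_camera(R, t, (1.0, 0.0, 0.0))
```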
From the geometric relationships in Fig. 9c, point P satisfies equations (2) to (5):
$$x_l = \frac{f\,x}{z} \tag{2}$$
$$y_l = \frac{f\,y}{z} \tag{3}$$
$$x_r = \frac{f\,(x-b)}{z} \tag{4}$$
$$z = \frac{f\,b}{x_l - x_r},\qquad x = \frac{b\,x_l}{x_l - x_r},\qquad y = \frac{b\,y_l}{x_l - x_r} \tag{5}$$
Here, the binocular camera 21 includes a left camera and a right camera; the distance between them is b, and the distance from each camera to the x-axis is f. The distance from the intersection of the line connecting point P(x, y, z) and the left camera with the x-axis to the z-axis is $x_l$; the distance from that intersection to the y-axis is $y_l$; the distance from the intersection of the line connecting the right camera and point P with the x-axis to the line through the right camera parallel to the z-axis is $x_r$; and the distance from point P to the line through the right camera parallel to the z-axis is $(x - b)$.
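Solving equations (2) to (4) for (x, y, z) gives the disparity-based triangulation of equation (5), where depth follows from the disparity $x_l - x_r$. A hedged numeric sketch, with an illustrative focal length and baseline:

```python
def triangulate(x_l, y_l, x_r, f, b):
    """Recover P = (x, y, z) from left/right image coordinates via eq. (5):

    z = f*b / (x_l - x_r), x = b*x_l / (x_l - x_r), y = b*y_l / (x_l - x_r)
    """
    disparity = x_l - x_r
    z = f * b / disparity
    x = b * x_l / disparity
    y = b * y_l / disparity
    return x, y, z

# Illustrative values: f = 500 (pixel units), baseline b = 0.1 m.
x, y, z = triangulate(x_l=50.0, y_l=30.0, x_r=25.0, f=500.0, b=0.1)
# The result stays consistent with eqs. (2) and (4):
# x_l = f*x/z and x_r = f*(x - b)/z.
```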
In some examples, referring to Fig. 10, step S122 specifically includes steps S1221 to S1224.
Step S1221: preprocessing the gesture image to obtain a gesture contour image.
Step S1222: extracting geometric moment features of the gesture contour image.
Step S1223: based on the geometric moment features of the gesture contour images, calculating the distance between the gesture images captured at different angles at the same moment.
Step S1224: recognizing the gesture at a given moment based on the distances between the gesture images captured at different angles at that moment, so as to obtain the first interaction information.
Specifically, a recognition algorithm based on geometric moments and edge detection can be used for preprocessing. The gesture image is first binarized to obtain the gesture contour image, and the geometric moment features of the gesture contour image are then extracted. Specifically, four of the seven moment invariants can be taken as components; the edges of the image are detected directly on the grayscale image, and a histogram is used to represent the boundary-direction features of the image. Finally, the distances between images are calculated by setting weights for the geometric moment features, and the gesture is then recognized. This approach uses two or more cameras (in this embodiment, the left and right cameras of the binocular camera 21) to acquire images simultaneously, just as humans observe the world with two eyes and insects with compound eyes; by comparing the differences between the images obtained by the different cameras at the same moment, an algorithm computes depth information, achieving multi-view three-dimensional imaging.
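The "four of the seven moment invariants" can be sketched as the first four of Hu's invariant moments computed from a binarized shape; they are translation-invariant by construction, and a weighted distance then compares two feature vectors. The shape below and the unit weights are illustrative, not from the application:

```python
def hu_features(points):
    """First four Hu moment invariants of a binary shape given as pixel coords."""
    n = float(len(points))
    xbar = sum(x for x, _ in points) / n
    ybar = sum(y for _, y in points) / n

    def mu(p, q):  # central moment (translation-invariant)
        return sum((x - xbar) ** p * (y - ybar) ** q for x, y in points)

    def eta(p, q):  # normalized central moment; mu(0,0) = n for a binary shape
        return mu(p, q) / n ** (1 + (p + q) / 2.0)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = e20 + e02
    h2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    h3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    h4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
    return h1, h2, h3, h4

def moment_distance(a, b, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted distance between two feature vectors, as used for matching."""
    return sum(w * (fa - fb) ** 2 for w, fa, fb in zip(weights, a, b)) ** 0.5

# An asymmetric blob and the same blob translated: the features should match.
shape = [(x, y) for x in range(4) for y in range(6)] + [(4, 0), (4, 1)]
shifted = [(x + 7, y + 3) for x, y in shape]
d = moment_distance(hu_features(shape), hu_features(shifted))  # ~0
```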
In some examples, the preprocessing may specifically include any one of histogram-based segmentation, segmentation based on local region information, and segmentation based on physical features of the gesture image. These three preprocessing methods are illustrated below.
Referring to FIG. 11a, in histogram-based segmentation, the peak-valley structure of the histogram can be reliably determined through histogram preprocessing and contour tracking, so that a reasonable segmentation threshold can be found. As long as the image histogram has a multi-peak structure with a well-defined valley between peaks, this method yields good segmentation results.
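A minimal sketch of the peak-valley threshold search described above, assuming a simple moving-average smoothing step (the smoothing width is an illustrative choice, not taken from the text):

```python
import numpy as np

def valley_threshold(hist, smooth=5):
    """Pick a segmentation threshold at the valley between the two
    tallest peaks of a gray-level histogram.

    NOTE: a simplified sketch of the peak-valley search; the smoothing
    width `smooth` is an illustrative assumption."""
    # Moving-average smoothing suppresses spurious local extrema.
    kernel = np.ones(smooth) / smooth
    h = np.convolve(np.asarray(hist, dtype=float), kernel, mode="same")
    # Indices of local maxima of the smoothed histogram.
    peaks = [i for i in range(1, len(h) - 1)
             if h[i] >= h[i - 1] and h[i] > h[i + 1]]
    if len(peaks) < 2:
        return None  # unimodal histogram: no reliable valley
    # Keep the two tallest peaks, then take the minimum between them.
    p1, p2 = sorted(sorted(peaks, key=lambda i: h[i])[-2:])
    return p1 + int(np.argmin(h[p1:p2 + 1]))
```

On a bimodal histogram (e.g. dark background plus bright hand region) the returned bin lies in the valley separating the two modes.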
Referring to FIG. 11b, in segmentation based on local region information, contour extraction generally obtains the coordinates of boundary points through edge detection. A typical example is the eight-neighborhood search algorithm for extracting the coordinates of gesture boundary points. Each pixel has eight adjacent pixels; taking one of them as the starting boundary point, the next boundary point must lie within its eight-neighborhood, so a closed contour can be extracted by algorithmic tracking.
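The eight-neighborhood boundary following described above can be sketched as a Moore-style contour trace on a binary grid. This is a generic implementation of the idea, not the patent's exact algorithm:

```python
def trace_contour(grid):
    """Trace the closed boundary of the first foreground blob in a
    binary grid (list of lists of 0/1) by eight-neighborhood search.

    NOTE: a generic Moore-neighbor tracing sketch; the patent only
    names the eight-neighborhood idea, not this exact routine."""
    H, W = len(grid), len(grid[0])
    # Eight neighbors in clockwise order: W, NW, N, NE, E, SE, S, SW.
    off = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
           (0, 1), (1, 1), (1, 0), (1, -1)]
    start = next(((r, c) for r in range(H) for c in range(W)
                  if grid[r][c]), None)
    if start is None:
        return []
    contour, cur = [start], start
    prev = (start[0], start[1] - 1)  # backtrack: cell west of start
    while True:
        # Resume the clockwise scan just after the backtrack direction.
        k = off.index((prev[0] - cur[0], prev[1] - cur[1]))
        for i in range(1, 9):
            d = off[(k + i) % 8]
            r, c = cur[0] + d[0], cur[1] + d[1]
            if 0 <= r < H and 0 <= c < W and grid[r][c]:
                prev = (cur[0] + off[(k + i - 1) % 8][0],
                        cur[1] + off[(k + i - 1) % 8][1])
                cur = (r, c)
                break
        else:
            return contour  # isolated single pixel
        if cur == start:
            return contour  # boundary closed
        contour.append(cur)
```

For a filled 3x3 square the trace visits exactly the eight boundary pixels and closes the contour.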
In segmentation based on physical features such as color, skin color is extracted in the YCbCr color space using a Gaussian skin-color model, and motion information obtained by image-difference operations is analyzed to remove skin-like backgrounds from the image. This method ensures accurate gesture segmentation against complex backgrounds.
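A minimal sketch of per-pixel YCbCr skin classification. The text uses a trained Gaussian skin-color model; the fixed Cb/Cr box below is a commonly used simplification and an illustrative assumption, not the patent's model:

```python
def is_skin(r, g, b):
    """Classify an RGB pixel as skin via its YCbCr chrominance.

    NOTE: the Cb/Cr ranges below are a widely used fixed box, an
    illustrative stand-in for the Gaussian model mentioned in the text.
    """
    # ITU-R BT.601 RGB -> YCbCr chrominance (offset-128 form).
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

Applying this test pixel-wise produces a binary skin mask; an image-difference step can then discard static skin-like background regions, as described above.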
In some examples, referring to FIG. 12, the robot preoperative navigation method further includes steps S13 to S16.
Step S13: control the surgical robot to move from the starting position to the surgical operation position in the operating room based on the planned path.
Specifically, the driver of the surgical robot 11 may, based on the planned path, control the movement of the surgical robot 11 and/or the positioning of its robotic arm, so that the control center of the robotic arm moves in the real scene from the current position in the operating room to the surgical operation position.
Step S14: determine whether the motion state of the surgical robot needs to be adjusted during its movement.
Specifically, the robot preoperative navigation system may automatically determine whether the motion state of the surgical robot 11 needs to be adjusted during movement and, when it does, prompt the operator so that the operator can perform step S15. Alternatively, the operator may judge whether an adjustment is needed during the movement of the surgical robot 11 and perform step S15 upon deciding that it is. The situations in which the motion state needs adjustment may include the distance between the surgical robot 11 and an obstacle falling below a preset safety distance and/or the trajectory of the surgical robot 11 deviating from the planned path. If no adjustment is needed, step S16 is performed: the surgical robot 11 continues along the planned path until it reaches the surgical operation position.
Step S15: acquire second interaction information, and adjust the motion state of the surgical robot and/or give prompt information based on the second interaction information. The second interaction information is information detected by the human-computer interaction device and used to adjust part of the path of the surgical robot 11 during its movement. It is acquired in the same or a similar manner as the first interaction information. To distinguish the two, in some examples the first and second interaction information correspond to different working modes of the surgical robot 11: for example, the first interaction information is acquired while the surgical robot 11 is in stop mode, and the second while it is in moving mode. In other examples, the two are represented by different detection signals or different image features: for example, the second interaction information may be at least one image containing a left-turn (or right-turn) gesture, or the information generated by clicking a turn-left (or turn-right) button.
Specifically, referring to FIGS. 13 and 9a, when it is determined during movement that the motion state needs to be adjusted, the operator issues an adjustment gesture. A binocular camera 21 (i.e., a binocular vision module) may be provided on the AR device 20 worn by the operator to capture images of the adjustment gesture and transmit them to the control processing device, which derives the second interaction information from the images and adjusts the motion state of the surgical robot 11 accordingly. In other examples, the control processing device may give prompt information based on the second interaction information; for instance, when the distance between the surgical robot 11 and an obstacle is less than the preset safety distance, it may issue a voice prompt. In still other examples, steps S13 and S14 need not be executed by the control processing device, which executes step S15 while the surgical robot 11 moves from the starting position to the surgical operation position.
In some examples, when the robot preoperative navigation system determines in step S14 whether the surgical robot 11 needs to adjust its motion state during movement, the method may further include: acquiring the position information of the surgical robot 11 and determining, based on this position information, whether its motion state needs to be adjusted; and outputting prompt information if the motion state needs to be adjusted during movement.
Specifically, the position information of the surgical robot 11 may include the real-time distance between the surgical robot 11 and its nearest obstacle. A ranging device, such as an ultrasonic ranging device, may be mounted on the surgical robot 11 to acquire this position information and transmit it to the control processing device, which then determines whether the motion state of the surgical robot 11 needs to be adjusted. For example, the system may be configured so that an adjustment is deemed necessary when the distance between the surgical robot 11 and its nearest obstacle is less than a preset safety distance. The surgical robot 11 or the control processing device may further be provided with a prompting device such as a warning lamp or a buzzer; when the control processing device determines that the motion state needs to be adjusted during movement, it outputs prompt information through the prompting device, so the operator knows to issue an adjustment gesture that controls the surgical robot 11 to change its motion state and avoid the obstacle.
When using adjustment gestures to control the surgical robot 11, the operator can specifically control its direction of motion, for example commanding the robot to turn left, turn right, reverse, or advance. If the operator receives no prompt, the control processing device controls the surgical robot 11 to continue along the planned path until it reaches the surgical operation position. In other examples, the ranging device may additionally measure the position coordinates of the surgical robot 11 in real time and transmit them to the control processing device, which determines whether the coordinates lie on the planned path; if they deviate from it, the control processing device can likewise trigger a prompt.
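The two trigger conditions above (obstacle closer than the safety distance, or position off the planned path) can be sketched as a simple check. The numeric thresholds and the polyline distance test are illustrative assumptions:

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from 2-D point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def needs_adjustment(robot_pos, obstacle_dist, path, safe_dist=0.05, path_tol=0.1):
    """True when either trigger condition from the text holds.

    NOTE: safe_dist and path_tol (metres) are illustrative values; the
    patent only gives a 5 cm example for the safety distance."""
    off_path = min(point_segment_dist(robot_pos, path[i], path[i + 1])
                   for i in range(len(path) - 1)) > path_tol
    return obstacle_dist < safe_dist or off_path
```

The control processing device would evaluate such a check each cycle and drive the prompting device when it returns true.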
In some examples, the ranging device uses ultrasonic ranging to acquire the position information of the robot. The principle is as follows.
Ultrasonic ranging is based on the pulse-echo transit-time method. Let t be the time between the emission of an ultrasonic pulse by the sensor and the reception of its echo, and let c be the propagation speed of ultrasound in air; then the distance D from the sensor to the obstacle is given by equation (6):
D = ct/2     (6)
In a typical module, a high-level signal of at least 10 µs applied to the trigger (Trig) I/O pin starts a measurement: the module automatically emits eight 40 kHz square-wave pulses and detects whether an echo returns. If it does, the echo output goes high for a duration equal to the ultrasonic round-trip time (which can be measured with a timer), so that the measured distance = (high-level duration × speed of sound (340 m/s)) / 2.
During the movement of the surgical robot 11, the system may be configured so that the prompting device issues an alarm when the measured distance between the surgical robot 11 and its nearest obstacle is less than the set safety distance, after which the operator can adjust the direction of motion of the patient-side cart by gesture. For example, 5 cm may be set as the prompt distance: when the patient-side cart is less than 5 cm from an obstacle, the buzzer in the prompting device sounds automatically and the red light flashes, and the direction of motion must then be adjusted by gesture.
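The transit-time computation of equation (6) and the 5 cm prompt threshold above can be sketched directly:

```python
SPEED_OF_SOUND = 340.0  # m/s, the value used in the text

def echo_distance(high_level_time_s):
    """Equation (6): D = c*t/2, where t is the echo round-trip time
    measured as the duration of the module's high-level output."""
    return SPEED_OF_SOUND * high_level_time_s / 2.0

def proximity_alarm(high_level_time_s, threshold_m=0.05):
    """True when the measured obstacle distance is below the 5 cm
    prompt distance given in the example above."""
    return echo_distance(high_level_time_s) < threshold_m
```

A 1 ms round trip corresponds to 0.17 m; a 0.2 ms round trip (0.034 m) falls below the 5 cm threshold and would trigger the buzzer and warning lamp.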
In some examples, when the operator determines in step S14 whether the surgical robot 11 needs to adjust its motion state during movement, the method may further include: projecting the planned path, at the corresponding scale, into the operating room between the starting position and the surgical operation position.
Specifically, since the control processing device forms the virtual map by reducing the real image of the operating room at a certain ratio, the planned path displayed on the virtual map is likewise reduced relative to the real path the surgical robot 11 actually travels between the starting position and the surgical operation position. When the planned path is projected into the real scene, it therefore needs to be enlarged by the corresponding ratio so that the projected path connects the starting position and the surgical operation position in the operating room. In this way, the operator can observe in real time whether the surgical robot deviates from the planned path during movement and, if it does, issue an adjustment gesture to control the surgical robot 11 to adjust its motion state.
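The scale-up from virtual-map coordinates to operating-room coordinates can be sketched as a uniform scaling of the planned waypoints. The scale factor and origin are illustrative parameters (the scale would be the inverse of the reduction ratio used when the map was built):

```python
def project_to_room(path_map, scale, origin=(0.0, 0.0)):
    """Map planned-path waypoints from virtual-map coordinates back to
    operating-room coordinates by uniform scaling about an origin.

    NOTE: `scale` and `origin` are illustrative parameters; the patent
    only states that a corresponding enlargement ratio is applied."""
    ox, oy = origin
    return [(ox + x * scale, oy + y * scale) for x, y in path_map]
```

For example, with a 1:50 map, a map-space segment from (0, 0) to (1, 2) projects to a room-space segment from (0, 0) to (50, 100).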
In some examples, referring to FIG. 14, a mobile platform 111 and driving wheels 112 may be provided at the bottom of the surgical robot 11. As an example of how the driver moves the robot: given a point P between the two wheels (each wheel is at distance l from P), the wheel radius r, the angle θ between the robot heading and the X-axis direction, and the rotational speeds φ̇₁ and φ̇₂ of the two wheels, the total velocity of the surgical robot 11 in the global reference frame is predicted by the forward kinematics model:

ξ̇_I = (ẋ, ẏ, θ̇)ᵀ = R(θ)⁻¹ · ( (rφ̇₁ + rφ̇₂)/2, 0, (rφ̇₁ − rφ̇₂)/(2l) )ᵀ

where R(θ) is the rotation matrix between the global and robot reference frames.
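The forward-kinematics prediction for a differential-drive base of this kind can be sketched as follows; the default wheel radius r and half-axle length l are illustrative values, not taken from the text:

```python
import math

def forward_kinematics(theta, phi1_dot, phi2_dot, r=0.1, l=0.5):
    """Global-frame velocity (x_dot, y_dot, theta_dot) of a two-wheel
    differential-drive robot from its wheel speeds.

    NOTE: standard differential-drive forward kinematics; the defaults
    r (wheel radius, m) and l (wheel-to-P distance, m) are illustrative.
    """
    v = r * (phi1_dot + phi2_dot) / 2.0        # linear speed along heading
    w = r * (phi1_dot - phi2_dot) / (2.0 * l)  # angular speed about P
    # Rotate the robot-frame velocity (v, 0) into the global frame.
    return (v * math.cos(theta), v * math.sin(theta), w)
```

With equal wheel speeds the robot translates along its heading with no rotation; with opposite wheel speeds it spins in place about P.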
It should be understood that although the steps in the flowcharts of FIGS. 1, 4, 6, 8, 10 and 12 are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 1, 4, 6, 8, 10 and 12 may comprise multiple sub-steps or stages, which need not be completed at the same time but may be executed at different times, and whose execution order need not be sequential; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
The present application further provides a robot preoperative navigation system. Referring to FIGS. 2 and 15 together, the robot preoperative navigation system includes a control processing device 31 and a human-computer interaction device 32, which are communicatively connected. The control processing device 31 is configured to acquire environment information of the operating room and to generate, based on it, a virtual map in which a mark of the starting position of the surgical robot 11 and a mark of the surgical operation position are formed. The human-computer interaction device 32 is configured to display the virtual map and to acquire the user's first interaction information. The control processing device 31 is further configured to generate, based on the first interaction information, a planned path of the surgical robot 11 in the virtual map, along which the surgical robot 11 moves from the corresponding starting position in the operating room to the surgical operation position.
In some examples, referring also to FIG. 9a, the human-computer interaction device 32 includes an AR device 20 configured to superimpose and display the virtual map in the real scene.
In some examples, the human-computer interaction device 32 is configured to capture gesture images to obtain the first interaction information. Specifically, the AR device 20 may be provided with a binocular camera 21, and the binocular camera 21 captures the gesture images to obtain the first interaction information.
In some examples, referring to FIG. 5, the control processing device 31 includes a position adjustment unit 312 and a camera 311 arranged on the position adjustment unit 312; the position adjustment unit 312 adjusts the shooting angle of the camera 311, and the camera 311 captures a plurality of pixel images at different positions in the operating room to obtain the environment information.
In some examples, while the surgical robot 11 moves from the starting position to the surgical operation position in the operating room, the control processing device obtains the second interaction information based on gesture information, and adjusts the motion state of the surgical robot 11 and/or gives prompt information based on the second interaction information.
Optionally, the control processing device 31 is also communicatively connected to the surgical robot 11 and is further configured to control the surgical robot 11 to move from the starting position to the surgical operation position in the operating room based on the planned path. If the motion state needs to be adjusted during movement, the binocular camera 21 captures gesture images, and the control processing device 31 obtains the second interaction information from the gesture information and adjusts the motion state of the surgical robot 11 and/or gives prompt information accordingly.
In some examples, the robot preoperative navigation system further includes a ranging device (not shown) mounted on the surgical robot 11 and configured to measure the distance between the surgical robot 11 and obstacles in real time. The control processing device 31 is also communicatively connected to the ranging device and is further configured to determine whether the distance between the surgical robot 11 and an obstacle is less than a preset safety distance, and to output prompt information when it is.
In some examples, the human-computer interaction device 32 is configured to project the planned path, at the corresponding scale, into the operating room between the starting position and the surgical operation position.
Further, the robot preoperative navigation system may execute any step of the robot preoperative navigation method described above. For specific limitations of the robot preoperative navigation system, reference may be made to the limitations of the robot preoperative navigation method above, which are not repeated here. The modules of the robot preoperative navigation system may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module. The computer device may include the AR device 20, the control processing device 31, the driver of the surgical robot 11, and the like.
The present application further provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method of any of the embodiments above are implemented.
The present application further provides a computer device including a memory and a processor; the memory stores a computer program that can run on the processor, and when the processor executes the computer program, the steps of the method of any of the embodiments above are implemented.
A person of ordinary skill in the art will understand that all or part of the processes of the above method embodiments can be accomplished by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the embodiments of the methods above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments have been described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The embodiments above express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be determined by the appended claims.

Claims (18)

  1. A robot preoperative navigation method, comprising:
    acquiring environment information of an operating room, and displaying a virtual map based on the environment information, wherein a mark of a starting position of a surgical robot and a mark of a surgical operation position are formed in the virtual map; and
    acquiring first interaction information, and generating a planned path of the surgical robot in the virtual map based on the first interaction information, the planned path being used for the surgical robot to move correspondingly in the operating room from the starting position to the surgical operation position.
  2. The robot preoperative navigation method according to claim 1, wherein the virtual map is superimposed and displayed in a real scene.
  3. The robot preoperative navigation method according to claim 1, wherein the virtual map contains a grid mark, and the mark of the starting position and the mark of the surgical operation position are both located at intersections of the grid mark; and the planned path connects the mark of the starting position and the mark of the surgical operation position through intersections of the grid mark that are marked as obstacle-free.
  4. The robot preoperative navigation method according to claim 1, wherein acquiring the environment information of the operating room and displaying the virtual map based on the environment information comprises:
    obtaining the environment information from a plurality of pixel images captured at different positions in the operating room; and
    constructing and displaying a virtual map with a grid mark according to the environment information, wherein some intersections of the grid mark display the environment information.
  5. The robot preoperative navigation method according to claim 4, wherein the environment information comprises position coordinate information of obstacles in the operating room and of the starting position; and constructing and displaying the virtual map with the grid mark according to the environment information comprises:
    based on the position coordinate information of the surgical operation position and the position coordinate information of the obstacles and of the starting position, expanding coordinate information of a plurality of virtual points that do not overlap any of the position coordinate information; and
    generating and displaying the virtual map with the grid mark, with the coordinate information of the virtual points and the position coordinate information serving as the intersections of the grid mark;
    or wherein the grid shapes in the virtual map with the grid mark comprise triangles.
  6. The robot preoperative navigation method according to claim 1, wherein generating the planned path of the surgical robot in the virtual map based on the first interaction information comprises:
    determining, based on the first interaction information, at least one piece of pose-change information relative to the starting position to obtain the planned path of the surgical robot; and
    displaying the planned path in the virtual map.
  7. The robot preoperative navigation method according to any one of claims 1 to 6, further comprising: during the movement of the surgical robot from the starting position to the surgical operation position,
    acquiring second interaction information, and adjusting a motion state of the surgical robot and/or giving prompt information based on the second interaction information.
  8. The robot preoperative navigation method according to claim 2, further comprising:
    projecting the planned path, at a corresponding scale, into the operating room between the starting position and the surgical operation position.
  9. The robot preoperative navigation method according to claim 7, wherein the motion state needing adjustment during the movement of the surgical robot comprises: a distance between the surgical robot and an obstacle being less than a preset safety distance and/or a motion trajectory of the surgical robot deviating from the planned path.
  10. The robot preoperative navigation method according to claim 1, further comprising: controlling movement of the surgical robot and/or controlling positioning of a robotic arm of the surgical robot based on the planned path.
11. A robot preoperative navigation system, comprising a control processing device and a human-computer interaction device that are communicatively connected, wherein:
    the human-computer interaction device is configured to display a virtual map and to acquire first interaction information from a user; and
    the control processing device is configured to execute the robot preoperative navigation method according to any one of claims 1 to 10.
12. The robot preoperative navigation system according to claim 11, wherein the human-computer interaction device comprises an AR device configured to overlay the virtual map onto the real scene.
13. The robot preoperative navigation system according to claim 11, wherein the control processing device comprises a position adjustment unit and a camera, the camera being mounted on the position adjustment unit; the position adjustment unit is configured to adjust the shooting angle of the camera, and the camera is configured to capture multiple pixel images at different positions in the operating room to obtain the environment information.
14. The robot preoperative navigation system according to claim 11, wherein the human-computer interaction device projects the planned path, at a corresponding scale, into the operating room between the starting position and the surgical operation position.
15. The robot preoperative navigation system according to any one of claims 11 to 14, wherein, while the surgical robot moves from the starting position to the surgical operation position in the operating room,
    the control processing device obtains second interaction information based on gesture information, and, based on the second interaction information, adjusts the motion state of the surgical robot and/or gives prompt information.
16. The robot preoperative navigation system according to claim 11, wherein the human-computer interaction device obtains the first interaction information by capturing gesture images.
17. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
18. A computer device, comprising a memory and a processor, wherein the memory stores a computer program runnable on the processor, and the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 10.
PCT/CN2022/102141 2021-06-30 2022-06-29 Robot preoperative navigation method and system, storage medium, and computer device WO2023274270A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110745021.6 2021-06-30
CN202110745021.6A CN115542889A (en) 2021-06-30 2021-06-30 Preoperative navigation method and system for robot, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
WO2023274270A1 true WO2023274270A1 (en) 2023-01-05

Family

ID=84691448

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/102141 WO2023274270A1 (en) 2021-06-30 2022-06-29 Robot preoperative navigation method and system, storage medium, and computer device

Country Status (2)

Country Link
CN (1) CN115542889A (en)
WO (1) WO2023274270A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116687564B (en) * 2023-05-22 2024-06-25 北京长木谷医疗科技股份有限公司 Surgical robot self-sensing navigation method system and device based on virtual reality

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105953785A (en) * 2016-04-15 2016-09-21 青岛克路德机器人有限公司 Map representation method for robot indoor autonomous navigation
CN107544482A (en) * 2017-08-08 2018-01-05 浙江工业大学 Automatic distribution robot system facing medical environment
JP2018028867A (en) * 2016-08-19 2018-02-22 日本電信電話株式会社 Route information generator, route coupling device, method, and program
CN109668561A (en) * 2017-10-13 2019-04-23 中兴通讯股份有限公司 A kind of interior paths planning method, terminal and readable storage medium storing program for executing
US20190254754A1 (en) * 2018-02-19 2019-08-22 Globus Medical, Inc. Augmented reality navigation systems for use with robotic surgical systems and methods of their use
US20200100846A1 (en) * 2018-09-27 2020-04-02 Eped, Inc. Active-detection self-propelled artificial intelligence surgical navigation cart
CN112914731A (en) * 2021-03-08 2021-06-08 上海交通大学 Interventional robot contactless teleoperation system based on augmented reality and calibration method

Also Published As

Publication number Publication date
CN115542889A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN109682381B (en) Omnidirectional vision based large-view-field scene perception method, system, medium and equipment
JP6896077B2 (en) Vehicle automatic parking system and method
CN110377015B (en) Robot positioning method and robot positioning device
US10481265B2 (en) Apparatus, systems and methods for point cloud generation and constantly tracking position
JP4278979B2 (en) Single camera system for gesture-based input and target indication
EP3336489A1 (en) Method and system for automatically establishing map indoors by mobile robot
Zhang et al. An indoor navigation aid for the visually impaired
CN103472434B (en) Robot sound positioning method
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
Pradeep et al. A wearable system for the visually impaired
CN113566808A (en) Navigation path planning method, device, equipment and readable storage medium
WO2023274270A1 (en) Robot preoperative navigation method and system, storage medium, and computer device
JP2021177144A (en) Information processing apparatus, information processing method, and program
Marie et al. Visual servoing on the generalized voronoi diagram using an omnidirectional camera
WO2022188333A1 (en) Walking method and apparatus, and computer storage medium
EP3088983B1 (en) Moving object controller and program
CN113885506A (en) Robot obstacle avoidance method and device, electronic equipment and storage medium
CN112182122A (en) Method and device for acquiring navigation map of working environment of mobile robot
KR101475207B1 (en) Simulation device used for trainning of robot control
JP7179687B2 (en) Obstacle detector
Nowak et al. Vision-based positioning of electric buses for assisted docking to charging stations
JPS5890268A (en) Detector of 3-dimensional object
CN109901589B (en) Mobile robot control method and device
Canh et al. Multisensor data fusion for reliable obstacle avoidance
KR20200145410A (en) Apparatus and method for obtaining location information for camera of vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22832083

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22832083

Country of ref document: EP

Kind code of ref document: A1