CN110051436B - Automated cooperative work assembly and application thereof in surgical instrument


Info

Publication number
CN110051436B
Authority
CN
China
Prior art keywords
target
moving arm
arm
acquisition device
image acquisition
Prior art date
Legal status
Active
Application number
CN201810051219.2A
Other languages
Chinese (zh)
Other versions
CN110051436A (en)
Inventor
姚君 (Yao Jun)
Current Assignee
Shanghai Shecheng Medical Equipment Co., Ltd.
Original Assignee
Shanghai Shecheng Medical Equipment Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Shecheng Medical Equipment Co., Ltd.
Priority to CN201810051219.2A
Publication of CN110051436A
Application granted
Publication of CN110051436B

Classifications

    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
        • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
            • A61B 2034/2046 Tracking techniques
            • A61B 2034/2065 Tracking using image or pattern recognition
        • A61B 34/30 Surgical robots
        • A61B 34/70 Manipulators specially adapted for use in surgery
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
        • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
            • A61B 90/37 Surgical systems with images on a monitor during operation
                • A61B 2090/371 Surgical systems with images on a monitor during operation with simultaneous use of two cameras

Abstract

An automated cooperative work assembly includes a first moving arm, a second moving arm, a working component, and an image acquisition device. The image acquisition device is mounted at one end of the second moving arm and moves under its control; it optically positions and tracks the target, and identifies and avoids occlusion of the target. The positions, postures, and movement speeds of the first and second moving arms are digitized, an association between the two arms is established, and cooperative motion is carried out under computer control. Applied to image-guided surgical instruments, the assembly can cooperatively complete the surgical procedure while avoiding collisions, and at the same time solves the problem of surgical interruption caused by the tracked target being occluded by the human body, medical instruments, or tissues and organs, so that the operation can continue without interruption and its continuity is ensured.

Description

Automated cooperative work assembly and application thereof in surgical instrument
Technical Field
The invention relates to mechanical devices capable of automatic operation, and in particular to an assembly whose components, under computer control, cooperate to complete various work tasks automatically, and to the application of the assembly in surgical instruments.
Background
Surgical robotic systems based on optical positioning, such as the ROSA Spine system from Medtech (France), the MAKO system from Stryker (USA), and the "Tianji" (TiRobot) system from TINAVI (Tianzhihang), all use optical positioning to track the surgical site and the surgical tools (or robotic arms). These systems provide only one robotic arm, which grips the surgical tool, while the optical positioning system is separated from the main body carrying the arm and mounted on its own movable support. One disadvantage of such systems is that large-range automated movements of the robotic arm are risky: the arm may collide with surrounding objects and cause damage. In addition, the surgical site and the surgical tool must remain identified and tracked by the optical positioning and tracking system at all times for the operation to proceed smoothly. Tracking two moving objects simultaneously also requires a larger identification and tracking range, and the larger the range, the lower the positioning accuracy of an optical tracking system. Furthermore, optical positioning is based on the camera imaging principle; if the imaging light path is blocked so that either the surgical site or the surgical tool is occluded, the operation is interrupted. Because both tracked targets (the surgical site and the surgical tool) can move, the operator must deliberately keep the imaging light path clear for the tracking camera and constantly keep the surgical tool oriented toward the positioning system. Since a surgeon manipulates a surgical instrument with many degrees of freedom (translation, rotation, flipping, and so on), the instrument is easily occluded and loses tracking, which brings considerable inconvenience to the surgeon and degrades the operating experience of the equipment.
Disclosure of Invention
An object of the invention is to provide an assembly whose components automatically cooperate with each other under computer control to complete various work tasks.
It is another object of the present invention to provide an assembly whose components, under computer control, automatically coordinate to perform the surgical procedure.
Still another object of the present invention is to provide an assembly for image-guided surgical instruments that solves the problem of surgical interruption caused by the tracked target being occluded by the human body, medical instruments, or tissues and organs during the procedure, so that the operation can continue and its continuity is ensured.
An assembly, comprising:
the first moving arm comprises a plurality of first support arms and at least 1 first moving joint;
the second moving arm comprises a plurality of second support arms and at least 1 second moving joint;
the working component is arranged at one end of the first moving arm and is controlled by the first moving arm to move;
the image acquisition device is arranged at one end of the second moving arm, moves under the control of the second moving arm, performs optical positioning and tracking of the target, and identifies and avoids occlusion of the target;
the position, posture, and movement speed of the first moving arm and of the second moving arm are each digitized; an association is established between the position and posture information of the first moving arm and that of the second moving arm, realizing conversion between the first-moving-arm coordinate system and the second-moving-arm coordinate system, and cooperative motion is carried out under the control of a computer.
The image acquisition device provided by the invention comprises a plurality of cameras arranged in a regular pattern.
In the assembly of the invention, cooperative motion is implemented as follows:
the image acquisition device acquires the position and posture of the tracked target in real time; the computer acquires the position and posture of the working component (such as an instrument used in the operation) in real time from the rotation angle of each first joint on the first moving arm; using the association between the first moving arm and the second moving arm, the computer converts the position and posture of the tracked target from the image-acquisition-device coordinate system into the first-moving-arm coordinate system, so that the first moving arm obtains the position and posture of the tracked target in its own coordinate system and hence the specific position and direction of the preset working path, and is driven to complete the work task near the tracked target along that path.
The computer acquires, in real time, the position and posture information of the working component (such as an instrument used in the operation) and of the first moving arm;
meanwhile, the image acquisition device mounted on the second moving arm adjusts its imaging direction in real time so that it faces, and keeps tracking, the first moving arm and the working component;
a three-dimensional computer scene of the environment around the working component and the first moving arm is then obtained, that is, three-dimensional information of the surface points of surrounding objects is acquired as a point cloud and meshed;
next, from the positions of all first moving joints on the first moving arm and the three-dimensional surface information of surrounding objects, the minimum distance between the first moving arm and the surrounding objects and the minimum distance between the working component and the surrounding objects are calculated, to predict whether the first moving arm or the working component will collide with a surrounding object;
when a collision is possible, the motion of the first moving arm is stopped.
While the work task is executed, the second moving arm and the image acquisition device also solve the problem of surgical interruption caused by the tracked target being occluded by the human body, medical equipment, or tissues and organs during the operation, so that the operation can continue and its continuity is ensured.
During the operation, the image acquisition device tracks the surgical site by tracking a marker rigidly connected to it. The marker, also called a target, comprises a plurality of marker points with specific shapes and colors; these points strongly reflect light of a specific wavelength, can be photographed with high contrast, and can be identified efficiently by an algorithm. By identifying the marker points and applying binocular positioning, the position and posture of the target can be determined. While the image acquisition device keeps tracking the target, it also acquires, in real time, three-dimensional position information of the surfaces of objects in the field of view. If the device loses track of the target because it is occluded or because the surgical site moves, a new tracking position and direction are computed automatically from the three-dimensional surface information of objects in the scene; the second moving arm then drives the image acquisition device to the new position and direction, and real-time tracking of the surgical site resumes.
The invention provides a method for avoiding occlusion and keeping the target tracked, comprising the following steps:
1. The image acquisition device is calibrated to obtain the internal parameters (such as focal length) and external parameters (such as position and direction) of each camera.
2. The cameras in the image acquisition device are set to expose and shoot synchronously at a set frequency, and the following steps are repeated for each exposure:
First, the marker points of the target are identified, a target being preset with more than one marker point; the center position of each marker point of the target is identified and determined in the image acquired by each camera. At the same time, the three-dimensional spatial position of each marker-point center is determined, and the center position of the target is calculated from these center positions.
When, in the image acquired by any camera, the number of identified marker points of the target is less than the preset number, it is judged that occlusion has occurred.
The three-dimensional position information of all object surfaces within the field of view of the image acquisition device is calculated to obtain object surfaces represented as a point cloud; the point cloud is meshed to obtain the geometric boundaries of the objects, and a new position and direction for the image acquisition device are then obtained as follows:
1) a spherical or planar region, or an arbitrarily chosen regular three-dimensional region, is established at the working distance between the image acquisition device and the target center as the search area; the extent of the search area is determined by the reach of the second moving arm;
2) several sampling points are obtained over the search area (for example, by meshing it and taking the grid intersections as sampling points);
3) the center of the image acquisition device is placed at the sampling points in turn, with its field-of-view direction set to the vector from the sampling point to the target center;
4) at each sampling point, the image acquisition device is rotated about the field-of-view direction and, using the internal and external parameters of the cameras, exposures are taken virtually and continuously in the computer until an intended position is found at which every camera can simultaneously acquire all marker points of the target;
finally, the second moving arm moves and rotates the image acquisition device to the computed intended position.
If occlusion still exists at the computed intended position, the search continues until every camera acquires all the marker points preset on the target.
The second moving arm can move with multiple degrees of freedom so that the image acquisition device can reach the new position and direction (for example, a 6-degree-of-freedom mechanical arm providing translation and rotation along the x, y, and z directions).
The second moving arm is provided with a programming interface and motors; it can receive commands and actively move to the position and posture they indicate.
The method provided by the invention is suitable for image-guided surgery: it solves the problem of surgical interruption caused by the tracked target being occluded by the human body, medical instruments, or tissues and organs during the procedure, so that the operation can continue and its continuity is ensured.
Because the cameras' shooting ranges differ in size, finding a position and direction that avoid occlusion requires acquiring information about the objects around the target over a larger range. The image acquisition device can therefore be configured specially: for example, two narrow-field cameras acquire the real-time spatial positions of the target's marker points for positioning, tracking, and occlusion judgment, while two additional wide-angle cameras capture the surroundings of the target and are used to compute a new position and direction of the image acquisition device that avoids the occlusion.
The image acquisition device has a point-cloud acquisition mode and can acquire point-cloud information of the surgical-site surface without contact; this is surface-registered with the surgical-site surface reconstructed from the CT images to obtain the coordinate transformation between the actual surgical site and the CT images, achieving higher surgical precision. Mature, well-recognized algorithms exist for surface registration. The usual point-collection method for surface registration in image-guided surgery is to stroke points on the surgical site (for example, on the skin) with a special surgical tool; optical non-contact registration, compared with such contact registration, avoids deforming the surgical-site surface and therefore achieves higher precision.
The technical scheme of the invention has the following beneficial effects:
The assembly provided by the invention can automatically find the advancing path of a working component (such as a surgical instrument) while avoiding collisions, ensuring safe operation.
The assembly enables the image acquisition mechanism to avoid occlusion so that the target remains continuously tracked and cooperative work proceeds automatically; for example, the operation continues without interruption.
Tracking of the working component is realized from the rotation parameters of the moving arm's joints, without an optical positioning and tracking system, so the operation is not interrupted by loss of tracking of the surgical tool, and possible occlusions are reduced.
Applied to an image-guided surgical instrument, the assembly does not need to track the surgical site and the surgical tool simultaneously; only the surgical site is tracked, which reduces the tracking range of the optical positioning and tracking system, improves precision, and also reduces cost.
The assembly effectively keeps the distance between the optical positioning and tracking system and the surgical site within the optimal precision range, improving accuracy. The optical positioning and tracking system automatically avoids occlusion of the imaging light path during the operation, which facilitates the surgeon's work and provides a better operating experience.
Applied to an image-guided surgical instrument, the assembly can obtain point-cloud information of the surgical-site surface without contact, yielding higher-precision raw data and registration matrices and improving system precision.
Drawings
FIG. 1 is a schematic diagram of one embodiment of 3D reconstruction with cameras;
FIG. 2 is a schematic view of an assembly according to one embodiment of the present invention;
FIG. 3 is a schematic view of another embodiment of the assembly of the present invention;
FIG. 4 is a schematic diagram of an embodiment of the present invention in which the components perform meshing according to the point cloud data of the three-dimensional positions of the objects around the tracked target;
FIG. 5 is a schematic view of one embodiment of a mechanism for rigid connection to a patient according to the present invention;
FIG. 6 is a diagram illustrating an embodiment of gridding and finding new tracking positions and directions according to three-dimensional position information of objects around a tracking target;
FIG. 7 is a schematic view of an embodiment of an object surface represented by a point cloud;
FIG. 8 is a schematic view of an embodiment of a grid of the point cloud shown in FIG. 7;
FIG. 9 is a diagrammatic view of an embodiment of the assembly of the present invention as an instrument for use in surgery.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings. Although the invention has been described with reference to preferred embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.
In this embodiment, the image acquisition device runs two software modules: an optical positioning and tracking system and a visual monitoring and obstacle-avoidance system. Each may be based on its own camera group or on a shared camera group (a group comprising two cameras). As shown in FIG. 1, A and B are the calibrated center positions of two cameras (calibration yields internal parameters such as focal length, and external parameters such as position and direction), and each camera corresponds to an imaging plane. If a coordinate system is established at A and B, the three-dimensional spatial position of a point c can be determined, and by shooting c continuously in real time the positioning and tracking system can track it. Furthermore, if an algorithm (e.g., the SIFT algorithm) is used to find correspondences between feature points (e.g., object-surface feature points) in plane 1 and plane 2, the positions of a large number of points on object surfaces in the scene can be obtained as a point cloud (see FIG. 7); meshing the point cloud (see FIG. 8) restores the object shapes and yields a three-dimensional computer scene. The visual monitoring and obstacle-avoidance system uses this three-dimensional scene information to judge possible blocking or occlusion. Because the positioning and tracking system and the visual obstacle-avoidance system require different precision and shooting ranges (for example, the obstacle-avoidance system needs relatively low precision but a larger shooting range), two camera groups, of two cameras each, can be used to implement them separately.
A moving arm is a parameterized system: its posture can be calculated from the motion angle of each moving joint. Mounting the image acquisition device at one end of a moving arm therefore makes the device's position parameterizable, so its position and direction can be obtained in real time. The surgical instrument is mounted on the end flange of the first moving arm through a clamp, and the positional relationship between the instrument and the flange is fixed when the mechanical structure is designed. The coordinate relationship between the image acquisition device and the second moving arm, and the coordinate transformation between the first and second moving arms, are likewise given by the dimensional parameters set during mechanical design. The coordinate transformation between the surgical instrument and the image acquisition device can therefore be obtained; that is, the position and posture of the surgical tool can be tracked in real time from the structural design parameters and the motion information of the joints.
Because the moving arms are parameterized systems, the positions and postures of the two arms are completely determined by the motion angles of their joints; the computer can therefore accurately obtain, at any moment, the position and posture information of either moving arm, of the image acquisition device, and of the surgical tool mounted on the arm, enabling the two arms to work together automatically.
During the operation, the image acquisition device acquires the posture and position of the tracked target, which is rigidly connected to the surgical site, and from them the position and direction of the surgical path at the site. Using the position-and-posture association between the first and second moving arms, the position and direction of the surgical path are converted from the image-acquisition-device coordinate system into the first-moving-arm coordinate system; the first moving arm then drives the surgical instrument to the preset position and direction and executes the operation. Throughout the procedure, the system obtains the instrument's position and direction in real time from the parameters of each joint of the moving arm, feeds back the surgical progress (such as drilling depth) in real time, and judges whether any deviation exists.
When non-surgical actions are carried out, the first moving arm obtains a target position and drives the surgical instrument toward it, while the computer avoids collisions as follows:
under computer control, the position of the surgical instrument and the position and posture information of the first moving arm are obtained in real time from the rotation angle of each first joint on the first moving arm;
meanwhile, the image acquisition device mounted on the second moving arm adjusts its imaging direction so that it faces, and keeps tracking, the first moving arm and the surgical instrument;
a three-dimensional computer scene of the environment around the surgical instrument and the first moving arm is then obtained, i.e., three-dimensional information of the surface points of surrounding objects is acquired as a point cloud and meshed;
next, from the position of each first joint on the first moving arm and the three-dimensional surface information of surrounding objects, the minimum distances between the first moving arm and the surgical instrument on the one hand and the surrounding objects on the other are calculated, to predict whether the first moving arm or the surgical instrument will collide with a surrounding object;
when a collision is possible, the motion of the first moving arm is stopped.
During the operation, the visual monitoring and obstacle-avoidance software module of this embodiment avoids occlusion and keeps the target tracked as follows:
1. Calibrate the image acquisition device to obtain the internal parameters (e.g., focal length) and external parameters (e.g., position and direction) of each camera.
2. Set the cameras in the image acquisition device to expose and shoot synchronously at a certain frequency, and repeat the following steps for each exposure:
First, identify the marker points of the target, a target being preset with more than one marker point; identify and determine the center position of each marker point of the target in the image acquired by each camera. At the same time, determine the three-dimensional spatial position of each marker-point center, and calculate from these the center position of the target, denoted DC.
When, in the image acquired by any camera, the number of identified marker points of the target is less than the preset number, it is judged that occlusion has occurred (the target center DC may then be unobtainable, in which case the center position acquired by the previous frame's exposure is taken as the current frame's DC, and the subsequent steps continue, as sketched below).
Calculate the three-dimensional position information of all object surfaces in the current field of view of the image acquisition device to obtain object surfaces represented as a point cloud (see FIG. 7); mesh the point cloud (see FIG. 8) to obtain the geometric boundaries of the objects, and obtain a new camera position and direction by the following method:
1) At the working distance from the target center DC, establish a spherical or planar region, or an arbitrarily chosen regular three-dimensional region, denoted CS; its range is determined by the reach of the moving arm.
2) Obtain several sampling points on CS.
3) Place the center of the image acquisition device at these sampling points in turn, with its field-of-view direction set to the vector from the sampling point to the target center DC.
4) At each sampling point, using the internal and external parameters of the cameras, expose and shoot virtually and continuously in the computer until a position is found at which all marker points of the target are captured by the cameras simultaneously; then stop.
If the computed direction and position are still occluded, the above process is repeated until an unoccluded position is obtained; the loop is summarized in the sketch below.
Because the cameras' shooting ranges differ in size, finding a position and direction that avoid occlusion requires acquiring information about the objects around the target over a larger range. Two narrow-field cameras can be dedicated to acquiring the real-time spatial positions of the target's marker points for optical positioning, tracking, and occlusion judgment, while two additional wide-angle cameras capture the surroundings of the target; a new camera position and direction that avoid the occlusion are then computed by the algorithm above, realizing the visual monitoring and obstacle avoidance. In this case the coordinate-system relationship between the two camera groups must be calculated.
Referring to FIG. 2, two UR3 robots from Universal Robots are used; the UR3 has 6 degrees of freedom. A camera assembly 300 comprising two cameras (e.g., Microsoft LifeCam Studio) is mounted on the end flange of the second moving arm 100 to form the image acquisition device, and the images it acquires are transmitted to a computer for optical positioning and tracking and for visual monitoring and obstacle avoidance; a surgical instrument 400, such as a bone drill, is flange-mounted on the end of the first moving arm 200.
In another possible embodiment, shown in FIG. 3, a first camera 500 and a second camera 510 are flange-mounted at the end of the second moving arm 100 to form the image acquisition device; the first camera 500 is used for optical positioning and tracking, the second camera 510 for visual monitoring and obstacle avoidance, and the two cameras may also be mounted at 90 degrees to each other. One advantage of this embodiment is that cameras of different performance and parameters can be selected for the differing requirements of optical positioning and visual monitoring. The structural design fixes the mounting positions of cameras 500 and 510 uniquely and keeps their shooting directions consistent, so the coordinate transformation between them can be realized by a single translation; denoting the translation matrix M, field-of-view target conversion between camera 500 and camera 510 is realized through M. For example, position coordinates or point-cloud data captured by the second camera 510 are converted through M into the first camera 500's coordinate system, unifying the data of the two cameras and facilitating subsequent visual monitoring and obstacle avoidance.
In FIG. 2, the positional relationships (coordinate transformations) between the second moving arm 100 and the image acquisition device 300, between the first moving arm 200 and the surgical instrument 400, and between the two arms can be obtained by dimensional measurement when the mechanical structure is installed. In addition, since the positions of the two arms are completely determined by the motion angles of their joints, and these angles are returned as internal parameters by the arms, the positional relationship (i.e., coordinate-system transformation) between the image acquisition device 300 and the first moving arm 200 and surgical instrument 400 can be computed in real time; no identification markers for optical positioning and tracking need to be installed on the surgical instrument 400.
FIG. 9 is a diagrammatic view of an embodiment of the assembly as an instrument used in surgery. As shown in FIG. 9, during a surgical task the tracked target 610 is rigidly connected to the surgical site 900, so the movement of target 610 reflects the movement of the site. The surgical site 900 includes a surgical path 910 (e.g., a drilling direction). A common way to obtain path 910 is to plan the operation on the CT images and mark the path there; several points are then taken on the CT images and the corresponding points on the surgical site, and point registration unifies the CT coordinate system with the surgical-site coordinate system, finally transferring the CT-based path to the actual site. The image acquisition mechanism 300 captures the position and orientation of target 610 in real time, and the computer computes the position and orientation of the surgical path in real time, described in the coordinate system of the image acquisition mechanism 300. Through the association between the second moving arm 100 and the first moving arm 200, the computer converts the position and posture of path 910 into the coordinate system of the first moving arm 200 and drives arm 200 to execute the procedure along the planned surgical path. From the angle of each moving joint of arm 200, the computer calculates the surgical progress (such as drilling depth and direction) in real time and judges whether the direction deviates, whether the preset position has been reached, and so on.
When a non-surgical task is performed (for example, automatically retracting the arm after the surgical instrument is powered off), once the first moving arm 200 receives a command to drive the surgical instrument 400 to a specified position, the computer obtains in real time the positional relationship of instrument 400 relative to the image acquisition device 300 and the second moving arm 100, and computes a new position and direction for device 300 that keep instrument 400 within its tracking and monitoring range. Device 300 photographs the environment around instrument 400 in real time to obtain point-cloud data of the surrounding objects' surfaces, with coordinates expressed relative to device 300. The point-cloud data are meshed into solid surfaces (point-cloud acquisition and meshing are mature algorithms, and mature products on the market can acquire and mesh point clouds in real time, such as the Microsoft Kinect). In the simplified schematic of FIG. 4, the first object 310 and the second object 320 are object surfaces obtained by meshing the point-cloud data from device 300. From the joint angles of the first moving arm 200 and the fixed relative position of instrument 400 on it, a model of arm 200 and instrument 400 can be built virtually in the computer and drawn in the coordinate system of device 300, giving the relative positions of arm 200 and instrument 400 with respect to objects 310 and 320; the computer then calculates in real time the distance of arm 200 and instrument 400 from objects 310 and 320. When the distance is less than a set safety value, the motion of arm 200 is stopped, preventing a collision. Such distance calculation belongs to the field of collision detection, for which corresponding algorithms are available.
FIG. 5 shows a mechanism 600 rigidly attached to the patient and carrying the tracked target 610; because the mechanism 600 is rigidly attached, tracking the change in position of mechanism 600 is equivalent to tracking the patient's movement. The target 610 includes 4 marker points 620, which define a center 630 located centrally with respect to the marker points 620.
The image acquisition device 300 exposes and photographs the target; the spatial position of each marker point 620 is obtained using the binocular positioning principle described above, and the center position 630 of the tracked target is calculated.
All the marker points are identified in the image generated by each camera of the image acquisition device 300; when the number of marker points in the image captured by any one camera of device 300 is less than 4, it is judged that occlusion has occurred.
From the images obtained by the two cameras of the image acquisition device 300, the three-dimensional position information of all object surfaces within the device's field of view is obtained by algorithms common in the computer-vision field, such as SIFT; the spatial positions are then meshed to obtain the object entities 710 and 720 shown in FIG. 6.
These entities are arranged in a computer rendering environment, such as an OpenGL environment, and the scene contains all the entities mentioned above as well as the tracked target. As shown in FIG. 6, the scene information 700 arranged in the computer contains entities whose positional relationships are the same as in the real world.
In this computer scene, the two cameras of the image acquisition device 300 are positioned, and a plane 800 is established near the camera group, perpendicular to the line between the position of device 300 and the tracked target (i.e., mechanism 600; see FIG. 6); the distance between the plane and the tracked target lies within the cameras' optimal precision range, and the size of the plane is determined by the reach of the second moving arm 100. The plane is first divided evenly to yield candidate positions; as shown in FIG. 6, there are 12 representative position points 810 on plane 800, and virtual photography is attempted at these 12 positions in turn. The image acquisition device 300 is placed at each representative point 810 and aimed at the target; marker points are identified on the resulting images, and if the images from both cameras each identify 4 marker points, the position is considered unoccluded. (Virtual photography means projecting objects onto the camera imaging plane inside the computer, simulating the camera imaging principle with the camera's parameters, such as direction and focal length.)
The plane may be subdivided several times; taking the unoccluded position closest to the current position of the image acquisition device 300 as the target position reduces the movement of the second moving arm 100 and yields a faster mechanical response.
After determining in which direction to move to avoid the occlusion and keep the target continuously tracked, the computer transmits a command to the second moving arm 100, which automatically completes the repositioning of the image acquisition device.
If the new position becomes occluded again, the process is repeated until an unoccluded position is found. If the attempts fail (for example, the tracked target is wrapped by an object), the computer is notified that no position can be found, and the error is reported to the user.
The device provided by the invention facilitates registration: the point cloud is obtained by camera shooting without contacting the human body, ensuring that the surface tissue of the surgical site (such as the skin) is not deformed, so the obtained point cloud is more accurate.

Claims (5)

1. An automated cooperative work assembly, comprising:
the first moving arm comprises a plurality of first support arms and at least 1 first moving joint;
the second moving arm comprises a plurality of second support arms and at least 1 second moving joint;
the working component is arranged at one end of the first moving arm and is controlled by the first moving arm to move;
the image acquisition device is arranged at one end of the second moving arm, moves under the control of the second moving arm, performs optical positioning and tracking of the target, and identifies and avoids occlusion of the target;
respectively digitizing the positions, postures and movement speeds of the first movement arm and the second movement arm, establishing the association relationship of the first movement arm and the second movement arm, and implementing cooperative movement under the control of a computer;
the method for implementing the cooperative motion is as follows:
the image acquisition device acquires the position and posture of the tracked target in real time; the computer acquires the position and posture of the working component in real time from the rotation angle of each first joint on the first moving arm; using the association between the first moving arm and the second moving arm, the system converts the position and posture of the tracked target from the image-acquisition-device coordinate system into the first-moving-arm coordinate system, so that the first moving arm obtains the position and posture of the tracked target in its own coordinate system and hence the specific position and direction of the preset working path, and is driven to complete the work task near the tracked target along that path;
the computer acquires, in real time, the position information and posture information of the working component and of the first moving arm;
meanwhile, the image acquisition device mounted on the second moving arm adjusts its imaging direction in real time so that it faces, and keeps tracking, the first moving arm and the working component;
a three-dimensional computer scene of the environment around the working component and the first moving arm is then obtained, that is, three-dimensional information of the surface points of surrounding objects is acquired as a point cloud and meshed;
next, from the positions of all first moving joints on the first moving arm and the three-dimensional surface information of surrounding objects, the minimum distance between the first moving arm and the surrounding objects and the minimum distance between the working component and the surrounding objects are calculated, to predict whether the first moving arm or the working component will collide with a surrounding object;
when a collision is possible, the motion of the first moving arm is stopped.
2. The automated cooperative work assembly of claim 1, wherein the image acquisition device optically positions and tracks the target, and the method for identifying and avoiding occlusion of the target comprises:
firstly, calibrating an image acquisition device to obtain internal parameters and external parameters of each camera;
then, setting synchronous exposure shooting of each camera in the image acquisition device according to a set frequency, and repeating the following steps for each exposure:
first, the marker points of the target are identified, a target being preset with more than one marker point, and the center position of each marker point of the target is identified and determined in the image acquired by each camera; at the same time, the three-dimensional spatial position of each marker-point center is determined, and the center position of the target is calculated from these center positions;
when, in the image acquired by any camera, the number of identified marker points of the target is less than the preset number, it is judged that occlusion has occurred;
the three-dimensional position information of all object surfaces in the current field of view of the image acquisition device is calculated to obtain object surfaces represented as a point cloud, and the point cloud is meshed to obtain the geometric boundaries of the objects; a new position and direction of the image acquisition device are then obtained as follows:
1) a spherical or planar region, or an arbitrarily chosen regular three-dimensional region, is established at the working distance between the image acquisition device and the target center as the search area, the extent of the search area being determined by the reach of the second moving arm;
2) several sampling points are obtained over the search area;
3) the center of the image acquisition device is placed at the sampling points in turn, with its field-of-view direction set to the vector from the sampling point to the target center;
4) at each sampling point, the image acquisition device is rotated about the field-of-view direction and, using the internal and external parameters of the cameras, exposures are taken virtually and continuously in the computer until an intended position is found at which every camera can simultaneously acquire all marker points of the target;
finally, the second moving arm moves and rotates the image acquisition device to the computed intended position.
3. The automated cooperative assembly of claim 2, wherein the internal parameter comprises a focal length.
4. The automated cooperative assembly of claim 2, wherein the external parameters include position and orientation.
5. The automated cooperative work assembly according to claim 2, wherein, if occlusion still exists at the computed intended position, the search is continued until every camera acquires all the marker points preset on the target.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810051219.2A CN110051436B (en) 2018-01-18 2018-01-18 Automated cooperative work assembly and application thereof in surgical instrument

Publications (2)

Publication Number Publication Date
CN110051436A CN110051436A (en) 2019-07-26
CN110051436B (en) 2020-04-17

Family

ID=67315617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810051219.2A Active CN110051436B (en) 2018-01-18 2018-01-18 Automated cooperative work assembly and application thereof in surgical instrument

Country Status (1)

Country Link
CN (1) CN110051436B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110464469B (en) * 2019-09-10 2020-12-01 深圳市精锋医疗科技有限公司 Surgical robot, method and device for controlling distal end instrument, and storage medium
FR3103097B1 (en) * 2019-11-19 2021-11-05 Quantum Surgical Navigation method for positioning a medical robot
CN111716352B (en) * 2020-05-13 2022-04-29 中国电力科学研究院有限公司 Power distribution network live working mechanical arm navigation obstacle avoidance method and system
CN111728696A (en) * 2020-06-12 2020-10-02 天津理工大学 Craniotomy robot system and craniotomy method thereof
CN111956329B (en) * 2020-08-12 2022-04-26 中国科学院深圳先进技术研究院 Calibration method, system, terminal and storage medium for double-arm robot
US20230256607A1 (en) * 2020-08-19 2023-08-17 Beijing Surgerii Robotics Company Limited Robot system and control method thereof
WO2022037356A1 (en) * 2020-08-19 2022-02-24 北京术锐技术有限公司 Robot system and control method
CN112704564A (en) * 2020-12-22 2021-04-27 上海微创医疗机器人(集团)股份有限公司 Surgical robot system, collision detection method, system, and readable storage medium
CN114795489A (en) * 2021-01-06 2022-07-29 深圳市精锋医疗科技股份有限公司 Surgical robot and method and control device for guiding surgical arm to move
CN112746812A (en) * 2021-01-22 2021-05-04 安百拓(南京)建筑矿山设备有限公司 Illumination and camera shooting follow-up system and control method of drill jumbo and drill jumbo
CN113499138B (en) * 2021-07-07 2022-08-09 南开大学 Active navigation system for surgical operation and control method thereof
CN113499137B (en) * 2021-07-07 2022-07-12 南开大学 Surgical robot navigation positioning system and measurement visual angle multi-target optimization method
CN113413216B (en) * 2021-07-30 2022-06-07 武汉大学 Double-arm puncture robot based on ultrasonic image navigation
WO2023040897A1 (en) * 2021-09-14 2023-03-23 武汉联影智融医疗科技有限公司 Method and system for calculating spatial registration pose of surgical robot
CN114305613B (en) * 2021-12-30 2024-01-30 武汉联影智融医疗科技有限公司 Image-guided interventional puncture system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6659939B2 (en) * 1998-11-20 2003-12-09 Intuitive Surgical, Inc. Cooperative minimally invasive telesurgical system
US8010180B2 (en) * 2002-03-06 2011-08-30 Mako Surgical Corp. Haptic guidance system and method
US8679096B2 (en) * 2007-06-21 2014-03-25 Board Of Regents Of The University Of Nebraska Multifunctional operational component for robotic devices

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1985773A (en) * 2005-12-22 2007-06-27 天津市华志计算机应用技术有限公司 Celebral operating robot system based on optical tracking and closed-loop control and its realizing method
CN102791214A (en) * 2010-01-08 2012-11-21 皇家飞利浦电子股份有限公司 Uncalibrated visual servoing using real-time velocity optimization
CN106456263A (en) * 2014-03-17 2017-02-22 直观外科手术操作公司 Methods and devices for tele-surgical table registration
CN105468033A (en) * 2015-12-29 2016-04-06 上海大学 Control method for medical suspension alarm automatic obstacle avoidance based on multi-camera machine vision

Also Published As

Publication number Publication date
CN110051436A (en) 2019-07-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant