CN110051436A - Automation collaborative work package and its application in surgical instrument - Google Patents

Automation collaborative work package and its application in surgical instrument

Info

Publication number
CN110051436A
CN110051436A (application CN201810051219.2A)
Authority
CN
China
Prior art keywords
movement arm
target
movement
capturing device
video capturing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810051219.2A
Other languages
Chinese (zh)
Other versions
CN110051436B (en)
Inventor
姚君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai She Cheng Cheng Medical Equipment Co Ltd
Original Assignee
Shanghai She Cheng Cheng Medical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai She Cheng Cheng Medical Equipment Co Ltd
Priority to CN201810051219.2A
Publication of CN110051436A
Application granted
Publication of CN110051436B
Legal status: Active
Anticipated expiration


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30: Surgical robots
    • A61B 34/70: Manipulators specially adapted for use in surgery
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 2090/371: Surgical systems with images on a monitor during operation with simultaneous use of two cameras

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An automated collaborative work assembly comprises a first movement arm, a second movement arm, a workpiece, and a video capturing device. The video capturing device is mounted at one end of the second movement arm and is controlled and moved by it; it performs optical positioning and tracking of a target, and identifies and avoids occlusion of the target. The position, posture, and movement velocity of the first and second movement arms are each parameterized, an association between the two arms is established, and the arms move cooperatively under computer control. Applied to image-guided surgical instruments, the assembly can complete a surgical procedure collaboratively while avoiding collisions; it also resolves surgical interruptions caused by the human body, medical instruments, or tissues and organs occluding the tracked target, so that the operation can continue without loss of continuity.

Description

Automation collaborative work package and its application in surgical instrument
Technical field
The present invention relates to mechanical devices capable of automatic operation, and more particularly to an assembly whose parts cooperate with one another under computer control to complete tasks automatically, and to its application in surgical instruments.
Background technique
Surgical robot systems based on optical positioning, such as the ROSA Spine system of the French company MedTech, the MAKO system of the American company Stryker, and the "Phegda" system of the Tianzhihang company, all use optical positioning to track the surgical site and the surgical tool (or mechanical arm). These systems are configured with only one mechanical arm, used to hold the surgical tool; the optical positioning system is separated from the body carrying the mechanical arm and is mounted on a separate mobile stand. A drawback of such systems is that large-range automatic movement of the mechanical arm carries risk: the arm may collide with objects in the surrounding environment and cause damage. In addition, such a system requires the optical tracking system to recognize and track both the surgical site and the surgical tool at every moment for the operation to proceed smoothly. Tracking two moving objects requires a larger recognition and tracking range, and the larger the range, the lower the positioning accuracy of the optical tracking system. Moreover, because optical tracking is based on camera imaging, any occlusion of the shooting light path, whether of the surgical site or of the surgical tool, interrupts the operation. Further, since both tracked targets (e.g., the surgical site and the surgical tool) may move, the operator must deliberately keep the camera's light path clear and keep the surgical tool facing the tracking system at all times. Because a surgeon manipulates instruments with great freedom of motion (translation, rotation, flipping), the surgical instrument is easily occluded and tracking is lost, which inconveniences the surgeon and degrades the experience of operating the equipment.
Summary of the invention
One object of the present invention is to provide an assembly whose components work together automatically under computer control to complete tasks.
Another object of the present invention is to provide an assembly whose components can, under computer control, collaborate automatically to complete a surgical procedure.
A further object of the present invention is to provide an assembly for image-guided surgical instruments that resolves surgical interruptions caused by the human body, medical instruments, or tissues and organs occluding the tracked target, so that the operation can continue without loss of continuity.
An assembly, comprising:
a first movement arm, comprising several first support arms and at least one first movable joint;
a second movement arm, comprising several second support arms and at least one second movable joint;
a workpiece, mounted at one end of the first movement arm and controlled and moved by it;
a video capturing device, mounted at one end of the second movement arm and controlled and moved by it, which performs optical positioning and tracking of a target and identifies and avoids occlusion of the target.
The position, posture, and movement velocity of the first and second movement arms are each parameterized; an association is established between the position and posture information of the first movement arm and that of the second movement arm, realizing conversion between the first movement arm's coordinate system and the second movement arm's coordinate system, and the arms move cooperatively under computer control.
The video capturing device provided by the invention comprises several cameras arranged in a regular configuration.
The assembly of the invention implements cooperative motion as follows:
The video capturing device obtains the position and posture of the tracked target in real time, and the computer knows the position and posture of the workpiece (e.g., an instrument used in surgery) in real time from the rotation angle of each first joint on the first movement arm. Using the association between the first and second movement arms, the tracked target's position and posture under the video capturing device's coordinate system are converted into its position and posture under the first movement arm's coordinate system. The first movement arm thereby obtains the tracked target's position and posture in its own coordinate system, derives the specific location and direction of the planned operating path, and is driven along the planned path near the tracked target to complete the task.
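The coordinate conversion above can be sketched with 4x4 homogeneous transforms. The sketch below assumes NumPy; the extrinsic calibration values and frame names are illustrative assumptions, not values from the patent:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical fixed calibration: the camera frame expressed in the
# first movement arm's base frame (obtained once, e.g. from mechanical design).
T_arm1_cam = pose(np.eye(3), np.array([0.5, 0.0, 0.3]))

def target_in_arm1(T_cam_target):
    """Convert a target pose measured in the camera frame into the first arm's frame."""
    return T_arm1_cam @ T_cam_target

# A target seen 1 m in front of the camera, axes aligned with the camera's.
T_cam_target = pose(np.eye(3), np.array([0.0, 0.0, 1.0]))
T = target_in_arm1(T_cam_target)   # target pose in the first arm's coordinate system
```

In a real system `T_arm1_cam` would itself be recomputed each cycle from the joint angles of the second movement arm, as the description explains.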
The computer knows in real time the position and posture information of the workpiece (e.g., an instrument used in surgery) and of the first movement arm.
The video capturing device mounted on the second movement arm adjusts its imaging direction in real time to face the first movement arm and the workpiece, and keeps tracking both.
It then obtains, in a timely manner, a three-dimensional computer scene of the environment around the workpiece and the first movement arm, i.e., the three-dimensional coordinates of points on the surfaces of surrounding objects, producing a point cloud which is then gridded.
From the position of each first movable joint on the first movement arm and the three-dimensional surface-point information of the surrounding objects, the minimum distance between the first movement arm (and the workpiece) and surrounding objects is computed, and it is judged in advance whether the first movement arm or the workpiece will collide with a surrounding object.
When a collision is possible, the motion of the first movement arm is stopped.
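The minimum-distance pre-judgment can be sketched as follows, modelling the arm links as line segments between joint positions and the environment as a point cloud. This is a simplified illustration with made-up coordinates and safety threshold, not the patent's actual algorithm:

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the line segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def min_clearance(joints, cloud):
    """Minimum distance between the arm's links (segments between consecutive
    joint positions) and an environment point cloud."""
    return min(point_segment_dist(p, joints[i], joints[i + 1])
               for i in range(len(joints) - 1) for p in cloud)

# Hypothetical joint positions of the first movement arm and two environment points.
joints = [np.array([0., 0., 0.]), np.array([0., 0., 1.]), np.array([1., 0., 1.])]
cloud = np.array([[0.5, 0.5, 1.0], [2.0, 0.0, 0.0]])
d = min_clearance(joints, cloud)

SAFETY = 0.1
if d < SAFETY:
    print("stop motion")   # pre-empt the collision, as the text describes
```

A production system would use the gridded point cloud and swept volumes rather than raw points and zero-thickness segments, but the stop condition has the same shape.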
While executing a task, the second movement arm and the video capturing device also resolve surgical interruptions caused by the human body, medical instruments, or tissues and organs occluding the tracked target, so that the operation can continue without loss of continuity.
During surgery, the video capturing device tracks the surgical site by tracking a marker rigidly connected to it. The marker, also called a target, carries several index points with specific shapes and colors and strong reflectivity to specific light, so that they can be captured with high contrast by the cameras and efficiently identified by algorithm. By recognizing the index points and using binocular positioning, the position and posture of the target can be determined. The video capturing device keeps tracking the target while simultaneously acquiring, in real time, the three-dimensional positions of object surfaces in its field of view. If the device loses track of the target, whether because it is occluded or because the surgical site moves, a new tracking position and direction are computed automatically from the three-dimensional surface information of objects in the scene; the second movement arm then drives the video capturing device to the new position and direction, restoring real-time tracking of the surgical site.
A method provided by the invention for avoiding occlusion and maintaining target tracking comprises the following steps:
1. Calibrate the video capturing device to obtain each camera's internal parameters (e.g., focal length) and external parameters (e.g., position and direction).
2. Each camera in the video capturing device exposes synchronously at a set frequency; for each exposure, the following steps are repeated:
First, identify the index points of the target (a target has one or more index points by default), determine the center of each target index point in the image acquired by each camera, determine the three-dimensional spatial position of each index-point center, and compute the center of the target from these centers.
If, in the image acquired by any camera, the number of identified index points is less than the expected number, an occlusion is judged to have occurred.
Compute the three-dimensional positions of all object surfaces within the video capturing device's field of view, obtaining object surfaces represented as a point cloud; grid the point cloud to obtain the geometric boundaries of the objects, then obtain a new position and direction for the video capturing device as follows:
1) Establish a spherical surface or planar region at the current distance between the video capturing device and the target center, or an arbitrarily chosen regular three-dimensional region of space, as the search region; the extent of the region is determined by the reach of the second movement arm.
2) Obtain several sampling points within the search region (e.g., by gridding it and taking grid intersections as sampling points).
3) Successively place the center of the video capturing device at these sampling points, with its viewing direction set to the vector from the sampling point to the target center.
4) At each sampling point, virtually rotate the video capturing device about the viewing direction according to the cameras' internal and external parameters, and simulate continuous exposure in the computer, until a candidate position is found from which every camera captures all index points of the target simultaneously.
Finally, the second movement arm moves and rotates the video capturing device to the computed candidate position.
If occlusion persists at the computed candidate position, the search continues until each camera captures all of the target's preset index points.
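The sampling search in steps 1) to 4) might look like the following sketch, which samples candidate viewpoints on a sphere around the target center and keeps the first one from which every index point is visible. The spherical occluder model, the random sampling, and all numbers are illustrative assumptions standing in for the gridded scene geometry:

```python
import numpy as np

def blocked(cam, pt, occ_center, occ_r):
    """True if the sight line cam -> pt passes within occ_r of an occluding sphere."""
    d = pt - cam
    t = np.clip(np.dot(occ_center - cam, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(occ_center - (cam + t * d)) < occ_r

def find_viewpoint(markers, center, radius, occ_center, occ_r, n=200, seed=0):
    """Sample candidate camera positions on a sphere of the given radius around
    the target center; return the first one that sees every marker unoccluded."""
    rng = np.random.default_rng(seed)
    for _ in range(n):
        v = rng.normal(size=3)
        cam = center + radius * v / np.linalg.norm(v)   # viewing direction is center - cam
        if all(not blocked(cam, m, occ_center, occ_r) for m in markers):
            return cam
    return None   # keep searching with a different region, as the text says

# Three index points near the target center, with a blocking object above them.
markers = [np.array([0.05, 0., 0.]), np.array([-0.05, 0., 0.]), np.array([0., 0.05, 0.])]
center = np.zeros(3)
occluder = np.array([0., 0., 0.5])
cam = find_viewpoint(markers, center, 1.0, occluder, 0.2)
```

The patent's step 2) samples grid intersections rather than random directions, and its visibility test is a virtual exposure against the gridded point cloud; the selection logic is the same.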
The second movement arm can move in several degrees of freedom, guaranteeing that the video capturing device reaches the new position and direction (e.g., a six-degree-of-freedom mechanical arm with translation and rotation along the x, y, and z directions).
The second movement arm has a programming interface and motors; it can receive commands and actively move to the commanded position and posture.
The method provided by the invention is well suited to image-guided surgery: it resolves surgical interruptions caused by the human body, medical instruments, or tissues and organs occluding the tracked target, so that the operation can continue without loss of continuity.
Because cameras differ in shooting range, finding a position and direction that avoids occlusion requires acquiring information about objects around the target over a larger range. The video capturing device can therefore be specially configured: for example, two narrow-angle cameras dedicated to acquiring the real-time spatial positions of the target's index points, used for positioning, tracking, and occlusion detection; and two additional wide-angle cameras that capture the target's surroundings and are used to compute a new, occlusion-free position and direction for the video capturing device.
The video capturing device has a point-cloud acquisition mode that captures, without contact, point-cloud information of the surgical-site surface. Registering this surface against the surgical-site surface reconstructed from CT images yields the coordinate transformation between the actual surgical site and the CT images, achieving higher surgical precision. Mature, well-established surface registration algorithms exist. The common registration method in image-guided surgery is to touch points on the surgical site (e.g., on the skin) with a special surgical tool; optical contactless registration, by contrast, avoids deforming the surgical-site surface and can achieve higher precision than contact registration.
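The rigid-fit step at the core of such surface registration can be sketched with the Kabsch/SVD method. This sketch assumes known point correspondences (practical surface registration, e.g. ICP, alternates a correspondence search with exactly this step) and uses synthetic data:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    via the Kabsch/SVD method; assumes row i of P corresponds to row i of Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1., 1., np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Synthetic check: rotate and translate a surface patch, then recover the transform.
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.],
                   [np.sin(th),  np.cos(th), 0.],
                   [0.,          0.,         1.]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ R_true.T + t_true
R, t = rigid_fit(P, Q)
```

Here `P` would play the role of the contactless point cloud and `Q` the CT-reconstructed surface points; the returned `(R, t)` is the coordinate transformation the text describes.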
The technical solution of the present invention achieves the following beneficial effects:
The assembly automatically finds a travel path for the workpiece (e.g., a surgical instrument) while avoiding collisions, guaranteeing safe operation.
The assembly enables the image capturing mechanism to avoid occlusion and keep tracking the target, guaranteeing uninterrupted automated collaboration, e.g., a surgical procedure that continues without interruption.
Tracking of the workpiece is achieved from the movement arm's joint parameters, without relying on the optical tracking system; the operation is not interrupted by loss of tracking of the surgical tool, and potential occlusions are reduced.
Applied to image-guided surgical instruments, the assembly does not need to track the surgical site and the surgical tool simultaneously; only the surgical site must be tracked, which reduces the tracking range of the optical positioning system, improves precision, and lowers cost.
The assembly keeps the distance between the optical tracking system and the surgical site within the optimal precision range, improving precision. During surgery, the optical tracking system can automatically avoid occlusion of the shooting light path, facilitating the surgeon's work and providing a better surgical experience.
Applied to image-guided surgical instruments, the assembly acquires point-cloud information of the surgical-site surface without contact, obtaining higher-precision raw data and registration matrices and improving system precision.
Description of the drawings
Fig. 1 is a schematic diagram of an embodiment of 3D reconstruction with cameras;
Fig. 2 is a schematic diagram of one embodiment of the assembly of the invention;
Fig. 3 is a schematic diagram of another embodiment of the assembly of the invention;
Fig. 4 is a schematic diagram of an embodiment in which the assembly grids the point-cloud data of three-dimensional positions of objects around the tracked target;
Fig. 5 is a schematic diagram of an embodiment of a mechanism of the invention rigidly connected to the patient;
Fig. 6 is a schematic diagram of gridding the three-dimensional position information of objects around the tracked target and finding a new tracking position and direction;
Fig. 7 is a schematic diagram of an embodiment of an object surface represented as a point cloud;
Fig. 8 is a schematic diagram of the form obtained by gridding the point cloud of Fig. 7;
Fig. 9 is a schematic diagram of an embodiment of the assembly used as a device in surgery.
Specific embodiment
The technical solutions of the invention are described in detail below with reference to the drawings. The embodiments only illustrate the technical solutions of the invention and do not limit them; although the invention is described in detail with reference to preferred embodiments, those skilled in the art should appreciate that the technical solutions may be modified or equivalently replaced without departing from their spirit and scope, and all such changes fall within the scope of the claims of the present invention.
In this embodiment, the video capturing device carries different software modules, namely an optical positioning and tracking system and a vision-monitoring obstacle avoidance system. Each may be based on its own group of cameras, or both on one common group (a group comprising two cameras). In Fig. 1, A and B are the centers of two cameras whose internal parameters (e.g., focal length) and external parameters (position and direction) have been obtained by calibration, each camera having a corresponding imaging plane. Point x is the image of point c in plane 1, and x' is the image of c in plane 2. Drawing a ray from A through x and a ray from B through x', the intersection of the two rays is the position of c. If a coordinate system is established at the positions of A and B, the three-dimensional spatial position of c can be determined; shooting c continuously in real time realizes tracking of c by the positioning and tracking system. Furthermore, if an algorithm (such as SIFT) is used to find correspondences of multiple feature points (e.g., points on object surfaces) between plane 1 and plane 2, the positions of a large number of points on each object surface in the captured scene can be obtained, yielding a point cloud (Fig. 7); gridding the point cloud (Fig. 8) restores the object shapes and produces a three-dimensional computer scene. The vision-monitoring obstacle avoidance system uses this three-dimensional scene information to judge possible obstruction or occlusion. Because the positioning and tracking system and the vision obstacle avoidance system differ in required precision and shooting range (the vision-monitoring obstacle avoidance system requires relatively lower precision but a larger shooting range), they can also be realized with two separate groups of two cameras each.
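The two-ray intersection described for Fig. 1 can be sketched as midpoint triangulation: with noisy rays the usual choice is the midpoint of the common perpendicular of the two rays, which reduces to the exact intersection when the rays meet. Camera positions and the marker location below are illustrative:

```python
import numpy as np

def triangulate(A, u, B, v):
    """Midpoint of the common perpendicular of rays A + s*u and B + t*v:
    the reconstructed 3D point c from its images in the two cameras."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    w = A - B
    a, b, cc = np.dot(u, u), np.dot(u, v), np.dot(v, v)
    d, e = np.dot(u, w), np.dot(v, w)
    den = a * cc - b * b                       # nonzero unless rays are parallel
    s = (b * e - cc * d) / den
    t = (a * e - b * d) / den
    return 0.5 * ((A + s * u) + (B + t * v))

A = np.array([0., 0., 0.])       # left camera center
B = np.array([0.2, 0., 0.])      # right camera center (20 cm baseline)
c = np.array([0.05, 0.1, 1.0])   # true marker position
p = triangulate(A, c - A, B, c - B)   # ray directions come from the image points
```

In practice the ray directions `u` and `v` are derived from the pixel coordinates x and x' using the calibrated internal and external parameters.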
A movement arm is a parametric system: its posture can be calculated from the motion angle of each movable joint. Mounting the video capturing device at one end of a movement arm enables parametric setting and acquisition of the device's position, so its position and direction are obtained in real time. The surgical instrument is mounted by a fixture to the end flange of the first movement arm, and the positional relationship between the instrument and the flange is known from the mechanical structure design. The coordinate relationship between the video capturing device and the second movement arm, and the coordinate transformation between the first and second movement arms, can likewise be obtained from the dimensional parameters set in the mechanical structure design. The coordinate transformation between the surgical instrument and the video capturing device can therefore be obtained, and real-time tracking of the surgical tool's position and posture is realized from these structural design parameters and the joint motion information alone.
Because a movement arm is a parametric system, the positions and postures of the two movement arms are completely determined by the motion angles of their movable joints; the computer can know exactly, at any moment, the position and posture of either movement arm and of the video capturing device and surgical tool mounted on them. This is what allows the two arms to collaborate automatically.
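As a toy illustration of how a parametric arm's pose follows from its joint angles alone, the following planar forward-kinematics sketch (not the UR3's actual six-axis kinematics) computes a tip position and heading from link lengths and joint angles:

```python
import numpy as np

def fk_planar(lengths, angles):
    """Tip position and orientation of a planar serial arm: each joint angle is
    relative to the previous link, so the pose is a pure function of the angles."""
    x = y = 0.0
    th = 0.0
    for L, a in zip(lengths, angles):
        th += a
        x += L * np.cos(th)
        y += L * np.sin(th)
    return np.array([x, y]), th

# Two unit links: first joint at +90 degrees, second at -90 degrees.
tip, heading = fk_planar([1.0, 1.0], [np.pi / 2, -np.pi / 2])
```

A real implementation would chain the 4x4 joint transforms of the six-degree-of-freedom arm, but the principle the text relies on is the same: pose is recoverable from joint readings without any external tracking.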
When performing a surgical action, the video capturing device obtains the posture and position of the tracked target at the surgical site; since the target is rigidly connected to the surgical site, the position and direction of the surgical path at the site are obtained. Through the position and posture association between the first and second movement arms, the position and direction of the surgical path under the video capturing device's coordinate system are converted into the position and direction under the first movement arm's coordinate system; the first movement arm drives the surgical instrument to the planned position and direction and executes the operation. During surgery, the system knows the instrument's position and direction in real time from the arm's joint parameters, feeds back the procedure in real time (e.g., drilling depth), and judges whether there is deviation.
When performing a non-surgical movement, the first movement arm is given a target position and drives the surgical instrument toward it; the computer avoids collisions as follows:
Under computer control, the position of the surgical instrument and the position and posture of the first movement arm are known in real time from the rotation angle of each first joint on the first movement arm.
The video capturing device on the second movement arm adjusts its imaging direction to face the first movement arm and the surgical instrument, and keeps tracking both.
It then obtains, in a timely manner, a three-dimensional computer scene of the environment around the surgical instrument and the first movement arm, i.e., the three-dimensional coordinates of surface points of surrounding objects, producing a point cloud which is gridded.
From the position of each first joint on the first movement arm and the three-dimensional surface-point information of surrounding objects, the minimum distance between the first movement arm (and the surgical instrument) and surrounding objects is computed, and it is judged in advance whether a collision will occur.
When a collision is possible, the motion of the first movement arm is stopped.
While the operation is in progress, this embodiment provides a method, via the vision-monitoring obstacle avoidance software module, for avoiding occlusion and maintaining target tracking:
1. Calibrate the video capturing device to obtain each camera's internal parameters (e.g., focal length) and external parameters (e.g., position and direction).
2. Each camera in the video capturing device exposes synchronously at a certain frequency; for each exposure, the following steps are repeated:
First, identify the index points of the target (a target has one or more index points by default), determine the center of each target index point in the image acquired by each camera, determine the three-dimensional spatial position of each index-point center, and compute the center of the target from these centers, denoted DC.
If, in the image acquired by any camera, the number of identified index points is less than the expected number, an occlusion is judged to have occurred (the target center DC may then be unobtainable; the center obtained from the previous exposure frame can be used as the current frame's DC, and the subsequent steps continue).
Compute the three-dimensional positions of all object surfaces in the scene of the current field of view of the video capturing device, obtaining object surfaces represented as a point cloud (Fig. 7); grid the point cloud (Fig. 8) to obtain the geometric boundaries of the objects, then obtain a new camera position and direction as follows:
1) At the current distance from the target center DC, establish a spherical surface or planar region, or an arbitrarily chosen regular three-dimensional region of space; denote this region CS, whose extent is determined by the reach of the movement arm.
2) Obtain several sampling points on CS.
3) Successively place the center of the video capturing device at these sampling points, with its viewing direction set to the vector from the sampling point to the target center DC.
4) At each sampling point, simulate continuous virtual exposure in the computer according to the cameras' internal and external parameters, until a position is found at which every camera captures all index points of the target simultaneously; the process then stops.
If occlusion still remains at the computed direction and position, the above procedure is repeated until an unoccluded position is obtained.
Because cameras differ in shooting range, finding a position and direction that avoids occlusion requires acquiring information about objects around the target over a larger range. Two narrow-angle cameras can be dedicated to acquiring the real-time spatial positions of the target's index points, used for optical positioning and tracking and for occlusion detection, while two additional wide-angle cameras capture the target's surroundings; a new camera position and direction avoiding occlusion is then computed with the algorithm described above, realizing vision-monitored obstacle avoidance. The coordinate-system relationship between the two camera groups must be computed here.
As in Fig. 2, two UR3 mechanical arms from Universal Robots are selected; the UR3 has six degrees of freedom. A camera combination 300 comprising two cameras (e.g., Microsoft LifeCam Studio) is mounted on the end flange of the second movement arm 100, constituting the video capturing device; the images it acquires are transferred to the computer for optical positioning and tracking and for vision-monitored obstacle avoidance. A surgical instrument 400 (e.g., a bone drill) is mounted on the end flange of the first movement arm 200.
Another possible embodiment, shown in Fig. 3, mounts a first camera 500 and a second camera 510 on the end flange of the second movement arm 100, constituting the video capturing device: the first camera 500 is used for optical positioning tracking and the second camera 510 for vision-monitored avoidance. The first camera 500 and second camera 510 may also be installed at 90 degrees to each other. One advantage of this embodiment is that, since optical positioning and vision monitoring have different requirements, cameras of different performance and parameters can be chosen for each. The structural design guarantees the uniqueness of the mounting positions of the first camera 500 and second camera 510 and keeps their shooting directions consistent, so the coordinate transformation between them can be realized by a single translation. If this translation matrix is M, then M converts objects in the field of view between the first camera 500 and the second camera 510. For example, position coordinates or point cloud data obtained by the second camera 510 are converted through M into position coordinates or point cloud data under the first camera 500, unifying the data and facilitating subsequent vision-monitored avoidance.
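The single-translation relationship can be illustrated as follows; a minimal sketch, assuming the two shooting directions are kept parallel so that M reduces to a pure translation vector (the 6 cm baseline is a hypothetical value):

```python
# With parallel shooting directions, camera-510 coordinates map to
# camera-500 coordinates by a single translation M.
M = (0.0, 0.06, 0.0)  # hypothetical 6 cm offset between the two cameras

def to_first_camera(points_510, m=M):
    """Convert points (or a point cloud) from camera-510 to camera-500 coordinates."""
    return [(x + m[0], y + m[1], z + m[2]) for (x, y, z) in points_510]

cloud_510 = [(0.1, 0.2, 1.5), (0.0, -0.1, 1.2)]
cloud_500 = to_first_camera(cloud_510)
```

If the shooting directions were not parallel, M would instead be a full rigid transform (rotation plus translation) rather than a single translation.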
In Fig. 2, the positional relationships (coordinate transformations) between the second movement arm 100 and the video capturing device 300, between the first movement arm 200 and the surgical instrument 400, and between the second movement arm 100 and the first movement arm 200 can be obtained from dimensional measurements made when the mechanical structure is installed. In addition, because the positions of the second movement arm 100 and the first movement arm 200 are completely determined by the angles of their movable joints, these angles can be obtained from the internal parameters reported by each movement arm. Therefore, the positional relationship (i.e., coordinate-system transformation) between the video capturing device 300 and the first movement arm 200 with surgical instrument 400 can be computed in real time, and no distinguishing markers or marker points for optical positioning tracking need to be installed on the surgical instrument 400.
Fig. 9 is a schematic diagram of one embodiment in which the assembly of the invention is applied in surgery. As shown in Fig. 9, when a surgical task is performed, the tracked target 610 is rigidly connected to the operative site 900, so movement of the tracked target 610 reflects movement of the operative site 900. The operative site 900 contains a surgical path 910 (for example, a drilling direction). The conventional way to obtain the surgical path 910 is to perform surgical planning on CT images and mark the path there; several points are then taken in the CT images and corresponding points taken at the operative site, and a registration is performed to unify the CT image coordinate system with the operative-site coordinate system, finally transferring the CT-based surgical path to the actual operative site. The image capturing mechanism 300 obtains the position and posture of the target 610 in real time, and the computer calculates the position and posture of the surgical path in real time, both described in the coordinate system of the image capturing mechanism 300. Through the association between the second movement arm 100 and the first movement arm 200, the computer converts the position and posture of the surgical path 910 from the coordinate system of the image capturing mechanism 300 into the coordinate system of the first movement arm 200, and drives the first movement arm 200 along the planned surgical path to execute the surgical procedure. From the angles of the movable joints of the first movement arm 200, the computer calculates the procedure in real time (for example, drilling depth and direction) and judges whether the direction has drifted and whether the predetermined position has been reached.
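The coordinate conversion described here amounts to composing rigid transforms. A minimal sketch, with hypothetical fixed transforms standing in for the values obtained from installation measurement and joint angles (`T_b2_cam` and `T_b2_b1` are illustrative 4x4 homogeneous matrices, not values from the source):

```python
def mat_mul(a, b):
    """4x4 homogeneous matrix product (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_inv_rigid(t):
    """Inverse of a rigid 4x4 transform: R^T and -R^T p."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]
    p = [t[i][3] for i in range(3)]
    out = [[r[i][j] for j in range(3)] + [-sum(r[i][k] * p[k] for k in range(3))]
           for i in range(3)]
    return out + [[0.0, 0.0, 0.0, 1.0]]

# Hypothetical fixed transforms:
# T_b2_cam : camera 300 pose in the second arm's base frame
# T_b2_b1  : first arm's base frame in the second arm's base frame
T_b2_cam = [[1, 0, 0, 0.2], [0, 1, 0, 0.0], [0, 0, 1, 0.5], [0, 0, 0, 1]]
T_b2_b1 = [[1, 0, 0, 0.8], [0, 1, 0, 0.0], [0, 0, 1, 0.0], [0, 0, 0, 1]]

def path_in_arm1(T_cam_path):
    """Convert the surgical path pose from camera coordinates to first-arm
    coordinates: T_b1_path = inv(T_b2_b1) * T_b2_cam * T_cam_path."""
    return mat_mul(mat_inv_rigid(T_b2_b1), mat_mul(T_b2_cam, T_cam_path))

# Path 910 as observed by the camera: identity rotation, a point 1 m ahead.
T_cam_path = [[1, 0, 0, 0.0], [0, 1, 0, 0.0], [0, 0, 1, 1.0], [0, 0, 0, 1]]
T_b1_path = path_in_arm1(T_cam_path)
```

The same composition, with the transforms updated each cycle from the reported joint angles, gives the real-time conversion described in the text.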
When a non-surgical task is performed (such as the automatic stowing of the movement arms at shutdown), once the first movement arm 200 receives a command and drives the surgical instrument 400 toward a designated position, the computer can obtain in real time the positional relationship of the surgical instrument 400 relative to the video capturing device 300 and the second movement arm 100, and from it calculate a new position and direction for the video capturing device 300 that keep the surgical instrument 400 within the tracking and monitoring range of the video capturing device 300. The video capturing device 300 continuously images the environmental objects around the surgical instrument 400 and obtains point cloud data of the surfaces of background objects in the instrument's environment; the coordinates of these point cloud data are computed relative to the video capturing device 300. The point cloud data are meshed into solid object surfaces (point cloud acquisition and point cloud meshing are mature algorithms, and mature products on the market, such as the Microsoft Kinect, can acquire point cloud data and mesh them in real time). Fig. 4 is a rough schematic view. In the surrounding environment, the first object 310 and the second object 320 are object surfaces obtained by meshing the point cloud data acquired by the video capturing device 300. From the angles of the movable joints of the first movement arm 200 and the fixed relative positional relationship between the first movement arm 200 and the surgical instrument 400, a model of the first movement arm 200 and surgical instrument 400 can be built virtually in the computer and drawn in the coordinate system of the video capturing device 300, yielding the relative positional relationship between the first movement arm 200 and surgical instrument 400 on one hand and the first object 310 and second object 320 on the other. The computer calculates in real time the distances between the first movement arm 200 and surgical instrument 400 and the first object 310 and second object 320. When a distance falls below a set safety value, the movement of the first movement arm 200 is stopped to prevent collision. This distance computation belongs to the field of collision detection, for which corresponding algorithms exist.
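The safety check above reduces to a minimum-distance query between points on the arm/instrument model and the meshed environment surfaces. Production systems use proper collision-detection algorithms, but the principle can be sketched with a brute-force point-to-point test (the 5 cm safety value and all coordinates are hypothetical):

```python
import math

SAFETY_DISTANCE = 0.05  # hypothetical 5 cm safety value

def min_distance(arm_points, obstacle_points):
    """Brute-force minimum distance between points sampled on the arm/instrument
    model and environment surface points (both in the camera 300 frame)."""
    return min(math.dist(a, o) for a in arm_points for o in obstacle_points)

def check_and_stop(arm_points, obstacle_points, safety=SAFETY_DISTANCE):
    """Return True when the first movement arm must stop to prevent collision."""
    return min_distance(arm_points, obstacle_points) < safety

arm = [(0.0, 0.0, 0.4), (0.0, 0.0, 0.5)]        # sampled arm/instrument points
obstacles = [(0.0, 0.2, 0.4), (0.3, 0.0, 0.5)]  # meshed objects 310 and 320
stop = check_and_stop(arm, obstacles)
```

Real collision-detection libraries replace the brute-force loop with spatial data structures (bounding-volume hierarchies, k-d trees) so the check stays real-time for dense point clouds.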
Fig. 5 shows a mechanism 600 rigidly connected to the patient; it includes the target 610 to be tracked. Because the mechanism 600 is rigidly connected to the patient, tracking changes in the position of the mechanism 600 is equivalent to tracking movement of the patient. The target 610 includes 4 marker points 620, and the defined center 630 lies at the center of the marker points 620.
The target is shot with exposures by the video capturing device 300, the spatial position information of each marker point 620 is obtained using the binocular positioning principle mentioned earlier, and the center 630 of the tracked target is calculated.
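With the four marker positions triangulated by binocular positioning, the center 630 follows as the centroid of the marker points. A minimal sketch (the marker coordinates are hypothetical):

```python
def target_center(marker_points):
    """Center 630 of target 610, taken as the centroid of the marker points 620."""
    n = len(marker_points)
    return tuple(sum(p[i] for p in marker_points) / n for i in range(3))

# Four triangulated marker points 620 in the camera frame:
markers = [(0.1, 0.0, 1.0), (-0.1, 0.0, 1.0), (0.0, 0.1, 1.0), (0.0, -0.1, 1.0)]
center = target_center(markers)
```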
All marker points are identified on the image produced by each video camera in the video capturing device 300. When the number of marker points in the image shot by any video camera in the video capturing device 300 is less than 4, occlusion is judged to have occurred.
Using the images obtained by the two cameras in the video capturing device 300, common computer-vision methods such as the SIFT algorithm yield the three-dimensional position information of all object surfaces within the field of view of the video capturing device 300; meshing these spatial positions yields the object entities 710 and 720, as shown in Fig. 6.
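The stereo surface reconstruction rests on triangulation: once a surface point is matched between the two camera images (for example, by SIFT), its depth follows from the disparity. A minimal sketch for an idealized rectified stereo pair (the focal length and baseline are hypothetical values):

```python
def triangulate(u_left, u_right, v, f=800.0, baseline=0.06):
    """3D position of a matched feature from its horizontal disparity in a
    rectified stereo pair; f in pixels, baseline in meters."""
    d = u_left - u_right  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity")
    z = f * baseline / d          # depth
    return (u_left * z / f, v * z / f, z)

# A matched SIFT feature observed at 40 px disparity:
x, y, z = triangulate(120.0, 80.0, 30.0)
```

Repeating this for every matched feature yields the point cloud that is then meshed into the entities 710 and 720.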
These entities are arranged in the computer-managed environment, for example under OpenGL; the scene contains all of the above entities and the tracked target. As in Fig. 6, the entities and positional relationships contained in the scene information 700 arranged in the computer are identical to those of the real world.
In the above computer scene, the positions of the two cameras included in the video capturing device 300 are placed, and a plane 800 is established near the camera-unit location, perpendicular to the line between the location of the video capturing device 300 and the tracked target (i.e., the mechanism 600; see Fig. 6). The distance between this plane and the tracked target lies within the optimal-precision range of the cameras, and the size of the plane is determined by the reach of the second movement arm 100. The plane is first divided equally, for example into 6 equal parts, finding 12 positions. As in Fig. 6, the plane 800 contains 12 representative points 810, and virtual shooting is attempted at these 12 positions in turn. The video capturing device 300 is placed in sequence at the 12 representative-point 810 positions and aimed at the target, and the marker points are identified on the resulting photographs; if the number of marker points identified in the photographs obtained by both video cameras is 4, the position is considered unoccluded. (So-called virtual shooting uses the camera parameters (such as direction and focal length) to simulate the camera's imaging principle in the computer, projecting objects onto the camera imaging plane.)
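The virtual shooting at each representative point 810 can be sketched as a pinhole projection: each marker is projected onto the virtual image plane with the camera parameters, and a representative point is accepted when all 4 markers land on the sensor. A minimal sketch with a hypothetical focal length and sensor size:

```python
def project(point_cam, f=0.004, sensor=(0.0048, 0.0036)):
    """Pinhole projection of a camera-frame point; returns None when the point
    falls behind the camera or outside the virtual sensor."""
    x, y, z = point_cam
    if z <= 0:
        return None
    u, v = f * x / z, f * y / z
    if abs(u) > sensor[0] / 2 or abs(v) > sensor[1] / 2:
        return None
    return (u, v)

def virtual_shot_sees_all(markers_cam):
    """True when one virtual exposure images all marker points of the target."""
    return all(project(m) is not None for m in markers_cam)

# Markers expressed in the virtual camera frame at one representative point 810:
markers = [(0.05, 0.0, 1.0), (-0.05, 0.0, 1.0), (0.0, 0.04, 1.0), (0.0, -0.04, 1.0)]
ok = virtual_shot_sees_all(markers)
```

A full virtual exposure would additionally test each projected marker against the meshed scene geometry for occlusion; the projection step shown here is the core of the simulation.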
The plane can be subdivided and tried repeatedly, and the unoccluded position nearest the video capturing device 300 taken as the target position, reducing the motion of the second movement arm 100 and yielding a faster mechanical response.
After judging in which direction movement can avoid occlusion and sustain continuous tracking of the target, the computer passes the command to the second movement arm 100, which automatically completes the repositioning of the video capturing device.
If the new position is again occluded, the above process is repeated, stopping when an unoccluded position is found. If it still fails after repeated attempts (for example, the tracked target is wrapped by an object), the computer is notified that no position can be found, and the computer reports an error to the user.
The video capturing device 300 can obtain surface point cloud data of the tracked target and of the objects in the field of view. The tracked target is rigidly connected to the patient's operative site; the operative site is identified in these point cloud data, and that portion of the data is surface-registered with the operative-site surface obtained by three-dimensional CT reconstruction. The device provided by the invention facilitates this registration: the point cloud is acquired by camera shooting without contact with the body, guaranteeing that the operative-site surface tissue (such as skin) undergoes no deformation, so the point cloud obtained is more accurate.
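Surface registration between the camera point cloud and the CT reconstruction is typically solved by rigid registration (for example, ICP). As an illustration of its translational step only, a minimal sketch that aligns the centroids of two corresponded point sets (the point values are hypothetical; a full registration also solves for rotation, for example via the Kabsch/SVD step, and iterates correspondences):

```python
def centroid(pts):
    """Centroid of a 3D point set."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def align_translation(camera_cloud, ct_cloud):
    """Translation-only alignment of the camera point cloud onto the CT surface:
    the translational step of a rigid registration such as ICP."""
    cc, ct = centroid(camera_cloud), centroid(ct_cloud)
    t = tuple(ct[i] - cc[i] for i in range(3))
    moved = [tuple(p[i] + t[i] for i in range(3)) for p in camera_cloud]
    return moved, t

cam = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)]  # camera point cloud
ct = [(1.0, 2.0, 3.0), (1.1, 2.0, 3.0), (1.0, 2.1, 3.0)]   # CT surface points
aligned, t = align_translation(cam, ct)
```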

Claims (6)

1. An assembly, characterized by comprising:
a first movement arm comprising several first support arms and at least one first movable joint;
a second movement arm comprising several second support arms and at least one second movable joint;
a workpiece, set at one end of the first movement arm, controlled and moved by the first movement arm;
a video capturing device, set at one end of the second movement arm, controlled and moved by the second movement arm, which performs optical positioning and tracking of a target and identification and avoidance of target occlusion;
wherein the positions, postures and movement velocities of the first movement arm and the second movement arm are respectively quantified, an association between the first movement arm and the second movement arm is established, and cooperative motion is carried out under computer control.
2. The assembly according to claim 1, characterized in that the method of carrying out the cooperative motion is as follows:
the video capturing device obtains the position and posture of the tracked target in real time; the computer knows the position and posture of the workpiece in real time through the rotation angle of each first movable joint on the first movement arm; according to the association between the first movement arm and the second movement arm, the system converts the position and posture of the tracked target in the video capturing device coordinate system into the position and posture in the first movement arm coordinate system, so that the first movement arm obtains the position and posture of the tracked target in the first movement arm coordinate system, and thereby the specific position and direction of the planned operating path, and the first movement arm is driven near the tracked target to complete the task along the planned path;
the computer knows in real time the position information and posture information of the workpiece and of the first movement arm;
at this time, the video capturing device set on the second movement arm adjusts the image-acquisition direction in real time toward the first movement arm and the workpiece, and keeps tracking the first movement arm and the workpiece;
then, a three-dimensional computer scene of the environment surrounding the workpiece and the first movement arm is obtained in time, i.e., the three-dimensional information of surface points of surrounding objects is obtained as a point cloud and meshed;
afterwards, from the position of each first movable joint on the first movement arm and the three-dimensional information of the surface points of surrounding objects, the minimum distance between the first movement arm and workpiece and the surrounding objects is calculated, pre-judging whether the first movement arm or workpiece will collide with surrounding objects;
when a collision is possible, the movement of the first movement arm is stopped.
3. The assembly according to claim 1, characterized in that the method by which the video capturing device performs optical positioning and tracking of the target and identification and avoidance of target occlusion is as follows:
first, the video capturing device is calibrated to obtain the internal parameters and external parameters of each camera;
then, each camera in the video capturing device shoots with synchronized exposures at a set frequency, and for each exposure the following steps are repeated:
firstly, the marker points of the target are identified, a target having 1 or more marker points by default; the center of each marker point of the target is identified and determined in the image acquired by each camera; the three-dimensional spatial position of each marker-point center is also determined, and the center of the target is calculated from these centers;
when the number of identified target marker points in the image acquired by any camera is less than the predetermined quantity, it is determined that occlusion has occurred;
the three-dimensional position information of all object surfaces within the current field of view of the video capturing device is calculated, the object surfaces are represented as point clouds, and these point clouds are meshed to obtain the geometric boundaries of the objects; then a new position and direction of the video capturing device are obtained according to the following steps:
1) a spherical surface or planar region is established at the distance between the video capturing device and the target center, or an arbitrarily chosen regular three-dimensional region is selected, as the search region, the extent of the search region being determined by the reach of the second movement arm;
2) several sampled points are obtained in the search region;
3) the center of the video capturing device is placed at each of these sampled points in turn, and the viewing direction of the video capturing device is set to the vector between the sampled point and the target center;
4) at each sampled point, according to the internal and external parameters of the cameras and with the video capturing device oriented along the viewing direction, virtual continuous-exposure shooting is performed in the computer until an intended position is found at which each camera captures all marker points of the target simultaneously;
finally, the second movement arm moves and rotates the video capturing device to the calculated intended position.
4. The assembly according to claim 3, characterized in that the internal parameters include focal length.
5. The assembly according to claim 3, characterized in that the external parameters include position and direction.
6. The assembly according to claim 3, characterized in that if occlusion still remains at the calculated intended position, the search continues until the two cameras capture all preset marker points of the target.
CN201810051219.2A 2018-01-18 2018-01-18 Automated cooperative work assembly and application thereof in surgical instrument Active CN110051436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810051219.2A CN110051436B (en) 2018-01-18 2018-01-18 Automated cooperative work assembly and application thereof in surgical instrument

Publications (2)

Publication Number Publication Date
CN110051436A true CN110051436A (en) 2019-07-26
CN110051436B CN110051436B (en) 2020-04-17

Family

ID=67315617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810051219.2A Active CN110051436B (en) 2018-01-18 2018-01-18 Automated cooperative work assembly and application thereof in surgical instrument

Country Status (1)

Country Link
CN (1) CN110051436B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020082612A1 (en) * 1998-11-20 2002-06-27 Intuitive Surgical, Inc. Arm cart for telerobotic surgical system
CN1985773A (en) * 2005-12-22 2007-06-27 天津市华志计算机应用技术有限公司 Celebral operating robot system based on optical tracking and closed-loop control and its realizing method
US20090000626A1 (en) * 2002-03-06 2009-01-01 Mako Surgical Corp. Haptic guidance system and method
US20090171373A1 (en) * 2007-06-21 2009-07-02 Farritor Shane M Multifunctional operational component for robotic devices
CN102791214A (en) * 2010-01-08 2012-11-21 皇家飞利浦电子股份有限公司 Uncalibrated visual servoing using real-time velocity optimization
CN105468033A (en) * 2015-12-29 2016-04-06 上海大学 Control method for medical suspension alarm automatic obstacle avoidance based on multi-camera machine vision
CN106456263A (en) * 2014-03-17 2017-02-22 直观外科手术操作公司 Methods and devices for tele-surgical table registration

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110464469A (en) * 2019-09-10 2019-11-19 深圳市精锋医疗科技有限公司 The control method of operating robot and end instrument, control device, storage medium
CN113966205A (en) * 2019-11-19 2022-01-21 康坦手术股份有限公司 Navigation method for positioning medical robot
CN113966205B (en) * 2019-11-19 2024-03-29 康坦手术股份有限公司 Navigation method for positioning medical robot
CN111716352A (en) * 2020-05-13 2020-09-29 中国电力科学研究院有限公司 Power distribution network live working mechanical arm navigation obstacle avoidance method and system
CN113729944A (en) * 2020-05-29 2021-12-03 上海舍成医疗器械有限公司 Automated surgical device using monocular positioning tracker
CN113768623A (en) * 2020-06-09 2021-12-10 上海舍成医疗器械有限公司 Surgical navigation system using monocular positioning tracker
CN111728696A (en) * 2020-06-12 2020-10-02 天津理工大学 Craniotomy robot system and craniotomy method thereof
CN111956329A (en) * 2020-08-12 2020-11-20 中国科学院深圳先进技术研究院 Calibration method, system, terminal and storage medium for double-arm robot
CN111956329B (en) * 2020-08-12 2022-04-26 中国科学院深圳先进技术研究院 Calibration method, system, terminal and storage medium for double-arm robot
WO2022037356A1 (en) * 2020-08-19 2022-02-24 北京术锐技术有限公司 Robot system and control method
CN114074328A (en) * 2020-08-19 2022-02-22 北京术锐技术有限公司 Robot system and control method thereof
WO2022037209A1 (en) * 2020-08-19 2022-02-24 北京术锐技术有限公司 Robot system and control method therefor
CN112704564A (en) * 2020-12-22 2021-04-27 上海微创医疗机器人(集团)股份有限公司 Surgical robot system, collision detection method, system, and readable storage medium
WO2022148336A1 (en) * 2021-01-06 2022-07-14 深圳市精锋医疗科技有限公司 Surgical robot, and surgical arm movement guiding method thereof and control device thereof
CN112746812A (en) * 2021-01-22 2021-05-04 安百拓(南京)建筑矿山设备有限公司 Illumination and camera shooting follow-up system and control method of drill jumbo and drill jumbo
CN113499137A (en) * 2021-07-07 2021-10-15 南开大学 Surgical robot navigation positioning system and measurement visual angle multi-target optimization method
CN113499138A (en) * 2021-07-07 2021-10-15 南开大学 Active navigation system for surgical operation and control method thereof
CN113413216B (en) * 2021-07-30 2022-06-07 武汉大学 Double-arm puncture robot based on ultrasonic image navigation
CN113413216A (en) * 2021-07-30 2021-09-21 武汉大学 Double-arm puncture robot based on ultrasonic image navigation and puncture method
WO2023040897A1 (en) * 2021-09-14 2023-03-23 武汉联影智融医疗科技有限公司 Method and system for calculating spatial registration pose of surgical robot
CN114305613A (en) * 2021-12-30 2022-04-12 武汉联影智融医疗科技有限公司 Image-guided interventional puncture system
CN114305613B (en) * 2021-12-30 2024-01-30 武汉联影智融医疗科技有限公司 Image-guided interventional puncture system

Also Published As

Publication number Publication date
CN110051436B (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN110051436A (en) Automation collaborative work package and its application in surgical instrument
US11272985B2 (en) Patient-specific preoperative planning simulation techniques
CN112472297B (en) Pose monitoring system, pose monitoring method, surgical robot system and storage medium
CN114041875A (en) Integrated operation positioning navigation system
EP3716879A1 (en) Motion compensation platform for image guided percutaneous access to bodily organs and structures
WO2022083372A1 (en) Surgical robot adjustment system and method, medium, and computer device
CN111227935A (en) Surgical robot navigation positioning system
CN103519895A (en) Orthopedic operation auxiliary guide method
CN113197666A (en) Device and system for surgical navigation
CN112533556A (en) System method and computer program product for computer-assisted surgery
US20220079687A1 (en) Robot mounted camera registration and tracking system for orthopedic and neurological surgery
US20240268899A1 (en) System and method of displaying images from imaging devices
Beyl et al. Time-of-flight-assisted Kinect camera-based people detection for intuitive human robot cooperation in the surgical operating room
Meng et al. Development of an intra-operative active navigation system for robot-assisted surgery
CN113208731A (en) Binocular vision system-based hand and eye calibration method for surgical puncture robot
CN114730454A (en) Scene awareness system and method
CN111658142A (en) MR-based focus holographic navigation method and system
CN108577980A (en) A kind of method, system and device ultrasonic cutter head carried out from motion tracking
WO2023214398A1 (en) Robotic arm navigation using virtual bone mount
CN110051433B (en) Method for keeping track of target and application thereof in image-guided surgery
Vogt et al. Light fields for minimal invasive surgery using an endoscope positioning robot
AU2023200152A1 (en) Robot mounted camera registration and tracking system for orthopedic and neurological surgery
US20230139402A1 (en) Systems and methods for registration feature integrity checking
TWM484404U (en) Imaging projection system equipment application
Nicolai et al. A novel 3D camera based supervision system for safe human-robot interaction in the operating room

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant