US20160019716A1 - Computer assisted surgical system with position registration mechanism and method of operation thereof - Google Patents
- Publication number
- US20160019716A1 (application US14/331,541)
- Authority
- US
- United States
- Prior art keywords
- data
- surface image
- current surface
- point cloud
- transform
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Z—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
- G16Z99/00—Subject matter not provided for in other main groups of this subclass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G06F19/3406—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/0068—Geometric image transformation in the plane of the image for image registration, e.g. elastic snapping
-
- G06T3/14—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Definitions
- the present invention provides a method of operation of a computer assisted surgical system including: capturing historic scan data from a three dimensional object; sampling a current surface image from the three dimensional object in a different position; automatically transforming the historic scan data to align with the current surface image for forming a transform data; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.
- the present invention provides a computer assisted surgical system, including: a pre-operation medical scan configured to record historic scan data from a three dimensional object; a position image capture module configured to sample a current surface image from the three dimensional object in a different position; a 3D registration module configured to automatically transform the historic scan data to align with the current surface image for forming a transform data; and a display controller configured to display, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.
- FIG. 1 is a functional block diagram of a computer assisted surgical system with position registration in an embodiment of the present invention.
- FIG. 2 is a functional block diagram of a surgical plan generation mechanism in an embodiment of the present invention.
- FIG. 3 is a functional block diagram of a region of interest capture mechanism in an embodiment of the present invention.
- FIG. 4 is a functional block diagram of an alignment and presentation mechanism in an embodiment of the present invention.
- FIG. 5 is a flow chart of a method of operation of a computer assisted surgical system in a further embodiment of the present invention.
- the term “horizontal” as used herein is defined as a plane parallel to the plane or surface of the three dimensional object, regardless of its orientation.
- the term “vertical” refers to a direction perpendicular to the horizontal as just defined. Terms, such as “above”, “below”, “bottom”, “top”, “side” (as in “sidewall”), “higher”, “lower”, “upper”, “over”, and “under”, are defined with respect to the horizontal plane, as shown in the figures.
- the term “directly on” means that there is direct contact between elements with no intervening elements.
- module can include software, hardware, or a combination thereof in an embodiment of the present invention in accordance with the context in which the term is used.
- the software can be machine code, firmware, embedded code, and application software.
- the hardware can be circuitry, a processor, a computer, an integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, or a combination thereof.
- Image processing can relate to projective imaging and tomographic imaging using imagers.
- the projective imaging employs planar view of an object using a camera and X-ray, as examples.
- the tomographic imaging employs slicing through an object using penetrating waves including sonar, computed tomography (CT) scan, magnetic resonance imaging (MRI), as examples.
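As context only (the patent does not disclose an implementation), the slices produced by a tomographic modality are typically stacked into a volume whose physical voxel size follows from the slice thickness and the in-plane pixel spacing. The function name and parameters below are a hypothetical sketch of that step:

```python
import numpy as np

def stack_slices(slices, slice_thickness_mm, pixel_spacing_mm):
    """Stack 2-D tomographic slices (e.g., CT or MRI) into a 3-D volume.

    Returns the volume plus its physical voxel size (z, y, x) in mm,
    which later stages need to express surface points in real units.
    """
    volume = np.stack(slices, axis=0)  # shape: (num_slices, rows, cols)
    voxel_mm = (slice_thickness_mm, pixel_spacing_mm, pixel_spacing_mm)
    return volume, voxel_mm
```

Later stages can then convert voxel indices into millimetre coordinates by multiplying with the returned voxel size.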
- FIG. 1 therein is shown a functional block diagram of a computer assisted surgical system 100 with position registration in an embodiment of the present invention.
- the functional block diagram of a computer assisted surgical system 100 depicts a pre-operation medical scan 102, such as magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, positron emission tomography (PET) scans, or angiographic data of a three dimensional object 104, such as a surgical patient.
- the pre-operation medical scan 102 can provide historical image data 106 , to a computer 107 , that represents the internal composition of the three dimensional object 104 .
- the historical image data 106 can be used by a physician or medical specialist to formulate a surgical plan 108 that will be executed during a surgical operation performed on the three dimensional object 104 .
- physical models 110, such as organ models, vessel maps, nerve maps, muscle and tendon maps, tissue structures, or combinations thereof, can be used to formulate a surgical strategy with optimum egress paths and safe regions that can accept intrusion by surgical tools during the surgical operation.
- the combination of the surgical plan 108 and the physical models 110 can generate a surgical plan and highlights 109 capable of highlighting areas of the operation that can pose a danger if entered, or safe areas of the planned surgery that can allow access for the surgeon (not shown).
- the historical image data 106 can be conveyed to a surface of interest extract module 112 in order to isolate a historic point cloud 114 that can represent the outer layer of the skin covering the area of the intended access of the surgical plan 108.
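The patent does not specify how the surface of interest extract module 112 isolates the skin surface. One common approach, sketched here in numpy purely as an assumption, is to threshold the scan volume at a skin-like intensity and keep only boundary voxels as the point cloud:

```python
import numpy as np

def extract_surface_points(volume, iso, spacing=(1.0, 1.0, 1.0)):
    """Return an Nx3 array of points (in mm) on the outer surface of `volume`.

    A voxel is a surface voxel when it is at or above `iso` but has at
    least one 6-connected neighbour below `iso`.
    """
    solid = volume >= iso
    interior = solid.copy()
    for axis in range(3):
        # a voxel is interior only if both neighbours along this axis are solid
        interior &= np.roll(solid, 1, axis) & np.roll(solid, -1, axis)
    surface = solid & ~interior
    idx = np.argwhere(surface).astype(float)
    return idx * np.asarray(spacing)  # voxel indices -> physical coordinates
```

For a cube-shaped test volume this returns exactly the shell voxels; a production system would more likely use marching cubes or a similar iso-surface extractor, but the idea is the same.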
- the historical image data can be captured up to several days prior to a scheduled surgical operation represented by the surgical plan 108 .
- the three dimensional object 104 can be in a substantially different position than the position used to capture the historical image data 106 .
- a position image capture module 116, such as a stereo camera, structured light camera, or laser scanner, can provide a detailed surface image of the three dimensional object 104 in a surgery position for the surgical operation.
- the position image capture module 116 can provide a current surface image 118 , of the three dimensional object 104 , to a pre-surgery 3D capture module 120 for analysis.
- the pre-surgery 3D capture module 120 can process the current surface image 118 to remove obstructions, such as hair, surgical masking, sterile dressings, or the like, from the current surface image 118 .
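The obstruction-removal algorithm is not described in the patent; a standard stand-in, shown here as a hypothetical brute-force sketch, is statistical outlier removal, which drops points whose neighbourhoods are unusually sparse (stray returns from hair or dressing edges tend to be isolated):

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than `std_ratio` standard deviations above the cloud-wide mean."""
    d = np.sqrt(((points[:, None] - points[None]) ** 2).sum(-1))
    # column 0 of the sorted rows is the point's distance to itself; skip it
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```

The O(N²) distance matrix is fine for a sketch; a real pipeline would use a spatial index (e.g., a k-d tree) for large clouds.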
- a surface of the three dimensional object 104 can be captured as a current image data 122 .
- the current image data 122 can be coupled to a region of interest extract module 124 for further reduction.
- the region of interest extract module 124 can generate a current point cloud 126 .
- An intended point cloud 128 can be coupled from the surface of interest extract module 112 to a 3D registration module 130 .
- An actual point cloud 132 such as an array of related points that represent the three dimensional topology of the surface of the three dimensional object 104 , can be coupled from the region of interest extract module 124 to the 3D registration module 130 .
- a 3D registration algorithm module 134 can perform a feature by feature alignment of the intended point cloud 128 and the actual point cloud 132 .
- the 3D registration module 130 can manipulate the results of the 3D registration algorithm module 134 based on a transform parameter module 136 .
- the transform parameter module 136 can provide visual cues or highlights when generating a composite image data 138.
- a transform module 140 can be coupled to the composite image data 138 , the surgical plan and highlights 109 , and a historic scan data 142 , such as the data from the pre-operation medical scan 102 to automatically align the historic scan data 142 based on the composite image data 138 .
- the transform module 140 can maintain the positional correlation between the composite image data 138 and the surgical plan 108 based on the historic scan data 142 .
- the transform module 140 can overlay the surgical plan and highlights 109 capable of highlighting areas of the operation that can pose a danger if entered, or safe areas of the planned surgery that can allow access for the surgeon.
- the surgical plan 108 can be formulated by the surgeon analyzing the pre-operation medical scan 102 in preparation for the surgery.
- the transform module 140 can provide continuous updates to the transform data 144 without manual intervention. Since the historic scan data 142 has many layers that are all in positional correlation with the surface layer identified by the surface of interest extract module 112, all of the internal layers can be in positional correlation to the current surface image 118 of the three dimensional object 104.
- the computer assisted surgical system 100 can provide highly accurate positional correlation between historic scan data 142 , the surgical plan 108 , and the current surface image 118 with no manual intervention or markers applied to the three dimensional object 104 .
- the transform data 144 can provide highly accurate positional information with computer generated highlights indicating safe zones and danger zones for every step of the surgical plan 108 .
- the transform data 144 can be coupled to an augmented reality display 146 managed by a display controller 148 .
- the current surface image 118 can be coupled to the augmented reality display 146 for establishing a patient coordinate space in which the transform data 144 can be displayed by the display controller 148.
- a tool tracking module 150 can present tool tracking data 152 to the augmented reality display 146 .
- the tool tracking module 150 can be in position correlation with the current surface image 118 .
- the transform data 144 is also in position correlation with the current surface image 118, which allows the augmented reality display 146 to present the actual position of the surgical tools used to execute the surgical plan 108 in real time. It has been discovered that the computer assisted surgical system 100 can provide positional correlation between the current surface image 118 and the historic scan data 142 having a mean square error less than 2 mm, which represents a significant improvement over prior art marker systems that can induce more than twice the positional error in placing a single marker on the three dimensional object 104.
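The sub-2 mm figure can be expressed as a nearest-neighbour root-mean-square distance between the aligned cloud and the reference cloud. The metric below is an illustrative sketch, not part of the disclosure, and is brute-force, so it suits modest cloud sizes:

```python
import numpy as np

def rms_error_mm(aligned, reference):
    """Root-mean-square distance (mm) from each aligned point to its
    nearest reference point; brute-force all-pairs distances."""
    d2 = ((aligned[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(axis=1).mean()))
```

A registration meeting the stated accuracy would report a value below 2.0 on clouds expressed in millimetres.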
- FIG. 2 therein is shown a functional block diagram of a surgical plan generation mechanism 201 in an embodiment of the present invention.
- the functional block diagram of the surgical plan generation mechanism 201 depicts the pre-operation medical scan 102 having captured the image data of the three dimensional object 104 .
- the pre-operation medical scan 102 can convey the historic scan data 142 to the surgical plan 108.
- a surgeon (not shown) can access the historic scan data 142 and, with the use of the physical models 110, develop a strategy to complete the surgery on the three dimensional object 104, such as a surgical patient.
- the surgical plan 108 can provide extensive details of the requirements of the operation including safe areas, an entry path, the location, shape and size of the object of the operation, and danger zones, which if entered could harm the surgical patient 104 .
- the key to the success of the plan is the absolute positional registration between the position of the three dimensional object 104 during the operation and the historic scan data 142.
- the surgical plan 108 can provide visual cues to the surgeon performing the operation.
- the surgical plan 108 can convey the surgical plan and highlights 109 of FIG. 1 to the surface of interest extract module 112 .
- the surface of interest extract module 112 can use the historical image data 106 to extract the surface of interest to form the historic point cloud 114, which can be assembled as the intended point cloud 128 to define the outer surface of the three dimensional object 104.
- the surgical plan 108 can provide the surgical plan and highlights 109 , including specific coordinates that can be highlighted during the display of the transform data 144 of FIG. 1 on the augmented reality display 146 of FIG. 1 .
- the surgical plan and highlights 109 can identify safe zones and danger zones in the intended point cloud 128 that can assist the surgeon (not shown) who is performing the operation.
- FIG. 3 therein is shown a functional block diagram of a region of interest capture mechanism 301 in an embodiment of the present invention.
- the functional block diagram of the region of interest capture mechanism 301 depicts the position image capture module 116 , such as a stereo image camera, ultra-sonic surface analysis device, structured light or laser surface analysis device, or the like, coupled to the pre-surgery 3D capture module 120 .
- the position image capture module 116 can capture the surface of the three dimensional object 104 in a surgical position, which can be significantly different from the position of the three dimensional object 104 captured by the pre-operation medical scan 102 of FIG. 1 .
- the pre-surgery 3D capture module 120 can process the current surface image 118 provided by the position image capture module 116 .
- a complete surface topology of the three dimensional object 104 can be provided, through the current image data 122 , to the region of interest extract module 124 . It is understood that the current image data 122 includes a visible surface topology of the three dimensional object 104 .
- the region of interest extract module 124 can identify the detail of the surface and can algorithmically remove undesired regions, such as hair, from the surface of the region of interest.
- the current point cloud 126 can represent a detailed surface of the three dimensional object 104 in the operative surgical position.
- the region of interest extract module 124 can produce the actual point cloud 132, such as an array of related points that represent the three dimensional topology of the visible surface of the three dimensional object 104, from the current point cloud 126. It is understood that the actual point cloud 132 can contain a subset of the points contained in the intended point cloud 128 of FIG. 1 because they both originate from the three dimensional object 104, but in different positions.
- the region of interest extract module 124 can generate the actual point cloud 132 as a visible surface topology of the three dimensional object 104 that is a subset of the intended point cloud 128 of the surface of interest extract module 112 of FIG. 1 . It is understood that the position image capture module 116 only monitors the outer surface of the three dimensional object 104 to perform automatic registration and alignment between the intended point cloud 128 and the actual point cloud 132 without additional human intervention. This alignment process can remove the human induced position error that can accompany the use of markers or masks adhered to the surface of the three dimensional object 104 .
- FIG. 4 therein is shown a functional block diagram of an alignment and presentation mechanism 401 in an embodiment of the present invention.
- the functional block diagram of the alignment and presentation mechanism 401 depicts the 3D registration module 130 coupled to the intended point cloud 128 and the actual point cloud 132 .
- the 3D registration module 130 can employ a feature selection module for determining subsets of point clouds, the subsets selected based on key points of a three-dimensional object; a feature matching module, coupled to the feature selection module, for generating matched results based on a matching transformation of the subsets; and a point registration module, coupled to the feature matching module, for refining the matched results based on a refinement transformation to optimally align different data sets of the point clouds for displaying the aligned data sets on a device, wherein the refinement transformation includes a refinement error less than a matching error of the matching transformation.
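The refinement stage above is described only functionally; it is commonly realized as an iterative-closest-point (ICP) loop. The numpy sketch below is an assumption, using brute-force nearest-neighbour matching and a Kabsch rotation solve per iteration:

```python
import numpy as np

def icp_refine(src, dst, iters=30):
    """Rigid ICP refinement: repeatedly match each src point to its nearest
    dst point, then solve the best rotation/translation (Kabsch)."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((cur[:, None] - dst[None]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]          # closest dst point per src point
        mu_c, mu_m = cur.mean(0), match.mean(0)
        H = (cur - mu_c).T @ (match - mu_m)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                      # proper rotation (no reflection)
        t = mu_m - R @ mu_c
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Given a rough feature-based alignment as a starting point, such a loop typically converges to the sub-millimetre regime on clean data.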
- An example embodiment of the 3D registration module 130 can include the 3D registration algorithm module 134, which can implement a feature identification structure that can operate on both the intended point cloud 128 and the actual point cloud 132 to identify similar features.
- the 3D registration algorithm module 134 can also implement a feature matching structure for rough alignment, providing positional alignment to within less than 5 millimeters.
- the 3D registration algorithm module 134 can also implement a registration refinement structure that can improve the positional alignment to less than 2 millimeters without the need for any human intervention to identify portions of the three dimensional object 104.
- the 3D registration module 130 can have the transform parameter module 136 that can determine the three dimensional transformation, such as translation, rotation and scaling, required to align the intended point cloud 128 with the actual point cloud 132 .
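For corresponding point sets, the translation, rotation, and scaling that best align two clouds have a closed-form least-squares solution (Umeyama's method). The patent does not commit to a particular solver, so the following is an illustrative sketch only:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    such that dst ≈ s * R @ src_i + t for corresponding rows (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)                      # cross-covariance dst vs. src
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.array([1.0, 1.0, d])                 # reflection guard
    R = U @ np.diag(D) @ Vt
    var_src = (A ** 2).sum() / len(src)
    s = (S * D).sum() / var_src
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```

Dropping the scale term (forcing s = 1) reduces this to the rigid Kabsch solution, which is usually preferred for patient registration since anatomy does not rescale between scans.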
- the composite image data 138 can include the transformation information that is required to position the historic scan data 142 in the proper alignment to coincide with the actual point cloud 132 and reflect the actual position of the three dimensional object 104 , such as the surgical patient.
- the historic scan data 142 can be collected, by the pre-operation medical scan 102, from the three dimensional object 104 at a time prior to the capture of the current surface image 118 by the position image capture module 116. It is further understood that the difference in position between the historic scan data 142 and the current surface image 118 can be significant.
- the computer assisted surgical system 100 of FIG. 1 can resolve the difference in position without manual intervention by any medical staff and without external markers applied to the three dimensional object 104 either during the pre-operation medical scan 102 or during the capture of the current surface image 118 .
- the composite image data 138 is coupled to the transform module 140 .
- the transform module 140 can apply the positional transformation, such as translation, rotation and scaling, from the composite image data 138 and the surgical plan and highlights 109 to the historic scan data 142 provided by the pre-operation medical scan 102 .
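The scale, rotation, and translation applied by the transform module 140 can be packed into a single 4x4 homogeneous matrix, so one matrix multiply repositions every point of the historic scan. The helper names below are hypothetical:

```python
import numpy as np

def to_homogeneous(scale, R, t):
    """Pack scale, a 3x3 rotation R, and a translation t into one 4x4 matrix."""
    M = np.eye(4)
    M[:3, :3] = scale * R
    M[:3, 3] = t
    return M

def transform_points(M, points):
    """Apply a 4x4 homogeneous transform to an Nx3 array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ M.T)[:, :3]
```

Because every layer of the historic scan shares one coordinate frame, applying this single matrix keeps all internal structures in positional correlation with the surface.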
- the transform module 140 can complete the merge of the highlighted information from the surgical plan 108 with the properly oriented version of the historic scan data 142 in order to provide the transform data 144 that is coupled to the augmented reality display 146 .
- the display controller 148 can receive the current surface image 118 , the transform data 144 , and the tool tracking data 152 to form a composite display in the augmented reality display 146 .
- the positional conformity of the transform data 144 to the current surface image 118 allows the display controller 148 to overlay the data with minimal resources.
- the tool tracking data 152 can be calibrated, through the position image capture module 116 and the tool tracking module 150 , by the surgical staff prior to the initiation of the surgical plan 108 .
- a surgeon (not shown) can supervise the execution of the surgical plan 108 , or the surgeon can articulate the tools with computer assistance in order to execute the surgical plan with visual aids provided through the transform data 144 .
- the execution of the surgical plan 108 can be completely performed by a computer in a remote location from the surgeon with minimal risk to the three dimensional object 104 , such as the surgical patient. It has been discovered that an embodiment of the computer assisted surgical system 100 can be used to provide intricate surgical procedures to remote locations of the world with only a rudimentary surgical team in the area, while the surgeon can manage the operation from a location on the opposite side of the planet.
- the method 500 includes: capturing historic scan data from a three dimensional object in a block 502; sampling a current surface image from the three dimensional object in a different position in a block 504; automatically transforming the historic scan data to align with the current surface image for forming a transform data in a block 506; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention in a block 508.
- the resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization.
- Another important aspect of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance.
Abstract
A computer assisted surgical system and method of operation thereof includes: capturing historic scan data from a three dimensional object; sampling a current surface image from the three dimensional object in a different position; automatically transforming the historic scan data to align with the current surface image for forming a transform data; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.
Description
- The present application contains subject matter related to U.S. patent application Ser. No. 14/202,677 filed Mar. 10, 2014, and the subject matter thereof is incorporated herein by reference thereto.
- The present invention relates generally to a computer assisted surgical system, and more particularly to a system for establishing the reference position with pre-surgery medical data.
- Image-based surgical navigation systems display the positions of surgical tools with respect to preoperative (prior to surgery) or intraoperative (during surgery) image data sets. Two and three dimensional image data sets are used, as well as time-variant image data, such as multiple data sets taken at different times. Types of data sets that are primarily used include two-dimensional fluoroscopic images, while three-dimensional data sets include magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, positron emission tomography (PET) scans, and angiographic data. Intraoperative images are typically fluoroscopic, as a C-arm fluoroscope is relatively easily positioned with respect to the patient and does not require that a patient be moved. Other types of imaging modalities require extensive patient movement and thus are typically used only for preoperative and post-operative imaging, but may still be used intra-operatively.
- The most popular surgical navigation systems make use of a tracking or localizing system to track tools, instruments, and patients during surgery. These systems identify a predefined coordinate space via uniquely recognizable markers that are manually attached or affixed to, or possibly inherently a part of, an object such as an instrument or a mask. Markers can take several forms, including those that can be manually located using optical (or visual), electromagnetic, radio, or acoustic methods. Furthermore, at least in the case of optical or visual systems, location of the marker's position may be based on intrinsic features or landmarks that, in effect, function as recognizable marker sites, while the actual marker is positioned manually by a person. Markers will have a known geometrical arrangement with respect to, typically, an end point and/or axis of the instrument. Thus, objects can be recognized at least in part from the geometry of the markers (assuming that the geometry is unique), and the orientation of the axis and location of the endpoint within a frame of reference deduced from the positions of the markers. Any error in the position of the markers represents a reduction in the safety margin of the operation, where fractions of a millimeter can be critical.
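The endpoint deduction described above — recovering a tool's tip from the known geometrical arrangement of its markers — can be pictured as a least-squares rigid fit of the known marker geometry onto the tracked marker positions. The sketch below is a hypothetical illustration (the marker layout, tip offset, and function name are assumptions, not taken from the disclosure):

```python
import numpy as np

def tool_tip_from_markers(markers_world, markers_model, tip_model):
    """Fit the rigid pose (Kabsch algorithm) that maps the known marker
    geometry in the instrument's own frame onto the tracked world
    positions, then return the tip position in world coordinates."""
    mw, mm = markers_world.mean(axis=0), markers_model.mean(axis=0)
    H = (markers_model - mm).T @ (markers_world - mw)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation, reflection-safe
    t = mw - R @ mm                            # translation
    return R @ tip_model + t

# Hypothetical instrument: three markers in the tool frame, with the
# tip 10 units along -z from the marker origin.
model = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
tip_model = np.array([0.0, 0.0, -10.0])
# Tracked world positions: the tool translated by (5, 5, 5), unrotated.
world = model + np.array([5.0, 5.0, 5.0])
tip_world = tool_tip_from_markers(world, model, tip_model)  # → [5, 5, -5]
```

Any tracking noise in `markers_world` propagates through this fit into `tip_world`, which is the error amplification the paragraph above warns about.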
- Thus, a need still remains for a computer assisted surgical system that can provide position registration without the position error induced by the manual positioning of markers. In view of the increased popularity of computer assisted surgery, the ever-increasing commercial competitive pressures, growing consumer expectations, and the diminishing opportunities for meaningful product differentiation in the marketplace, it is increasingly critical that answers be found to these problems. Additionally, the need to reduce costs, improve efficiencies and performance, and meet competitive pressures adds an even greater urgency to the critical necessity for finding answers to these problems.
- Solutions to these problems have been long sought but prior developments have not taught or suggested any solutions and, thus, solutions to these problems have long eluded those skilled in the art.
- The present invention provides a method of operation of a computer assisted surgical system including: capturing historic scan data from a three dimensional object; sampling a current surface image from the three dimensional object in a different position; automatically transforming the historic scan data to align with the current surface image for forming a transform data; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.
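The automatic alignment step summarized above can be realized, for example, with an iterative-closest-point (ICP) style registration between the two point clouds. The sketch below is a minimal numpy illustration under stated assumptions — small initial misalignment, brute-force nearest-neighbor search, and hypothetical function names — and is not the claimed implementation:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rotation R and translation t with
    dst ≈ src @ R.T + t, for paired (N, 3) point sets."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def icp(intended, actual, iters=20):
    """Align the intended cloud onto the actual cloud by alternating
    nearest-neighbor matching with a rigid least-squares fit."""
    src = intended.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest neighbors -- adequate for small clouds.
        d2 = ((src[:, None, :] - actual[None, :, :]) ** 2).sum(axis=-1)
        matched = actual[d2.argmin(axis=1)]
        R, t = kabsch(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Demo: a centered 4x4x4 grid "intended" cloud, and an "actual" cloud
# that is the same surface rotated 5 degrees and shifted 0.2 units.
g = np.arange(4) - 1.5
intended = np.array([[x, y, z] for x in g for y in g for z in g])
ang = np.deg2rad(5.0)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0,          0.0,         1.0]])
actual = intended @ Rz.T + np.array([0.2, 0.0, 0.0])
R, t = icp(intended, actual)
aligned = intended @ R.T + t   # should coincide with the actual cloud
```

Because the same surface is observed in both clouds, the recovered rigid transform carries every point of the historic surface — and, by positional correlation, the layers beneath it — into the current patient coordinate space.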
- The present invention provides a computer assisted surgical system, including: a pre-operation medical scan configured to record historic scan data from a three dimensional object; a position image capture module configured to sample a current surface image from the three dimensional object in a different position; a 3D registration module configured to automatically transform the historic scan data to align with the current surface image for forming a transform data; and a display controller configured to display, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.
- Certain embodiments of the invention have other steps or elements in addition to or in place of those mentioned above. The steps or elements will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.
-
FIG. 1 is a functional block diagram of a computer assisted surgical system with position registration in an embodiment of the present invention. -
FIG. 2 is a functional block diagram of a surgical plan generation mechanism in an embodiment of the present invention. -
FIG. 3 is a functional block diagram of a region of interest capture mechanism in an embodiment of the present invention. -
FIG. 4 is a functional block diagram of an alignment and presentation mechanism in an embodiment of the present invention. -
FIG. 5 is a flow chart of a method of operation of a computer assisted surgical system in a further embodiment of the present invention. - The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the invention. It is to be understood that other embodiments would be evident based on the present disclosure, and that system, process, or mechanical changes may be made without departing from the scope of the present invention.
- In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details. In order to avoid obscuring the present invention, some well-known circuits, system configurations, and process steps are not disclosed in detail.
- The drawings showing embodiments of the system are semi-diagrammatic and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing FIGS. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the FIGS. is arbitrary for the most part. Generally, the invention can be operated in any orientation.
- Where multiple embodiments are disclosed and described having some features in common, for clarity and ease of illustration, description, and comprehension thereof, similar and like features one to another will ordinarily be described with similar reference numerals. For expository purposes, the term “horizontal” as used herein is defined as a plane parallel to the plane or surface of the three dimensional object, regardless of its orientation. The term “vertical” refers to a direction perpendicular to the horizontal as just defined. Terms, such as “above”, “below”, “bottom”, “top”, “side” (as in “sidewall”), “higher”, “lower”, “upper”, “over”, and “under”, are defined with respect to the horizontal plane, as shown in the figures. The term “directly on” means that there is direct contact between elements with no intervening elements.
- The term “module” referred to herein can include software, hardware, or a combination thereof in an embodiment of the present invention in accordance with the context in which the term is used. For example, the software can be machine code, firmware, embedded code, and application software. Also for example, the hardware can be circuitry, a processor, a computer, an integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, or a combination thereof.
- Image processing can relate to projective imaging and tomographic imaging using imagers. The projective imaging employs a planar view of an object using, as examples, a camera or an X-ray. The tomographic imaging employs slicing through an object using penetrating waves, with sonar, computed tomography (CT) scans, and magnetic resonance imaging (MRI) as examples.
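The two modalities can be contrasted with a toy volume: a projective image integrates along rays through the object, while a tomographic slice samples a single plane. A minimal numpy illustration (idealized parallel rays are assumed; this is not a physical model):

```python
import numpy as np

# Toy 3D "attenuation" volume: a dense cube inside an empty field.
volume = np.zeros((8, 8, 8))
volume[2:6, 2:6, 2:6] = 1.0

# Projective imaging: integrate along one axis, an idealized
# parallel-ray X-ray projection onto a plane.
projection = volume.sum(axis=0)   # shape (8, 8)

# Tomographic imaging: take a single cross-sectional slice, as a
# CT or MRI reconstruction would provide.
slice_k = volume[4]               # shape (8, 8)
```

The projection superimposes everything along each ray, whereas the slice isolates one depth — which is why tomographic data can supply the layered internal detail used for surgical planning.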
- Referring now to FIG. 1, therein is shown a functional block diagram of a computer assisted surgical system 100 with position registration in an embodiment of the present invention. The functional block diagram of the computer assisted surgical system 100 depicts a pre-operation medical scan 102, such as magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, positron emission tomography (PET) scans, or angiographic data of a three dimensional object 104, such as a surgical patient. - The pre-operation
medical scan 102 can provide historical image data 106, to a computer 107, that represents the internal composition of the three dimensional object 104. The historical image data 106 can be used by a physician or medical specialist to formulate a surgical plan 108 that will be executed during a surgical operation performed on the three dimensional object 104. During the formulation of the surgical plan 108, physical models 110, such as organ models, vessel maps, nerve maps, muscle and tendon maps, tissue structures, or combinations thereof, can be used to formulate a surgical strategy with optimum egress paths and safe regions that can accept intrusion by surgical tools during the surgical operation. The combination of the surgical plan 108 and the physical models 110 can generate a surgical plan and highlights 109 capable of highlighting areas of the operation that can pose a danger if entered, or safe areas of the planned surgery that can allow access for the surgeon (not shown). - The
historical image data 106 can be conveyed to a surface of interest extract module 112 in order to isolate a historic point cloud 114 that can represent the outer layer of the skin covering the area of the intended access of the surgical plan 108. The historical image data 106 can be captured up to several days prior to a scheduled surgical operation represented by the surgical plan 108. - At the scheduled time of the surgical operation, the three
dimensional object 104 can be in a substantially different position than the position used to capture the historical image data 106. A position image capture module 116, such as a stereo camera, structured light camera, or laser scanner, can provide a detailed surface image of the three dimensional object 104 in a surgery position for the surgical operation. The position image capture module 116 can provide a current surface image 118, of the three dimensional object 104, to a pre-surgery 3D capture module 120 for analysis. The pre-surgery 3D capture module 120 can process the current surface image 118 to remove obstructions, such as hair, surgical masking, sterile dressings, or the like, from the current surface image 118. A surface of the three dimensional object 104 can be captured as a current image data 122. - The
current image data 122 can be coupled to a region of interest extract module 124 for further reduction. The region of interest extract module 124 can generate a current point cloud 126. - An intended
point cloud 128 can be coupled from the surface of interest extract module 112 to a 3D registration module 130. An actual point cloud 132, such as an array of related points that represent the three dimensional topology of the surface of the three dimensional object 104, can be coupled from the region of interest extract module 124 to the 3D registration module 130. A 3D registration algorithm module 134 can perform a feature by feature alignment of the intended point cloud 128 and the actual point cloud 132. The 3D registration module 130 can manipulate the results of the 3D registration algorithm module 134 based on a transform parameter module 136. The transform parameter module 136 can provide visual cues or highlights when generating a composite image data 138. - A
transform module 140 can be coupled to the composite image data 138, the surgical plan and highlights 109, and a historic scan data 142, such as the data from the pre-operation medical scan 102, to automatically align the historic scan data 142 based on the composite image data 138. The transform module 140 can maintain the positional correlation between the composite image data 138 and the surgical plan 108 based on the historic scan data 142. The transform module 140 can overlay the surgical plan and highlights 109 capable of highlighting areas of the operation that can pose a danger if entered, or safe areas of the planned surgery that can allow access for the surgeon. The surgical plan 108 can be formulated by the surgeon analyzing the pre-operation medical scan 102 in preparation for the surgery. - The
transform module 140 can provide continuous updates to a transform data 144 without manual intervention. Since the historic scan data 142 has many layers that are all in positional correlation with the surface layer identified by the surface of interest extract module 112, all of the internal layers can be in positional correlation to the current surface image 118 of the three dimensional object 104. - It has been discovered that the computer assisted
surgical system 100 can provide highly accurate positional correlation between the historic scan data 142, the surgical plan 108, and the current surface image 118 with no manual intervention or markers applied to the three dimensional object 104. The transform data 144 can provide highly accurate positional information with computer generated highlights indicating safe zones and danger zones for every step of the surgical plan 108. - The
transform data 144 can be coupled to an augmented reality display 146 managed by a display controller 148. The current surface image 118 can be coupled to the augmented reality display 146 for establishing a patient coordinate space in which the transform data 144 can be displayed by the display controller 148. - A
tool tracking module 150 can present tool tracking data 152 to the augmented reality display 146. The tool tracking module 150 can be in position correlation with the current surface image 118. The transform data 144 is also in position correlation with the current surface image 118, which allows the augmented reality display 146 to present the actual position of the surgical tools used to execute the surgical plan 108 in real time. It has been discovered that the computer assisted surgical system 100 can provide positional correlation between the current surface image 118 and the historic scan data 142 having a mean square error less than 2 mm, which represents a significant improvement over prior art marker systems that can induce more than twice the positional error in placing a single marker on the three dimensional object 104. - Referring now to
FIG. 2, therein is shown a functional block diagram of a surgical plan generation mechanism 201 in an embodiment of the present invention. The functional block diagram of the surgical plan generation mechanism 201 depicts the pre-operation medical scan 102 having captured the image data of the three dimensional object 104. The pre-operation medical scan 102 can convey the historic scan data 142 to the surgical plan 108. A surgeon (not shown) can access the historic scan data 142 with the use of the physical models 110 in order to develop a strategy to complete the surgery on the three dimensional object 104, such as a surgical patient. - The
surgical plan 108 can provide extensive details of the requirements of the operation, including safe areas, an entry path, the location, shape, and size of the object of the operation, and danger zones, which if entered could harm the surgical patient 104. The key to the success of the plan is the absolute position registration between the position of the three dimensional object 104 during the operation and the historic scan data 142. The surgical plan 108 can provide visual cues to the surgeon performing the operation. The surgical plan 108 can convey the surgical plan and highlights 109 of FIG. 1 to the surface of interest extract module 112. - The surface of
interest extract module 112 can use the historical image data 106 to extract the surface of interest to form the historic point cloud 114, which can be assembled as the intended point cloud 128 to define the outer surface of the three dimensional object 104. - It has been discovered that the
surgical plan 108 can provide the surgical plan and highlights 109, including specific coordinates that can be highlighted during the display of the transform data 144 of FIG. 1 on the augmented reality display 146 of FIG. 1. The surgical plan and highlights 109 can identify safe zones and danger zones in the intended point cloud 128 that can assist the surgeon (not shown) who is performing the operation. - Referring now to
FIG. 3, therein is shown a functional block diagram of a region of interest capture mechanism 301 in an embodiment of the present invention. The functional block diagram of the region of interest capture mechanism 301 depicts the position image capture module 116, such as a stereo image camera, ultra-sonic surface analysis device, structured light or laser surface analysis device, or the like, coupled to the pre-surgery 3D capture module 120. - The position
image capture module 116 can capture the surface of the three dimensional object 104 in a surgical position, which can be significantly different from the position of the three dimensional object 104 captured by the pre-operation medical scan 102 of FIG. 1. The pre-surgery 3D capture module 120 can process the current surface image 118 provided by the position image capture module 116. A complete surface topology of the three dimensional object 104 can be provided, through the current image data 122, to the region of interest extract module 124. It is understood that the current image data 122 includes a visible surface topology of the three dimensional object 104. The region of interest extract module 124 can identify the detail of the surface and can algorithmically remove undesired regions, such as hair, from the surface of the region of interest. - The
current point cloud 126 can represent a detailed surface of the three dimensional object 104 in the operative surgical position. The region of interest extract module 124 can produce the actual point cloud 132, such as an array of related points that represent the three dimensional topology of the visible surface of the three dimensional object 104, from the current point cloud 126. It is understood that the actual point cloud 132 can contain a subset of the points contained in the intended point cloud 128 of FIG. 1 because they both originate with the three dimensional object 104, but in different positions. - It has been discovered that the region of
interest extract module 124 can generate the actual point cloud 132 as a visible surface topology of the three dimensional object 104 that is a subset of the intended point cloud 128 of the surface of interest extract module 112 of FIG. 1. It is understood that the position image capture module 116 only monitors the outer surface of the three dimensional object 104 to perform automatic registration and alignment between the intended point cloud 128 and the actual point cloud 132 without additional human intervention. This alignment process can remove the human induced position error that can accompany the use of markers or masks adhered to the surface of the three dimensional object 104. - Referring now to
FIG. 4, therein is shown a functional block diagram of an alignment and presentation mechanism 401 in an embodiment of the present invention. The functional block diagram of the alignment and presentation mechanism 401 depicts the 3D registration module 130 coupled to the intended point cloud 128 and the actual point cloud 132. The 3D registration module 130 can employ a feature selection module for determining subsets of point clouds, the subsets selected based on key points of a three-dimensional object; a feature matching module, coupled to the feature selection module, for generating matched results based on a matching transformation of the subsets; and a point registration module, coupled to the feature matching module, for refining the matched results based on a refinement transformation to optimally align different data sets of the point clouds for displaying the aligned data sets on a device, wherein the refinement transformation includes a refinement error less than a matching error of the matching transformation. - An example embodiment of the
3D registration module 130 can include a three dimensional registration alignment module 134, which can implement a feature identification structure that can operate on both the intended point cloud 128 and the actual point cloud 132 to identify similar features. The three dimensional registration alignment module 134 can also implement a feature matching structure for rough alignment, providing positional alignment to within less than 5 millimeters. The three dimensional registration alignment module 134 can also implement a registration refinement structure that can improve the positional alignment to less than 2 millimeters without the need for any human intervention to identify portions of the three dimensional object 104. - The
3D registration module 130 can have the transform parameter module 136 that can determine the three dimensional transformation, such as translation, rotation, and scaling, required to align the intended point cloud 128 with the actual point cloud 132. The composite image data 138 can include the transformation information that is required to position the historic scan data 142 in the proper alignment to coincide with the actual point cloud 132 and reflect the actual position of the three dimensional object 104, such as the surgical patient. - It is understood that the
historic scan data 142 can be collected, by the pre-operation medical scan 102, from the three dimensional object 104 at a time prior to the capture of the current surface image 118 by the position image capture module 116. It is further understood that the difference in position between the historic scan data 142 and the current surface image 118 can be significant. The computer assisted surgical system 100 of FIG. 1 can resolve the difference in position without manual intervention by any medical staff and without external markers applied to the three dimensional object 104, either during the pre-operation medical scan 102 or during the capture of the current surface image 118. - The
composite image data 138 is coupled to the transform module 140. The transform module 140 can apply the positional transformation, such as translation, rotation, and scaling, from the composite image data 138 and the surgical plan and highlights 109 to the historic scan data 142 provided by the pre-operation medical scan 102. The transform module 140 can complete the merge of the highlighted information from the surgical plan 108 with the properly oriented version of the historic scan data 142 in order to provide the transform data 144 that is coupled to the augmented reality display 146. - The
display controller 148 can receive the current surface image 118, the transform data 144, and the tool tracking data 152 to form a composite display in the augmented reality display 146. The positional conformity of the transform data 144 to the current surface image 118 allows the display controller 148 to overlay the data with minimal resources. The tool tracking data 152 can be calibrated, through the position image capture module 116 and the tool tracking module 150, by the surgical staff prior to the initiation of the surgical plan 108. A surgeon (not shown) can supervise the execution of the surgical plan 108, or the surgeon can articulate the tools with computer assistance in order to execute the surgical plan 108 with visual aids provided through the transform data 144. - It is understood that the execution of the
surgical plan 108 can be completely performed by a computer in a location remote from the surgeon with minimal risk to the three dimensional object 104, such as the surgical patient. It has been discovered that an embodiment of the computer assisted surgical system 100 can be used to provide intricate surgical procedures in remote locations of the world with only a rudimentary surgical team in the area, while the surgeon can manage the operation from a location on the opposite side of the planet. - Referring now to
FIG. 5, therein is shown a flow chart of a method 500 of operation of a computer assisted surgical system 100 in a further embodiment of the present invention. The method 500 includes: capturing historic scan data from a three dimensional object in a block 502; sampling a current surface image from the three dimensional object in a different position in a block 504; automatically transforming the historic scan data to align with the current surface image for forming a transform data in a block 506; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention in a block 508. - The resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization.
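The transform parameters described with FIG. 4 — the translation, rotation, and scaling that carry the intended point cloud onto the actual point cloud — can be estimated in closed form once point correspondences are available. The sketch below uses Umeyama's least-squares similarity fit as one possible realization; it is an illustration under assumed correspondences and hypothetical names, not the disclosed algorithm:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, and translation t such that
    dst ≈ s * (src @ R.T) + t (Umeyama's closed-form method) for
    corresponding (N, 3) point sets."""
    n = len(src)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - sc, dst - dc
    H = X.T @ Y / n
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guards against reflections
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / ((X ** 2).sum() / n)
    return s, R, dc - s * (R @ sc)

# Demo: recover a known scale-2, 30-degree, (1, 2, 3)-shift transform.
src = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [0.0, 0, 1]])
ang = np.deg2rad(30.0)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0,          0.0,         1.0]])
dst = 2.0 * (src @ Rz.T) + np.array([1.0, 2.0, 3.0])
s, R, t = similarity_transform(src, dst)
```

The same `s`, `R`, `t` applied to every layer of the historic scan keeps the internal anatomy in positional correlation with the current surface, which is what the overlay in block 508 relies on.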
- Another important aspect of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance.
- These and other valuable aspects of the present invention consequently further the state of the technology to at least the next level.
- While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters hitherto set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.
Claims (20)
1. A method of operation of a computer assisted surgical system comprising:
capturing historic scan data from a three dimensional object;
sampling a current surface image from the three dimensional object in a different position;
automatically transforming the historic scan data to align with the current surface image for forming a transform data; and
displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.
2. The method as claimed in claim 1 further comprising extracting an actual point cloud from the current surface image.
3. The method as claimed in claim 1 further comprising generating a composite image data from the historic scan data and the current surface image.
4. The method as claimed in claim 1 further comprising establishing a surgical plan for highlighting the transform data in the augmented reality display.
5. The method as claimed in claim 1 wherein forming the transform data includes aligning an intended point cloud with an actual point cloud.
6. A method of operation of a computer assisted surgical system comprising:
capturing historic scan data from a three dimensional object includes scanning with a pre-operation medical scan;
sampling a current surface image from the three dimensional object in a different position includes viewing the three dimensional object in a surgery position;
automatically transforming the historic scan data to align with the current surface image for forming a transform data; and
displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention including highlighting the transform data by a surgical plan.
7. The method as claimed in claim 6 further comprising extracting an actual point cloud from the current surface image.
8. The method as claimed in claim 6 further comprising generating a composite image data from the historic scan data and the current surface image.
9. The method as claimed in claim 6 further comprising establishing the surgical plan for highlighting the transform data in the augmented reality display.
10. The method as claimed in claim 6 wherein forming the transform data includes aligning an intended point cloud with an actual point cloud.
11. A computer assisted surgical system comprising:
a pre-operation medical scan configured to record historic scan data from a three dimensional object;
a position image capture module configured to sample a current surface image from the three dimensional object in a different position;
a 3D registration module configured to automatically transform the historic scan data to align with the current surface image for forming a transform data; and
a display controller configured to display, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.
12. The system as claimed in claim 11 further comprising a region of interest extract module configured to extract an actual point cloud from the current surface image.
13. The system as claimed in claim 11 wherein the 3D registration module is further configured to generate a composite image data from the historic scan data and the current surface image.
14. The system as claimed in claim 11 wherein the pre-operation medical scan is further configured to establish a surgical plan for highlighting the transform data in the augmented reality display.
15. The system as claimed in claim 11 further comprising a transform module configured to form the transform data includes an intended point cloud automatically aligned with an actual point cloud.
16. The system as claimed in claim 11 further comprising:
a pre-surgery 3D capture module configured to sample the current surface image from the three dimensional object in a different position includes the three dimensional object viewed in a surgery position;
a 3D registration algorithm module configured to automatically transform the historic scan data to align with the current surface image for forming the transform data; and
the display controller configured to display, on the augmented reality display, the current surface image overlaid by the transform data with no manual intervention includes the transform data highlighted by a surgical plan.
17. The system as claimed in claim 16 further comprising a region of interest extract module configured to extract an actual point cloud from the current surface image.
18. The system as claimed in claim 16 wherein the 3D registration module is further configured to generate a composite image data from the historic scan data and the current surface image includes a three dimensional registration alignment module for aligning an intended point cloud and an actual point cloud.
19. The system as claimed in claim 16 wherein the pre-operation medical scan is further configured to establish the surgical plan for highlighting the transform data in the augmented reality display.
20. The system as claimed in claim 16 further comprising a transform module configured to form the transform data includes an intended point cloud automatically aligned with an actual point cloud.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/331,541 US20160019716A1 (en) | 2014-03-10 | 2014-07-15 | Computer assisted surgical system with position registration mechanism and method of operation thereof |
EP15822889.0A EP3151736A2 (en) | 2014-07-15 | 2015-07-01 | Computer assisted surgical system with position registration mechanism and method of operation thereof |
PCT/US2015/038838 WO2016010737A2 (en) | 2014-07-15 | 2015-07-01 | Computer assisted surgical system with position registration mechanism and method of operation thereof |
CN201580036901.9A CN106470596A (en) | 2014-07-15 | 2015-07-01 | There is review of computer aided surgery system and its operational approach of position registration mechanism |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/202,677 US9280825B2 (en) | 2014-03-10 | 2014-03-10 | Image processing system with registration mechanism and method of operation thereof |
US14/331,541 US20160019716A1 (en) | 2014-03-10 | 2014-07-15 | Computer assisted surgical system with position registration mechanism and method of operation thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160019716A1 true US20160019716A1 (en) | 2016-01-21 |
Family
ID=54017860
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/202,677 Active 2034-03-17 US9280825B2 (en) | 2014-03-10 | 2014-03-10 | Image processing system with registration mechanism and method of operation thereof |
US14/331,541 Abandoned US20160019716A1 (en) | 2014-03-10 | 2014-07-15 | Computer assisted surgical system with position registration mechanism and method of operation thereof |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/202,677 Active 2034-03-17 US9280825B2 (en) | 2014-03-10 | 2014-03-10 | Image processing system with registration mechanism and method of operation thereof |
Country Status (1)
Country | Link |
---|---|
US (2) | US9280825B2 (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180315181A1 (en) * | 2017-04-27 | 2018-11-01 | Siemens Healthcare Gmbh | Deformable registration of magnetic resonance and ultrasound images using biomechanical models |
WO2019060276A1 (en) * | 2017-09-21 | 2019-03-28 | Becton, Dickinson And Company | Augmented reality devices for hazardous contaminant testing |
US10499997B2 (en) | 2017-01-03 | 2019-12-10 | Mako Surgical Corp. | Systems and methods for surgical navigation |
US10593052B2 (en) | 2017-08-23 | 2020-03-17 | Synaptive Medical (Barbados) Inc. | Methods and systems for updating an existing landmark registration |
US10650594B2 (en) | 2015-02-03 | 2020-05-12 | Globus Medical Inc. | Surgeon head-mounted display apparatuses |
US10646283B2 (en) | 2018-02-19 | 2020-05-12 | Globus Medical Inc. | Augmented reality navigation systems for use with robotic surgical systems and methods of their use |
US10699448B2 (en) * | 2017-06-29 | 2020-06-30 | Covidien Lp | System and method for identifying, marking and navigating to a target using real time two dimensional fluoroscopic data |
US11002642B2 (en) | 2017-09-21 | 2021-05-11 | Becton, Dickinson And Company | Demarcation template for hazardous contaminant testing |
US11153555B1 (en) | 2020-05-08 | 2021-10-19 | Globus Medical Inc. | Extended reality headset camera system for computer assisted navigation in surgery |
US11199529B2 (en) | 2017-09-21 | 2021-12-14 | Becton, Dickinson And Company | Hazardous contaminant collection kit and rapid testing |
US11207150B2 (en) | 2020-02-19 | 2021-12-28 | Globus Medical, Inc. | Displaying a virtual model of a planned instrument attachment to ensure correct selection of physical instrument attachment |
US11280801B2 (en) | 2019-01-28 | 2022-03-22 | Becton, Dickinson And Company | Hazardous contaminant collection device with integrated swab and test device |
US20220093236A1 (en) * | 2020-09-01 | 2022-03-24 | Aibolit Technologies, Llc | System, method, and computer-accessible medium for automatically tracking and/or identifying at least one portion of an anatomical structure during a medical procedure |
US11360001B2 (en) | 2017-09-21 | 2022-06-14 | Becton, Dickinson And Company | Reactive demarcation template for hazardous contaminant testing |
US11382700B2 (en) | 2020-05-08 | 2022-07-12 | Globus Medical Inc. | Extended reality headset tool tracking and control |
US11385146B2 (en) | 2017-09-21 | 2022-07-12 | Becton, Dickinson And Company | Sampling systems and techniques to collect hazardous contaminants with high pickup and shedding efficiencies |
US11382699B2 (en) | 2020-02-10 | 2022-07-12 | Globus Medical Inc. | Extended reality visualization of optical tool tracking volume for computer assisted navigation in surgery |
US11391748B2 (en) | 2017-09-21 | 2022-07-19 | Becton, Dickinson And Company | High dynamic range assays in hazardous contaminant testing |
US11464581B2 (en) | 2020-01-28 | 2022-10-11 | Globus Medical, Inc. | Pose measurement chaining for extended reality surgical navigation in visible and near infrared spectrums |
US11510750B2 (en) | 2020-05-08 | 2022-11-29 | Globus Medical, Inc. | Leveraging two-dimensional digital imaging and communication in medicine imagery in three-dimensional extended reality applications |
US11585733B2 (en) | 2017-09-21 | 2023-02-21 | Becton, Dickinson And Company | Hazardous contaminant collection kit and rapid testing |
US11607277B2 (en) | 2020-04-29 | 2023-03-21 | Globus Medical, Inc. | Registration of surgical tool with reference array tracked by cameras of an extended reality headset for assisted navigation during surgery |
US11737831B2 (en) | 2020-09-02 | 2023-08-29 | Globus Medical Inc. | Surgical object tracking template generation for computer assisted navigation during surgical procedure |
US11817201B2 (en) | 2020-09-08 | 2023-11-14 | Medtronic, Inc. | Imaging discovery utility for augmenting clinical image management |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6230442B2 (en) * | 2014-02-20 | 2017-11-15 | 株式会社東芝 | Calculation apparatus, method and program |
EP3074956B1 (en) * | 2014-03-21 | 2018-04-11 | St. Jude Medical, Cardiology Division, Inc. | Methods and systems for generating a multi-dimensional surface model of a geometric structure |
US10574974B2 (en) * | 2014-06-27 | 2020-02-25 | A9.Com, Inc. | 3-D model generation using multiple cameras |
US10360469B2 (en) * | 2015-01-15 | 2019-07-23 | Samsung Electronics Co., Ltd. | Registration method and apparatus for 3D image data |
US9858640B1 (en) * | 2015-07-15 | 2018-01-02 | Hrl Laboratories, Llc | Device and method for merging 3D point clouds from sparsely distributed viewpoints |
US9760996B2 (en) * | 2015-08-11 | 2017-09-12 | Nokia Technologies Oy | Non-rigid registration for large-scale space-time 3D point cloud alignment |
CN108027984B (en) | 2015-09-25 | 2022-05-13 | 奇跃公司 | Method and system for detecting and combining structural features in 3D reconstruction |
CN105488459A (en) * | 2015-11-23 | 2016-04-13 | 上海汽车集团股份有限公司 | Vehicle-mounted 3D road real-time reconstruction method and apparatus |
CN105654427B (en) * | 2015-12-31 | 2019-07-16 | 上海皓腾模型有限公司 | A kind of improvement 3D model dither method and device |
EP3236286B1 (en) * | 2016-04-18 | 2023-01-25 | Otis Elevator Company | Auto commissioning system and method |
US9972067B2 (en) * | 2016-10-11 | 2018-05-15 | The Boeing Company | System and method for upsampling of sparse point cloud for 3D registration |
US10593042B1 (en) * | 2017-04-11 | 2020-03-17 | Zoox, Inc. | Perspective conversion for multi-dimensional data analysis |
US10509947B1 (en) * | 2017-04-11 | 2019-12-17 | Zoox, Inc. | Converting multi-dimensional data for image analysis |
WO2018214086A1 (en) * | 2017-05-25 | 2018-11-29 | 深圳先进技术研究院 | Method and apparatus for three-dimensional reconstruction of scene, and terminal device |
CN107292949B (en) * | 2017-05-25 | 2020-06-16 | 深圳先进技术研究院 | Three-dimensional reconstruction method and device of scene and terminal equipment |
JP2019015553A (en) * | 2017-07-05 | 2019-01-31 | ソニーセミコンダクタソリューションズ株式会社 | Information processing device, information processing method, and solid-state imaging device |
US10776951B2 (en) * | 2017-08-10 | 2020-09-15 | Here Global B.V. | Method, apparatus, and system for an asymmetric evaluation of polygon similarity |
KR102195168B1 (en) * | 2017-11-21 | 2020-12-24 | 한국전자통신연구원 | 3d reconstruction terrain matching method of and apparatus thereof |
US10989795B2 (en) * | 2017-11-21 | 2021-04-27 | Faro Technologies, Inc. | System for surface analysis and method thereof |
CN108228798B (en) * | 2017-12-29 | 2021-09-17 | 百度在线网络技术(北京)有限公司 | Method and device for determining matching relation between point cloud data |
US10679372B2 (en) * | 2018-05-24 | 2020-06-09 | Lowe's Companies, Inc. | Spatial construction using guided surface detection |
CN108898132B (en) * | 2018-05-25 | 2022-08-12 | 广东工业大学 | Terahertz image dangerous article identification method based on shape context description |
US10861173B2 (en) * | 2018-06-22 | 2020-12-08 | The Boeing Company | Hole-based 3D point data alignment |
CN108921929A (en) * | 2018-06-26 | 2018-11-30 | 开放智能机器(上海)有限公司 | A kind of recognition methods of identifying system and training method and individual monocular image |
US10614579B1 (en) | 2018-10-10 | 2020-04-07 | The Boeing Company | Three dimensional model generation using heterogeneous 2D and 3D sensor fusion |
KR102335389B1 (en) * | 2019-01-30 | 2021-12-03 | 바이두닷컴 타임즈 테크놀로지(베이징) 컴퍼니 리미티드 | Deep Learning-Based Feature Extraction for LIDAR Position Estimation of Autonomous Vehicles |
CN110008904A (en) * | 2019-04-08 | 2019-07-12 | 万维科研有限公司 | The method for generating the shape recognition list based on video file format |
CN110097582B (en) * | 2019-05-16 | 2023-03-31 | 广西师范大学 | Point cloud optimal registration and real-time display system and working method |
CN110246166A (en) * | 2019-06-14 | 2019-09-17 | 北京百度网讯科技有限公司 | Method and apparatus for handling point cloud data |
CN110458938B (en) * | 2019-07-22 | 2021-11-12 | 武汉理工大学 | Real-time three-dimensional reconstruction method and system for bulk material pile |
CN110547766B (en) * | 2019-08-22 | 2023-04-28 | 苏州佳世达光电有限公司 | Operation method of mouth sweeping machine |
CN110992407B (en) * | 2019-11-07 | 2023-10-27 | 武汉多谱多勒科技有限公司 | Infrared and visible light image matching method |
CN111127667B (en) * | 2019-11-19 | 2023-03-31 | 西北大学 | Point cloud initial registration method based on region curvature binary descriptor |
CN112418250A (en) * | 2020-12-01 | 2021-02-26 | 怀化学院 | Optimized matching method for complex 3D point cloud |
CN112634336A (en) * | 2020-12-31 | 2021-04-09 | 华科精准(北京)医疗科技有限公司 | Registration method and system |
KR20220112072A (en) | 2021-02-03 | 2022-08-10 | 한국전자통신연구원 | Apparatus and Method for Searching Global Minimum of Point Cloud Registration Error |
US20230169690A1 (en) * | 2021-11-30 | 2023-06-01 | Verizon Patent And Licensing Inc. | Methods and Systems for Scalable Compression of Point Cloud Data |
CN114187178B (en) * | 2021-12-13 | 2024-04-02 | 浙大城市学院 | Porcelain fragment classification and splicing system and method for auxiliary cultural relic restoration |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120022365A1 (en) * | 2010-07-21 | 2012-01-26 | Mansfield Enterprises | Diagnosing Airway Obstructions |
US20130060146A1 (en) * | 2010-04-28 | 2013-03-07 | Ryerson University | System and methods for intraoperative guidance feedback |
US20130211232A1 (en) * | 2012-02-01 | 2013-08-15 | The Johns Hopkins University | Arthroscopic Surgical Planning and Execution with 3D Imaging |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5644689A (en) * | 1992-01-13 | 1997-07-01 | Hitachi, Ltd. | Arbitrary viewpoint three-dimensional imaging method using compressed voxel data constructed by a directed search of voxel data representing an image of an object and an arbitrary viewpoint |
US5715166A (en) * | 1992-03-02 | 1998-02-03 | General Motors Corporation | Apparatus for the registration of three-dimensional shapes |
US8083745B2 (en) | 2001-05-25 | 2011-12-27 | Conformis, Inc. | Surgical tools for arthroplasty |
US5935063A (en) * | 1997-10-29 | 1999-08-10 | Irvine Biomedical, Inc. | Electrode catheter system and methods thereof |
US6573912B1 (en) * | 2000-11-07 | 2003-06-03 | Zaxel Systems, Inc. | Internet system for virtual telepresence |
US7023432B2 (en) * | 2001-09-24 | 2006-04-04 | Geomagic, Inc. | Methods, apparatus and computer program products that reconstruct surfaces from data point sets |
AU2003246906A1 (en) * | 2002-06-25 | 2004-01-06 | Michael Nicholas Dalton | Apparatus and method for superimposing images over an object |
US7103399B2 (en) | 2003-09-08 | 2006-09-05 | Vanderbilt University | Apparatus and methods of cortical surface registration and deformation tracking for patient-to-image alignment in relation to image-guided surgery |
US7693325B2 (en) * | 2004-01-14 | 2010-04-06 | Hexagon Metrology, Inc. | Transprojection of geometry data |
ATE555455T1 (en) | 2004-02-20 | 2012-05-15 | Koninkl Philips Electronics Nv | DEVICE AND PROCESS FOR MULTI-MODAL REGISTRATION OF IMAGES |
US20060020204A1 (en) | 2004-07-01 | 2006-01-26 | Bracco Imaging, S.P.A. | System and method for three-dimensional space management and visualization of ultrasound data ("SonoDEX") |
US7738687B2 (en) * | 2005-04-07 | 2010-06-15 | L-3 Communications Security And Detection Systems, Inc. | Method of registration in a contraband detection system |
WO2008017999A1 (en) | 2006-08-08 | 2008-02-14 | Koninklijke Philips Electronics N.V. | Registration of electroanatomical mapping points to corresponding image data |
WO2009045827A2 (en) | 2007-09-30 | 2009-04-09 | Intuitive Surgical, Inc. | Methods and systems for tool locating and tool tracking robotic instruments in robotic surgical systems |
US8100692B2 (en) | 2007-10-19 | 2012-01-24 | Cagenix Incorporated | Dental framework |
US8000941B2 (en) * | 2007-12-30 | 2011-08-16 | St. Jude Medical, Atrial Fibrillation Division, Inc. | System and method for surface reconstruction from an unstructured point set |
US8442304B2 (en) * | 2008-12-29 | 2013-05-14 | Cognex Corporation | System and method for three-dimensional alignment of objects using machine vision |
US8605989B2 (en) * | 2009-02-13 | 2013-12-10 | Cognitech, Inc. | Registration and comparison of three dimensional objects in facial imaging |
US8290305B2 (en) | 2009-02-13 | 2012-10-16 | Harris Corporation | Registration of 3D point cloud data to 2D electro-optical image data |
US8948501B1 (en) * | 2009-12-22 | 2015-02-03 | Hrl Laboratories, Llc | Three-dimensional (3D) object detection and multi-agent behavior recognition using 3D motion data |
JP5430456B2 (en) * | 2010-03-16 | 2014-02-26 | キヤノン株式会社 | Geometric feature extraction device, geometric feature extraction method, program, three-dimensional measurement device, object recognition device |
WO2012141235A1 (en) * | 2011-04-13 | 2012-10-18 | 株式会社トプコン | Three-dimensional point group position data processing device, three-dimensional point group position data processing system, three-dimensional point group position data processing method and program |
US8879828B2 (en) | 2011-06-29 | 2014-11-04 | Matterport, Inc. | Capturing and aligning multiple 3-dimensional scenes |
KR101907081B1 (en) * | 2011-08-22 | 2018-10-11 | 삼성전자주식회사 | Method for separating object in three dimension point clouds |
US8774504B1 (en) * | 2011-10-26 | 2014-07-08 | Hrl Laboratories, Llc | System for three-dimensional object recognition and foreground extraction |
US8416240B1 (en) * | 2012-04-02 | 2013-04-09 | Google Inc. | Determining 3D model information from stored images |
US8363930B1 (en) | 2012-07-23 | 2013-01-29 | Google Inc. | Use of materials and appearances to merge scanned images |
US9098773B2 (en) * | 2013-06-27 | 2015-08-04 | Chevron U.S.A. Inc. | System and method of detecting objects in scene point cloud |
2014
- 2014-03-10 US US14/202,677 patent/US9280825B2/en active Active
- 2014-07-15 US US14/331,541 patent/US20160019716A1/en not_active Abandoned
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11734901B2 (en) | 2015-02-03 | 2023-08-22 | Globus Medical, Inc. | Surgeon head-mounted display apparatuses |
US11763531B2 (en) | 2015-02-03 | 2023-09-19 | Globus Medical, Inc. | Surgeon head-mounted display apparatuses |
US11217028B2 (en) | 2015-02-03 | 2022-01-04 | Globus Medical, Inc. | Surgeon head-mounted display apparatuses |
US10650594B2 (en) | 2015-02-03 | 2020-05-12 | Globus Medical Inc. | Surgeon head-mounted display apparatuses |
US11176750B2 (en) | 2015-02-03 | 2021-11-16 | Globus Medical, Inc. | Surgeon head-mounted display apparatuses |
US11062522B2 (en) | 2015-02-03 | 2021-07-13 | Global Medical Inc | Surgeon head-mounted display apparatuses |
US11461983B2 (en) | 2015-02-03 | 2022-10-04 | Globus Medical, Inc. | Surgeon head-mounted display apparatuses |
US11707330B2 (en) | 2017-01-03 | 2023-07-25 | Mako Surgical Corp. | Systems and methods for surgical navigation |
US10499997B2 (en) | 2017-01-03 | 2019-12-10 | Mako Surgical Corp. | Systems and methods for surgical navigation |
US10621720B2 (en) * | 2017-04-27 | 2020-04-14 | Siemens Healthcare Gmbh | Deformable registration of magnetic resonance and ultrasound images using biomechanical models |
US20180315181A1 (en) * | 2017-04-27 | 2018-11-01 | Siemens Healthcare Gmbh | Deformable registration of magnetic resonance and ultrasound images using biomechanical models |
US10846893B2 (en) * | 2017-06-29 | 2020-11-24 | Covidien Lp | System and method for identifying, marking and navigating to a target using real time three dimensional fluoroscopic data |
US10699448B2 (en) * | 2017-06-29 | 2020-06-30 | Covidien Lp | System and method for identifying, marking and navigating to a target using real time two dimensional fluoroscopic data |
US11341692B2 (en) | 2017-06-29 | 2022-05-24 | Covidien Lp | System and method for identifying, marking and navigating to a target using real time two dimensional fluoroscopic data |
US10593052B2 (en) | 2017-08-23 | 2020-03-17 | Synaptive Medical (Barbados) Inc. | Methods and systems for updating an existing landmark registration |
US11782042B2 (en) | 2017-09-21 | 2023-10-10 | Becton, Dickinson And Company | Hazardous contaminant collection kit and rapid testing |
US11821819B2 (en) | 2017-09-21 | 2023-11-21 | Becton, Dickinson And Company | Demarcation template for hazardous contaminant testing |
US11585733B2 (en) | 2017-09-21 | 2023-02-21 | Becton, Dickinson And Company | Hazardous contaminant collection kit and rapid testing |
US10916058B2 (en) | 2017-09-21 | 2021-02-09 | Becton, Dickinson And Company | Augmented reality devices for hazardous contaminant testing |
US11360001B2 (en) | 2017-09-21 | 2022-06-14 | Becton, Dickinson And Company | Reactive demarcation template for hazardous contaminant testing |
US11002642B2 (en) | 2017-09-21 | 2021-05-11 | Becton, Dickinson And Company | Demarcation template for hazardous contaminant testing |
WO2019060276A1 (en) * | 2017-09-21 | 2019-03-28 | Becton, Dickinson And Company | Augmented reality devices for hazardous contaminant testing |
US11199529B2 (en) | 2017-09-21 | 2021-12-14 | Becton, Dickinson And Company | Hazardous contaminant collection kit and rapid testing |
US11385146B2 (en) | 2017-09-21 | 2022-07-12 | Becton, Dickinson And Company | Sampling systems and techniques to collect hazardous contaminants with high pickup and shedding efficiencies |
US11380074B2 (en) | 2017-09-21 | 2022-07-05 | Becton, Dickinson And Company | Augmented reality devices for hazardous contaminant testing |
US11391748B2 (en) | 2017-09-21 | 2022-07-19 | Becton, Dickinson And Company | High dynamic range assays in hazardous contaminant testing |
US10646283B2 (en) | 2018-02-19 | 2020-05-12 | Globus Medical Inc. | Augmented reality navigation systems for use with robotic surgical systems and methods of their use |
US11280801B2 (en) | 2019-01-28 | 2022-03-22 | Becton, Dickinson And Company | Hazardous contaminant collection device with integrated swab and test device |
US11860173B2 (en) | 2019-01-28 | 2024-01-02 | Becton, Dickinson And Company | Hazardous contaminant collection device with integrated swab and test device |
US11464581B2 (en) | 2020-01-28 | 2022-10-11 | Globus Medical, Inc. | Pose measurement chaining for extended reality surgical navigation in visible and near infrared spectrums |
US11883117B2 (en) | 2020-01-28 | 2024-01-30 | Globus Medical, Inc. | Pose measurement chaining for extended reality surgical navigation in visible and near infrared spectrums |
US11382699B2 (en) | 2020-02-10 | 2022-07-12 | Globus Medical Inc. | Extended reality visualization of optical tool tracking volume for computer assisted navigation in surgery |
US11690697B2 (en) | 2020-02-19 | 2023-07-04 | Globus Medical, Inc. | Displaying a virtual model of a planned instrument attachment to ensure correct selection of physical instrument attachment |
US11207150B2 (en) | 2020-02-19 | 2021-12-28 | Globus Medical, Inc. | Displaying a virtual model of a planned instrument attachment to ensure correct selection of physical instrument attachment |
US11607277B2 (en) | 2020-04-29 | 2023-03-21 | Globus Medical, Inc. | Registration of surgical tool with reference array tracked by cameras of an extended reality headset for assisted navigation during surgery |
US11382700B2 (en) | 2020-05-08 | 2022-07-12 | Globus Medical Inc. | Extended reality headset tool tracking and control |
US11510750B2 (en) | 2020-05-08 | 2022-11-29 | Globus Medical, Inc. | Leveraging two-dimensional digital imaging and communication in medicine imagery in three-dimensional extended reality applications |
US11838493B2 (en) | 2020-05-08 | 2023-12-05 | Globus Medical Inc. | Extended reality headset camera system for computer assisted navigation in surgery |
US11839435B2 (en) | 2020-05-08 | 2023-12-12 | Globus Medical, Inc. | Extended reality headset tool tracking and control |
US11153555B1 (en) | 2020-05-08 | 2021-10-19 | Globus Medical Inc. | Extended reality headset camera system for computer assisted navigation in surgery |
US20220093236A1 (en) * | 2020-09-01 | 2022-03-24 | Aibolit Technologies, Llc | System, method, and computer-accessible medium for automatically tracking and/or identifying at least one portion of an anatomical structure during a medical procedure |
US11896323B2 (en) * | 2020-09-01 | 2024-02-13 | Aibolit Technologies, Llc | System, method, and computer-accessible medium for automatically tracking and/or identifying at least one portion of an anatomical structure during a medical procedure |
US11737831B2 (en) | 2020-09-02 | 2023-08-29 | Globus Medical Inc. | Surgical object tracking template generation for computer assisted navigation during surgical procedure |
US11817201B2 (en) | 2020-09-08 | 2023-11-14 | Medtronic, Inc. | Imaging discovery utility for augmenting clinical image management |
Also Published As
Publication number | Publication date |
---|---|
US20150254857A1 (en) | 2015-09-10 |
US9280825B2 (en) | 2016-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20160019716A1 (en) | Computer assisted surgical system with position registration mechanism and method of operation thereof | |
EP3081184B1 (en) | System and method for fused image based navigation with late marker placement | |
CN109996511B (en) | System for booting a process | |
US10593052B2 (en) | Methods and systems for updating an existing landmark registration | |
US11883118B2 (en) | Using augmented reality in surgical navigation | |
US11944390B2 (en) | Systems and methods for performing intraoperative guidance | |
US10166079B2 (en) | Depth-encoded fiducial marker for intraoperative surgical registration | |
EP2874556B1 (en) | Augmented reality imaging system for surgical instrument guidance | |
US10154239B2 (en) | Image-guided surgery with surface reconstruction and augmented reality visualization | |
CN108701170B (en) | Image processing system and method for generating three-dimensional (3D) views of an anatomical portion | |
US11416995B2 (en) | Systems, devices, and methods for contactless patient registration for a medical procedure | |
WO2017011892A1 (en) | System and method for mapping navigation space to patient space in a medical procedure | |
Gerard et al. | Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases | |
Wen et al. | Projection-based visual guidance for robot-aided RF needle insertion | |
JP6493885B2 (en) | Image alignment apparatus, method of operating image alignment apparatus, and image alignment program | |
EP3151736A2 (en) | Computer assisted surgical system with position registration mechanism and method of operation thereof | |
JP6745998B2 (en) | System that provides images to guide surgery | |
EP3931799B1 (en) | Interventional device tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: SONY CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: HUANG, ALBERT; LIU, MING-CHANG; HARRES, DENNIS; REEL/FRAME: 033316/0134. Effective date: 20140711 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |