EP3311251A1 - Augmented reality imaging system, apparatus and method - Google Patents
Augmented reality imaging system, apparatus and method
- Publication number
- EP3311251A1 (application EP16747901.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- real
- head mounted
- time
- imaging device
- augmented reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Definitions
- This invention relates to a system, and in particular, but without limitation to, an augmented reality imaging system, apparatus and method suitable for use in intraoperative procedures.
- When carrying out a surgical procedure, a surgeon usually needs to locate a target within the body, for example, a particular tissue structure, growth or bone. Where the target is located within the body, and where open surgery is contraindicated, the surgeon is unable to see the target, which makes the operation more complicated. It is known to use endoscopes and the like, in so-called "keyhole surgery" to enable the surgeon to see, and hence operate on, a target within the body. Even so, an endoscope cannot be used until the surgeon has located, and placed the tip of the endoscope adjacent, the target. Finding the target with an endoscope is sometimes a matter of trial and error, but unsuccessful attempts to locate the target can result in undesirable tissue damage.
- various navigational aids have been developed to facilitate locating the target, such as the use of pre-operative medical imaging (e.g. CT or MRI) in conjunction with visual comparison of the resultant images, by the surgeon, against the patient in theatre.
- although 3D imagery can be very useful, unless it can be accurately overlaid or transposed onto the actual patient at the time of the surgical procedure, the surgeon still needs to use estimation and judgment to gauge how the 3D imagery relates to the patient before her/him.
- to reduce the need for judgment and estimation, augmented reality (AR) systems have been developed in which the surgeon wears a pair of AR spectacles or a head mounted display (HMD), through which the exterior of the patient can be seen directly, but onto which the pre-operative 3D imagery can be overlaid as a "ghost" in the surgeon's field of view.
- AR systems have their drawbacks, chief amongst which is the temporal separation of the pre-operative imagery and the use of that imagery in the AR.
- objects inside the body can move, so unless the pre-operative imagery is taken immediately before the procedure, and the patient has been immobilised in the interim, the absolute accuracy of the AR system is unlikely to be error-free.
- a further drawback of known AR systems is that the pre-operative imagery often needs to be taken with the patient in one position (e.g. with the patient lying supine), whereas the position of the patient on the operating table is often different (e.g. lying prone, or seated). Gravity causes the body's internal organs to displace, and this too can lead to tissues and/or targets moving inside the body, thus decreasing accuracy and reliability and increasing the need for the surgeon to fall back on experience and judgment.
- an augmented reality imaging system comprising: a head mounted display comprising a display; a real-time imaging device adapted, in use, to capture a first real-time video image of a target; a processor; and means for sensing the position and attitude of the head mounted display and the real-time imaging device, wherein the processor is adapted to: create a three-dimensional model comprising the relative positions and orientations of the head mounted display and the real-time imaging device; receive the first real-time video image from the real-time imaging device; and based on the said positions and orientations in the three-dimensional model, to re-sample the first real-time video image from the real-time imaging device as a video image as viewed from the position of, and along the line of sight of, a wearer of the head mounted display; and to output the re-sampled video image as a second real-time video image on the display of the head mounted display.
- the augmented reality imaging system of the invention may additionally be adapted to sense the position and orientation of an instrument, such as a surgical instrument.
- the processor of the augmented reality imaging system of the invention is suitably adapted to insert into the second real-time video image, a representation of the instrument. This suitably enables a wearer of the head mounted display to see, in real-time, a virtual representation of the instrument in the re-sampled image, relative to the target.
- the means for sensing the position and attitude of any of the head mounted display, the real-time imaging device and the instrument comprises a position and attitude sensor operatively (and preferably fixedly) connected to each of the head mounted display, the real-time imaging device and the instrument, where provided.
- the position and attitude sensors for each of the head mounted display, the real-time imaging device and the instrument are integrated, for example into a six-axis (e.g. with six degrees of freedom ("6DOF"), i.e. pan (P), tilt (T), roll (R), elevation (E), track sideward (X) and track forwards (Y)) sensor.
- the six-axis sensor suitably comprises a magnetic and/or gyroscopic sensor.
- an example of a suitable proprietary sensing system is the Ascension Technology Corporation 3D Guidance trakSTAR (RTM), which comprises a magnetic field emitter, one or more 6DOF magnetic receivers and a processor unit that converts the six magnetic field readings of each 6DOF sensor into position and orientation outputs.
- the invention can be used to address or overcome the aforementioned problem of tissues moving within the body in the time between pre-operative, high definition, 3D images being taken, and the time when the patient is on the operating table in a different position.
- the solution to this problem, as proposed by the invention, is to supplement detailed pre-operative imagery with real-time medical imaging, such as ultrasonic imaging. This facilitates "correcting" the pre-operative imaging by morphing it onto the real-time imagery.
- a surgeon is able to use a real-time imaging device, such as an ultrasonic probe, to identify immovable or substantially immovable features within the body, and to "mark" them in real time.
- this can be accomplished, in practice, by the ultrasonic probe having a scroll-wheel type input device enabling a surgeon to move a cursor in her/his AR version of the ultrasonic imaging footage.
- by clicking the scroll wheel when the cursor overlies a particular feature, the surgeon can "mark" a feature in 3D space within the 3D model formed by the processor. This can be repeated a number of times for different structures within the body to build up a "point map" of features identifiable in the ultrasonic imagery that are also identifiable within the pre-operative 3D imagery.
- the processor can then morph the pre-operative 3D imagery onto the point map created in real time, thus matching the pre-operative 3D model with the real-time situation. This compensates for differences between the pre-operative imagery/model and the real-time situation.
- the surgeon can toggle between point-of-view-compensated real-time ultrasonic imagery (the second real-time video image); and displacement- and point-of-view-corrected, pre-operative 3D imagery, in her/his AR representation (the second real-time image) as viewed using the head mounted display.
- the surgeon has the benefit of displacement-corrected, high resolution, pre-operative 3D imagery in her/his AR representation, as well as the option to switch to real-time ultrasonic imagery in cases of ambiguity.
- the second real-time video image can comprise a mixture of displacement- and point-of-view-corrected, pre-operative 3D imagery; and point-of-view-compensated real-time ultrasonic imagery.
- the means for sensing the position and attitude of each of the head mounted display, the real-time imaging device and the instrument comprises a plurality of observation cameras adapted to observe one or more of the head mounted display, the real-time imaging device and the instrument; and wherein the processor is adapted to: receive footage from the observation cameras and to create a three-dimensional model comprising the relative positions and orientations of the head mounted display and the real-time imaging device.
- the invention is able to determine, in real-time, the relative position and orientation of the real-time imaging device and the head mounted display and to reconstitute the image from the point of view of the wearer of the head mounted display.
- the image visible in the display screen of the head mounted display is resampled so that it is correct from the instantaneous point of view.
- the real-time imaging device comprises an ultrasonic imaging device. This is particularly advantageous because it avoids the need to use fluoroscopy in theatre, which poses a radiation hazard to both the patient and theatre staff.
- the image formed by the ultrasonic imaging device is dependent on the orientation (i.e. the rotation and tilt angle of the probe), and it is up to the operator to adjust the orientation so that the image formed corresponds to the point of view of the observer, e.g. by tilting and rotating the probe.
- the image formed by the ultrasonic probe is automatically resampled as if from the point of view of the observer, thus obviating manual manipulation of the probe.
- the invention automatically resamples the real-time video image to compensate for this.
- Advantages of the invention are manifold, but include: reducing or removing the need for the wearer of the head mounted display to continuously reorient the real-time imaging device to correspond to her/his point of view; and the ability to move round the target to view it from different angles.
- This latter mentioned advantage is of particular benefit in endoscopic procedures in which the endoscopic instrument can only be viewed in two dimensions. Ordinarily, the surgeon has to judge the third dimension (e.g. depth). However, with the invention, the surgeon can move around the target, and the "missing dimension" becomes apparent in the head mounted display by virtue of being continuously resampled from her/his instantaneous point of view.
- a surgeon (in the case of a surgical instrument) is better able to gauge the point of insertion, the trajectory of the instrument within the body and so on. In combination with the ability to continuously resample the real-time video image from the imaging device, this affords considerable scope for avoiding "missing" the target during a surgical procedure.
- the invention can be used in other applications, where guiding a hidden instrument towards a hidden target is required.
- the invention may be useful in certain inspection/maintenance systems, for example, guiding/working on wiring within a cavity wall, drilling into a surface to intersect a target (e.g. in locksmithing), etc.
- the various possible applications of the invention are manifold, and a detailed description of all possible applications of the invention is omitted from this disclosure purely for the sake of brevity, but the invention's applications in other fields clearly fall within the scope of this disclosure.
- a second aspect of the invention provides a method of forming an augmented reality image comprising the steps of: determining the position and orientation of at least one head mounted display and at least one real-time imaging device; forming a three-dimensional model comprising a vector for each head mounted display and each real-time imaging device, each vector comprising an origin corresponding to the position of a datum of, and an orientation corresponding to an axis of, the or each real-time imaging device and head mounted display; receiving real-time images from the or each real-time imaging device; re-sampling the real-time images as if from the point of view of a wearer of the or each head mounted display based on the determined position and orientation of the respective head mounted display; and outputting, as an image in the head mounted display, a resampled image as a video image viewed perpendicular to the line of sight of a wearer of the head mounted display.
- the position and orientation of at least one instrument may also be determined and the resampled image suitably comprises an indication of the position and orientation of the instrument.
- the step of determining the position and orientation of at least one head mounted display and at least one real-time imaging device suitably comprises: monitoring the outputs of six-axis position and orientation sensors affixed to any one or more of the head mounted display, the real-time imaging device and the instrument.
- the step of determining the position and orientation of at least one head mounted display and at least one real-time imaging device suitably comprises: observing, from a plurality of viewpoints, encoding markers affixed to the or each real-time imaging device and head mounted display, and determining, based on stereoscopic image analysis of the relative positions and visible portions of the encoding markers, the relative positions and orientations of the or each head mounted display and real-time imaging device.
- the method may further comprise the steps of using the real-time imaging to identify substantially immoveable or fixed points of reference within the body and marking those points; placing the marked points within the 3D model; morphing a pre-operative 3D model containing some or all of the same points of reference to the real-time images (for example, by warping or stretching the pre-operative image so that its points of reference overlie the real-time points of reference); and outputting in the second real-time video image, either or both of the displacement- and point-of-view-corrected, pre-operative 3D imagery; and the point-of-view-compensated real-time video footage.
- the head mounted display can be of any suitable type, for example comprising a transparent visor placed in front of the wearer's eyes.
- An edge- or back-projection system, for example built into a headband of the visor, can project the re-sampled real-time images onto the visor.
- Such a configuration enables the wearer to see directly through the visor (e.g. to see the exterior of the patient before her/him), as well as having a ghosted image of the interior of the patient overlaid.
- the real-time imaging device may be any one or more of the group comprising: an x-ray imager, an ultrasonic imager, a CT scanner, an MRI scanner, or any medical imaging device capable of forming a real-time image of the inside of a patient.
- more than one real-time imaging device may be employed, and in such a situation a wearer of the head mounted display may be able to toggle between, or overlay, the real-time images from the different real-time imaging devices.
- the surgical instrument can be an endoscopic instrument.
- the plurality of observation cameras is adapted to observe one or more of the head mounted display, the real-time imaging device and the surgical instrument.
- any one or more of the head mounted display, the real-time imaging device and the surgical instrument may comprise an encoding device, such as a barcode, QR code or other identifying marker, from which the observation cameras can determine the identity and/or position and/or orientation of the head mounted display, the real-time imaging device or the surgical instrument.
- the processor suitably comprises a computer adapted to carry out the functions of the invention.
- the computer, where provided, comprises I/O ports operatively connected to the or each of the head mounted display, the real-time imaging device and, optionally, to the surgical instrument.
- the three-dimensional model suitably comprises a Cartesian coordinate system comprising a series of vectors corresponding to one or more of the head mounted display, the real-time imaging device and the surgical instrument.
- the use of vectors is preferred because it enables each vector's origin to correspond to the position in 3-D space of each of the objects in the model, and the direction to correspond to an axis relative to the line of sight or another known axis of the device.
- Re-sampling the real-time video image is suitably accomplished by capturing a plurality of simultaneous real-time images using the real-time imaging device, and by interpolating between the available views to obtain a resampled image viewed perpendicular to the line of sight of a wearer of the head mounted display.
- the real-time imaging device is adapted to capture images at different rotational angles. Additionally, or alternatively, the real-time imaging device may be adapted to simultaneously capture a series of spaced-apart slices.
- the processor comprises a machine vision system adapted to identify the aforesaid markers and to populate the three-dimensional model from the visible parts of the markers.
- the observation cameras may be fixed or moveable. In certain embodiments of the invention, the observation cameras are adapted to automatically pan, tilt, roll etc. if one of the markers moves out of view.
- the cameras may be configured to move so as to scan a particular field of view.
- each of the users may be able to select real-time imagery from their own point of view, or from the point of view of another head mounted display wearer.
- a switch means such as a foot pedal, is suitably provided to enable each wearer of each head mounted display to switch between different augmented reality views on her/his display.
- Figure 1 is a schematic side view of a surgical set up in accordance with a first embodiment of the invention;
- Figure 2 is a schematic plan view of Figure 1;
- Figure 3 is a schematic system diagram illustrating the first embodiment of the invention;
- Figure 4 is a schematic side view of a surgical set up in accordance with a second embodiment of the invention;
- Figure 5 is a schematic plan view of Figure 4;
- Figure 6 is a schematic system diagram illustrating the second embodiment of the invention;
- Figure 7 is a perspective schematic view of an AR image formed by the invention using real-time ultrasonic images; and
- Figure 8 is a perspective schematic view of an AR image formed by the invention using a morphed version of a prior 3D scan matched to markers in the real-time ultrasonic images.
- a typical surgical procedure involves operating on a target structure 12 located within the body 10.
- the surgeon (not shown) needs to access the target structure 12 using a laparoscopic instrument 14, which is inserted into the body 10 via an entry point 16, and whose orientation 18 is adjusted so that when advanced into the body 10, it intersects the target structure 12.
- a real-time imaging device in this case, an ultrasonic imaging device, comprises a probe 20 that is placed on the surface 22 of the body 10 adjacent the target structure 12.
- the ultrasonic imaging device captures an internal image of the body 10, in which the target structure 12 and the tip of the laparoscopic instrument 14 are visible.
- a 2D image from the real-time imaging device 20 provides insufficient information to enable the surgeon to correctly guide the laparoscopic instrument.
- whilst a single 2D image may indicate that the horizontal and vertical positions of the instrument 14 are correctly placed relative to the target structure 12, the surgeon cannot check that the depth of the instrument is correct (cf. Figure 2).
- the "missing dimension" is gauged by judging parallax movements within the captured 2D image, or by moving the probe 20 to obtain a real-time image from a different perspective.
- the invention utilises a head mounted display unit 30, which the surgeon (not shown) wears on her/his head.
- the head mounted display unit 30 comprises a head band 32 bearing an encoding pattern 34, which can be seen by three observation cameras 50 arranged at different positions within the operating theatre.
- the head mounted display unit 30 further comprises a transparent visor 36, placed in front of the surgeon's eyes, through which the body 10 is directly visible, and onto which, a real-time image is displayed. The result is a ghosted internal image of the interior of the body 10 overlaid onto the "real" view - an augmented reality display.
- the real-time imaging probe 20 also comprises encoding pattern 38, which can be seen by three observation cameras 50 as well.
- a system 60 in accordance with the invention comprises a processor 62 operatively connected to the (or several) real-time imaging devices 20, which produces (or each of which produce) a real-time video image 64 of the interior of the body 10. Meanwhile, several observation cameras 50 scan the scene, and capture real-time video images 66 of the encoding patterns 34, 38 from the head mounted display unit 30 and the real-time imaging probe 20.
- the real-time images 66 from the observation cameras 50 are fed into a machine vision module 68 of the processor 62, which builds a 3D model 70 of the scene.
- the 3D model 70 comprises a Cartesian coordinate system 72 containing a set of vectors 74 each having an origin and an orientation.
- the origin of each vector corresponds to the position of each observed item in the scene, and the orientation corresponds to an axis - conveniently, a "line of sight".
- the origin is determined by comparing, in each of the captured observation cameras' images 66, the positions of the targets, i.e. the real-time imaging probe 20 and the head mounted display unit 30. Their positions in Cartesian space can be obtained by stereoscopic analysis of the captured observation cameras' images 66, and this is carried out in the machine vision module 68 of the processor.
- the orientation of each of the components can be determined, by the machine vision module 68, by comparing which parts of the encoder patterns 34, 38 are visible in each of the observation cameras' captured images 66.
- the machine vision module 68 of the processor 62 thus creates the 3D model 70 in which the vectors 74 correspond to the position and line of sight 80 of the head mounted display unit 30; the position and orientation 82 of the probe 20; and the position and orientation 18 of the laparoscopic instrument 14.
- the real-time images 64 from the probe 20 can thus be resampled within a resampling module 84 of the processor 62: the real-time image 64 being resampled as if from the point of view 80 of the wearer of the head mounted display 30.
- the resampled image is then rendered as an overlay image 88 in a rendering module 90 of the processor, which overlay image 88 is projected onto the visor 36 of the head mounted display 30 so that the surgeon can see the real-time medical image as if from her/his point of view/perspective.
- the laparoscopic instrument 14 also comprises an encoder pattern 92, and this too is observed by the observation cameras 50 and inputted into the rendered image 88 that is projected onto the visor 36 of the head mounted display unit 30.
- an image or line representing it can be overlaid into the rendered image 88. This greatly facilitates guiding the instrument 14 towards the target 12 within the body 10.
- a surgeon is able to see, in augmented reality, the interior of the body 10 as a virtual overlay on top of the exterior view of the body 10.
- Various controls may be made available to the surgeon, for example, by using a foot pedal control (not shown), to enable the apparent transparency of the rendered image to be adjusted in the visor 36 (so that an overlay, or only the rendered image is visible).
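The transparency control just described lends itself to a simple alpha blend. Below is a minimal sketch, assuming a video-compositing pipeline in which the rendered overlay and a background view are available as arrays; in the optical see-through visor of this embodiment the same effect would instead be achieved by varying the intensity of the projected image. All names and values are illustrative, not from the patent.

```python
# A sketch of adjustable overlay transparency: the rendered internal image is
# alpha-blended over the view of the patient, with alpha driven by the foot
# pedal (alpha = 1 shows only the rendered image, alpha = 0 hides it).
import numpy as np

def blend(direct_view: np.ndarray, rendered: np.ndarray, alpha: float) -> np.ndarray:
    """Mix an RGB overlay into the background view; 0 <= alpha <= 1."""
    return (1.0 - alpha) * direct_view + alpha * rendered

background = np.zeros((480, 640, 3))   # stand-in for the exterior view
overlay = np.ones((480, 640, 3))       # stand-in for the rendered interior image
ghosted = blend(background, overlay, alpha=0.3)   # faint "ghost" of the interior
```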
- the rendered image 88 is automatically updated, enabling the surgeon to view the target structure 12, the instrument 14 from any desired angle, thus obviating the need to gauge depth perception based on parallax or other cues (e.g. depth of field, apparent relative sizes etc.).
- in Figures 4 to 7 of the drawings, a second embodiment of the invention is shown, which is broadly similar to that described previously, in which the position and orientation of a real-time imaging device 20, the head mounted display unit 30 and an instrument 14 are sensed using 6DOF sensors 102, 104, 106 rigidly affixed to each of the real-time imaging device 20, the head mounted display unit 30 and the instrument 14, respectively.
- the head mounted display unit 30 is different to that described previously inasmuch as it comprises a pair of display screens 108 located, in use, in front of the eyes (not shown) of a wearer (not shown), and a pair of forward-facing cameras 110 each capturing one half of a composite stereoscopic image, which is displayed on the display screens 108.
- a wearer of the head mounted display 30 sees the scene in front of him/her normally, via the cameras 110 and the display screens 108.
- the 6DOF sensors 102, 104, 106 each comprise 6-axis magnetic sensors that interact with a transmitter unit 112.
- the 6DOF sensors 102, 104, 106 thus each output six sensor readings, one sensor reading corresponding to each degree of freedom.
- the sensor readings are collected via wires or wirelessly, and are fed to inputs of a processor (not shown), whose operation shall be described in greater detail below.
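The patent leaves the wired or wireless transport unspecified, so the sketch below simply assumes one UDP datagram of six little-endian float32 readings per sensor; the port number and packet layout are invented purely for illustration.

```python
# A sketch of collecting the six readings per 6DOF sensor over an assumed
# UDP link; this is not the actual protocol, which the patent does not name.
import socket
import struct

def read_6dof(sock: socket.socket) -> tuple[float, ...]:
    """Receive one datagram holding six little-endian float32 readings."""
    data, _addr = sock.recvfrom(24)   # 6 readings x 4 bytes each
    return struct.unpack("<6f", data)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))          # hypothetical sensor port
# pan, tilt, roll, elevation, x, y = read_6dof(sock)  # blocks until a packet arrives
```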
- the system 100 of the invention comprises a head mounted display unit 30 comprising the stereoscopic cameras 110 and the stereoscopic display screens 108 described previously.
- the cameras 110 each capture one half of a stereoscopic image 120, which is fed to an image capture part 84 of the processor 62.
- the real-time imaging device 20 captures a first real-time image 122 along its point of view 82, and this image 122 is also fed into the image capture part 84 of the processor 62.
- each of the head mounted display 30, the real-time imaging device 20 and the instrument 14 comprises a respective 6DOF sensor 102, 104, 106, which are in range 124 of a transmitter 112.
- the six sensor readings, one corresponding to each degree of freedom, from each of the 6DOF sensors 102, 104, 106 are fed into a mapping part 126 of the processor 62, which calculates a three-dimensional model 70 of the positions and orientations of each of the head mounted display 30, the real-time imaging device 20 and the instrument 14.
- a rendering module 90 of the processor 62 resamples the captured footage 122 from the real-time imaging device 20, as if from the point of view 80 of a wearer of the head mounted display unit 30 and outputs a re-sampled feed 122'.
- the rendering module 90 of the processor mixes the re-sampled feed 122' from the real-time imaging device 20 with the feeds 120 from each of the head mounted display's cameras 110 and outputs a composite image 130, corresponding to each of the feeds 120, i.e. one for each eye of the wearer of the head mounted display 30.
- the composite image is then displayed on the display screens 108 of the head mounted display unit 30, thus enabling the wearer of the head mounted display 30 to see an overlay of the footage 122 of the real-time imaging device, albeit re-sampled so as to appear as if it were captured along the line of sight 80 of the wearer of the head mounted display, as opposed to along the line of sight 82 of the real-time imaging device.
- the system is also configured to capture the position and orientation of an instrument, such as a laparoscopic instrument 14, and a virtual representation 132 of the instrument 14 can be mixed into the composite image 130.
- Figure 7 of the drawings shows what a wearer of a second head mounted display unit would see, for example, in theatre.
- a surgeon 150 is wearing a first head mounted display unit 30 similar to that described above in relation to Figures 4, 5 and 6 above, and she holds in one hand, an ultrasonic probe 20 capturing a real-time image of a surgical site 152, including a target structure 12; and in the other hand, a surgical instrument 14.
- the captured image 122 from the ultrasonic probe 20 is re-sampled 122' as if from the point of view shown in Figure 7, whereas, of course, the surgeon 150 would see a different re-sampled image 122' as if from her own point of view 80.
- the virtual reality overlay 130 comprises a representation 132 of the hidden part of the instrument 14, as well as a re-sampled version 122' of the real-time footage 122 from the ultrasonic probe 20. Also included in the virtual reality overlay 130 is a topographical grid 154 representing the surface of the patient, as well as gridlines indicating depth etc. into the body of the patient.
- the ultrasonic probe 20 comprises a click-wheel type button 160 that enables the surgeon 150 to move and place a cursor 162 within the AR image 130 to identify various points of reference.
- the points of reference correspond to substantially fixed or immoveable features within the patient's body, such as the target structure 12.
- the processor 62 can warp, stretch and/or reorientate a pre-operative 3D model of the subject such that similar points of reference in the pre-operative 3D model correspond to the cursor position 162 in 3D space.
- the processor can thus display in the AR depiction 130, either or both of the displacement- and point-of-view-corrected, pre-operative 3D imagery; and the point-of-view-compensated real-time video footage.
- This latter modification of the invention is shown schematically in Figure 8 of the drawings in which this time, the second real-time video image 130 comprises a morphed version of a pre-operative 3D model.
- the invention provides a number of advantages over known imaging systems insofar as it enables the surgeon 150 to re-orient the ultrasonic probe 20, and/or to move her head, to view the target 12 and/or instrument 14 from different angles. Also, since each wearer of each head mounted display unit is presented with the same set of images, albeit re-sampled from their own point of view, it enables two or more surgeons to work on the same operative site without having to duplicate equipment. The ability of a number of people to see the same augmented reality footage, albeit rendered from their individual points of view, affords a great deal of scope for teamwork and cooperation during a surgical procedure. It also offers considerable scope for crosschecking, thus avoiding, or reducing, unnecessary iterations of an "approach" towards a surgical target structure 12. Further, any suspected errors or deviations in the pre-operative 3D model can be crosschecked against real-time imagery.
- this disclosure relates to an augmented reality imaging system comprising: a head mounted augmented reality display unit (30); a real-time imaging device (20), such as an ultrasound probe, that captures a real-time video image (122) of a target; a processor; and means for sensing the position and attitude of the head mounted unit (30) and the real-time imaging device (20).
- the system adjusts the real-time video image (122) so that it appears, in the head-mounted display unit (30), to have been taken from the point of view of the user (150).
- a user (150) is able to move his/her head around a point of interest (12) to better gauge the "missing dimension" in what would otherwise be a 2D image.
- by placing virtual markers (162) in the AR output displayed in the user's head unit (30), and by matching them to known points of interest in a subject, the system is able to correct for displacement etc. in offline (previously captured) 3D imagery (132), such as an MRI scan, to the actual position of those features.
- any number of head mounted display units 30 may be provided, each having its own rendered images 88; any number of instruments 14 may be used, each having its own vector 74 in the 3D model 70; and any number of real-time medical imaging devices may be used, each outputting its own real-time or near-real-time images, which may be rendered according to any desired number of users' points of view.
- a monitor may also be provided, which displays any one or more of the rendered 88 or real-time images 64.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Primary Health Care (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Pathology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radiology & Medical Imaging (AREA)
- Processing Or Creating Images (AREA)
Abstract
An augmented reality imaging system comprising: a head mounted augmented reality display unit (30); a real-time imaging device (20), such as an ultrasound probe, that captures a real-time video image (122) of a target; a processor; and means for sensing the position and attitude of the head mounted unit (30) and the real-time imaging device (20). The system adjusts the real-time video image (122) so that it appears, in the head mounted display unit (30), to have been taken from the point of view of the user (150). Thus, a user (150) is able to move his/her head around a point of interest (12) to better gauge the "missing dimension" in what would otherwise be a 2D image. By placing virtual markers (162) in the AR output displayed in the user's head unit (30), and by matching them to known points of interest in a subject, the system is able to correct for displacement etc. in offline (previously captured) 3D imagery (132), such as an MRI scan, to the actual position of those features.
Description
Title: Augmented reality imaging system, apparatus and method
This invention relates to a system, and in particular, but without limitation to, an augmented reality imaging system, apparatus and method suitable for use in intraoperative procedures.
When carrying out a surgical procedure, a surgeon usually needs to locate a target within the body, for example, a particular tissue structure, growth or bone. Where the target is located within the body, and where open surgery is contraindicated, the surgeon is unable to see the target, which makes the operation more complicated. It is known to use endoscopes and the like, in so-called "keyhole surgery" to enable the surgeon to see, and hence operate on, a target within the body. Even so, an endoscope cannot be used until the surgeon has located, and placed the tip of the endoscope adjacent, the target. Finding the target with an endoscope is sometimes a matter of trial and error, but unsuccessful attempts to locate the target can result in undesirable tissue damage.
A need therefore exists for a system that assists the surgeon in locating a hidden target. To this end, various navigational aids have been developed to facilitate locating the target, such as the use of pre-operative medical imaging (e.g. CT or MRI) in conjunction with visual comparison of the resultant images, by the surgeon, against the patient in theatre. Although 3D imagery can be very useful, unless it can be accurately overlaid or transposed onto the actual patient, at the time of the surgical procedure, the surgeon still needs to use estimation and judgment to gauge how the 3D imagery relates to the patient before her/him.
To reduce the need for judgment and estimation, augmented reality (AR) systems have been developed in which the surgeon wears a pair of AR spectacles or head mounted display (HMD) through which the exterior of the patient can be seen directly, but onto which, the pre-operative 3D imagery can be overlaid, as a "ghost" in the field of view of the surgeon. Such an arrangement puts the 3D imagery into context, making it much easier for the surgeon to see how the 3D imagery relates to the patient before her/him.
However, even AR systems have their drawbacks, chief amongst which is the temporal separation of the pre-operative imagery and the use of that imagery in the AR. In short, objects inside the body can move, so unless the pre-operative imagery is taken immediately before the procedure, and the patient has been immobilised in the interim, the absolute accuracy of the AR system is unlikely to be error-free.
A further drawback of known AR systems is that the pre-operative imagery often needs to be taken with the patient in one position (e.g. with the patient lying supine), whereas the position of the patient on the operating table is often different (e.g. lying prone, or seated). Gravity causes the body's internal organs to displace, and this too can lead to tissues and/or targets moving inside the body, thus decreasing accuracy and reliability and increasing the need for the surgeon to fall back on experience and judgment.
A need therefore exists for a solution to one or more of the above problems, and/or for an alternative to known systems such as those described above. In particular, a need exists for a navigational system for use in intraoperative surgery that allows the surgeon to view 360° around a subject in real-time, in order to potentially improve the accuracy and reduce error associated with such procedures.
Various aspects of the invention are set forth in the appended independent claims.
According to a first aspect of the invention there is provided an augmented reality imaging system comprising: a head mounted display comprising a display; a real-time imaging device adapted, in use, to capture a first real-time video image of a target; a processor; and means for sensing the position and attitude of the head mounted display and the real-time imaging device, wherein the processor is adapted to: create a three-dimensional model comprising the relative positions and orientations of the head mounted display and the real-time imaging device; receive the first real-time video image from the real-time imaging device; and based on the said positions and orientations in the three-dimensional model, to re-sample the first real-time video image from the real-time imaging device as a video image as viewed from the position of, and along the line of sight of, a wearer of the head mounted display; and to output the re-sampled video image as a second real-time video image on the display of the head mounted display.
The augmented reality imaging system of the invention may additionally be adapted to sense the position and orientation of an instrument, such as a surgical instrument. In such a situation, the processor of the augmented reality imaging system of the invention is suitably adapted to insert into the second real-time video image, a representation of the instrument. This suitably enables a wearer of the head mounted display to see, in real-time, a virtual representation of the instrument in the re-sampled image, relative to the target.
In a preferred embodiment of the invention, the means for sensing the position and attitude of any of the head mounted display, the real-time imaging device and the instrument comprises a position and attitude sensor operatively (and preferably fixedly) connected to each of the head mounted display, the real-time imaging device and the instrument, where provided. Suitably, the position and attitude sensors for each of the head mounted display, the real-time imaging device and the instrument are integrated, for example into a six-axis (e.g. with six degrees of freedom ("6DOF"), i.e. pan (P), tilt (T), roll (R), elevation (E), track sideward (X) and track forwards (Y)) sensor. The six-axis sensor suitably comprises a magnetic and/or gyroscopic sensor. An example of a proprietary system suitable for sensing the position and attitude of each of the head mounted display, the real-time imaging device and the instrument, in accordance with the invention, is the Ascension Technology Corporation 3D Guidance trakSTAR (RTM), which comprises a magnetic field emitter, one or more 6DOF magnetic receivers and a processor unit that converts the six magnetic field readings of each 6DOF sensor into position and orientation outputs.
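As a rough illustration of that final conversion step, the sketch below turns the six tracked quantities named above (pan, tilt, roll, elevation, track sideward, track forwards) into a single 4x4 pose matrix. The Euler-angle convention and axis assignments are assumptions for the sake of the example, not the trakSTAR specification.

```python
# A minimal sketch: six degree-of-freedom readings -> homogeneous pose matrix.
import numpy as np

def pose_from_6dof(pan: float, tilt: float, roll: float,
                   elevation: float, x: float, y: float) -> np.ndarray:
    """Build a 4x4 world-from-sensor transform (angles in radians)."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])  # pan about the vertical axis
    Ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])  # tilt
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, elevation]                           # track X/Y plus elevation
    return T
```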
Suitably, the invention can be used to address or overcome the aforementioned problem of tissues moving within the body in the time between pre-operative, high definition, 3D images being taken, and the time when the patient is on the operating table in a different position.
The solution to this problem, as proposed by the invention, is to supplement detailed pre-operative imagery with real-time medical imaging, such as ultrasonic imaging. This facilitates "correcting" the pre-operative imaging by morphing it onto the real-time imagery.
Specifically, using the invention, a surgeon is able to use a real-time imaging device, such as an ultrasonic probe, to identify immovable or substantially immovable features within the body, and to "mark" them in real time. This can be accomplished, in practice, by the ultrasonic probe having a scroll-wheel type input device enabling a surgeon to move a cursor in her/his AR version of the ultrasonic imaging footage. By clicking the scroll wheel when the cursor overlies a particular feature, the surgeon can "mark" a feature in 3D space within the 3D model formed by the processor. This can be repeated a number of times for different structures within the body to build up a "point map" of features identifiable in the ultrasonic imagery that are also identifiable within the pre-operative 3D imagery. The processor can then morph the pre-operative 3D imagery onto the point map created in real time, thus matching the pre-operative 3D model with the real-time situation. This compensates for differences between the pre-operative imagery/model and the real-time situation.
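The patent does not prescribe a morphing algorithm. One minimal realisation of matching the pre-operative model to the intra-operative point map is a least-squares affine fit between corresponding marked points, sketched below; a thin-plate-spline warp would additionally allow the local stretching the text alludes to. All names and values are illustrative.

```python
# A sketch of "morphing" the pre-operative model onto the real-time point map
# via a best-fit 3D affine transform (least squares over matched landmarks).
import numpy as np

def fit_affine(preop_pts: np.ndarray, live_pts: np.ndarray) -> np.ndarray:
    """Return a 4x4 affine A such that A @ [p;1] approximates the live point for each row p."""
    n = preop_pts.shape[0]
    src = np.hstack([preop_pts, np.ones((n, 1))])    # homogeneous coords, shape (N, 4)
    X, *_ = np.linalg.lstsq(src, live_pts, rcond=None)  # solves src @ X = live_pts
    A = np.eye(4)
    A[:3, :] = X.T
    return A

# Four marked landmarks (needs at least four non-coplanar points in 3D).
preop = np.array([[0., 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
live = preop + [1.5, -0.5, 2.0]                      # e.g. organs displaced by gravity
A = fit_affine(preop, live)
warped = (A @ np.hstack([preop, np.ones((4, 1))]).T).T[:, :3]  # matches `live`
```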
When a sufficient number of markers have been identified using the ultrasonic probe, the surgeon can toggle between point-of-view-compensated real-time ultrasonic imagery (the second real-time video image); and displacement- and point-of-view-corrected, pre-operative 3D imagery, in her/his AR representation (the second real-time image) as viewed using the head mounted display. Thus, the surgeon has the benefit of displacement-corrected, high resolution, pre-operative 3D imagery in her/his AR representation, as well as the option to switch to real-time ultrasonic imagery in cases of ambiguity. In certain embodiments, the second real-time video image can comprise a mixture of displacement- and point-of-view-corrected, pre-operative 3D imagery; and point-of-view-compensated real-time ultrasonic imagery. This addresses or overcomes current limitations of ultrasonic imaging, namely it being a 2D sectional imaging procedure, and working out the exact relationship between the ultrasonic images and those of the pre-operative imagery.
Additionally or alternatively, the means for sensing the position and attitude of each of the head mounted display, the real-time imaging device and the instrument comprises a plurality of observation cameras adapted to observe one or more of the head mounted display, the real-time imaging device and the instrument; and wherein the processor is adapted to: receive footage from the observation cameras and to create a three-dimensional model comprising the relative positions and orientations of the head mounted display and the real-time imaging device.
In all embodiments, the invention is able to determine, in real-time, the relative position and orientation of the real-time imaging device and the head mounted display and to reconstitute the image from the point of view of the wearer of the head mounted display. Thus, as the wearer of the head mounted display moves relative to the target and/or the real-time imaging device, the image visible in the display screen of the head mounted display is resampled so that it is correct from the instantaneous point of view.
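Geometrically, this reconstitution reduces to composing the two sensed poses. A minimal sketch, assuming each pose is supplied as a world-from-device homogeneous matrix (the representation is an assumption; the patent only requires that positions and orientations be known):

```python
# A sketch of the core geometric step: express the imaging probe's pose in the
# head mounted display's frame, so imagery can be re-rendered from the
# wearer's instantaneous viewpoint. Re-run every frame as the wearer moves.
import numpy as np

def probe_in_hmd_frame(world_from_hmd: np.ndarray,
                       world_from_probe: np.ndarray) -> np.ndarray:
    """Return hmd_from_probe = inv(world_from_hmd) @ world_from_probe."""
    return np.linalg.inv(world_from_hmd) @ world_from_probe
```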
In one embodiment of the invention, the real-time imaging device comprises an ultrasonic imaging device. This is particularly advantageous because it avoids the need to use fluoroscopy in theatre, which poses a radiation hazard to both the patient and theatre staff.
Ordinarily, the image formed by the ultrasonic imaging device is dependent on the orientation (i.e. the rotation and tilt angle of the probe), and it is up to the operator to adjust the orientation so that the image formed corresponds to the point of view of the observer, e.g. by tilting and rotating the probe. However, by virtue of the invention, the image formed by the ultrasonic probe is automatically resampled as if from the point of view of the observer, thus obviating manual manipulation of the probe. Further, even if the probe is kept still, but the wearer of the head mounted display moves (i.e. changes her/his point of view), the invention automatically resamples the real-time video image to compensate for this.
Advantages of the invention are manifold, but include: reducing or removing the need for the wearer of the head mounted display to continuously reorient the real-time imaging device to correspond to her/his point of view; and the ability to move round the target to view it from different angles. This latter mentioned advantage is of particular benefit in endoscopic procedures in which the endoscopic instrument can only be viewed in two dimensions. Ordinarily, the surgeon has to judge the third dimension (e.g. depth). However, with the invention, the surgeon can move around the target, and the "missing dimension" becomes apparent in the head mounted display by virtue of being continuously resampled from her/his instantaneous point of view.
By monitoring the position and orientation of the instrument, a surgeon (in the case of a surgical instrument) is better able to gauge the point of insertion, the trajectory of the instrument within the body and so on. In combination with the ability to continuously resample the real-time video image from the imaging device, this affords considerable scope for avoiding "missing" the target, during a surgical procedure.
It will be appreciated that the invention can be used in other applications, where guiding a hidden instrument towards a hidden target is required. For example, the invention may be useful in certain inspection/maintenance systems, for example, guiding/working on wiring within a cavity wall, drilling into a surface to intersect a target (e.g. in locksmithing), etc. The various possible applications of the invention are manifold, and a detailed description of all possible applications of the invention is omitted from this disclosure purely for the sake of brevity, but the invention's applications in other fields clearly fall within the scope of this disclosure.
A second aspect of the invention provides a method of forming an augmented reality image comprising the steps of: determining the position and orientation of at least one head mounted display and at least one real-time imaging device; forming a three-dimensional model comprising a vector for each head mounted display and each real-time imaging device, each vector comprising an origin corresponding to the position of a datum of, and an orientation corresponding to an axis of, the or each real-time imaging device and head mounted display; receiving real-time images from the or each real-time imaging device; re-sampling the real-time images as if from the point of view of a wearer of the or each head mounted display based on the determined position and orientation of the respective head mounted display; and outputting, as an image in the head mounted display, a resampled image as a video image viewed perpendicular to the line of sight of a wearer of the head mounted display.
The position and orientation of at least one instrument may also be determined and the resampled image suitably comprises an indication of the position and orientation of the instrument.
In a preferred embodiment, the step of determining the position and orientation of at least one head mounted display and at least one real-time imaging device suitably comprises: monitoring the outputs of six-axis position and orientation sensors affixed to any one or more of the head mounted display, the real-time imaging device and the instrument.
Additionally or alternatively, the step of determining the position and orientation of at least one head mounted display and at least one real-time imaging device suitably comprises: observing, from a plurality of viewpoints, encoding markers affixed to the or each real-time imaging device and head mounted display, and determining, based on stereoscopic image analysis of the relative positions and visible portions of the encoding markers, the relative positions and orientations of the or each head mounted display and real-time imaging device.
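One standard way to realise the positional part of this stereoscopic determination is linear (direct linear transform) triangulation of each marker's image coordinates in two calibrated cameras, sketched below. The 3x4 projection matrices are assumed to come from a prior camera calibration, which the patent does not detail.

```python
# A sketch of DLT triangulation: recover a marker's 3D position from its pixel
# coordinates in two observation cameras with known projection matrices.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                uv1: tuple[float, float], uv2: tuple[float, float]) -> np.ndarray:
    """Return the 3D point whose projections best match pixel coords uv1, uv2."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])   # u * (row 3) - (row 1) = 0
        rows.append(v * P[2] - P[1])   # v * (row 3) - (row 2) = 0
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)        # null-space solution minimises algebraic error
    X = vt[-1]
    return X[:3] / X[3]                # de-homogenise
```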
The method may further comprise the steps of using the real-time imaging to identify substantially immoveable or fixed points of reference within the body and marking those points; placing the marked points within the 3D model; morphing a pre-operative 3D model containing some or all of the same points of reference to the real-time images (for example, by warping or stretching the pre-operative image so that its points of reference overlie the real-time points of reference); and outputting in the second real-time video image, either or both of the displacement- and point-of-view-corrected, pre-operative 3D imagery; and the point-of-view-compensated real-time video footage.
The head mounted display can be of any suitable type, for example comprising a transparent visor placed in front of the wearer's eyes. An edge- or back-projection system, for example built into a headband of the visor, can project the re-sampled real-time images onto the visor. Such a configuration enables the wearer to see directly through the visor (e.g. to see the exterior of the patient before her/him), as well as having a ghosted image of the interior of the patient overlaid.
The real-time imaging device may be any one or more of the group comprising: an x-ray imager, an ultrasonic imager, a CT scanner, an MRI scanner, or any medical imaging device capable of forming a real-time image of the inside of a patient.
In certain embodiments of the invention, more than one real-time imaging device may be employed, and in such a situation a wearer of the head mounted display may be able to toggle between, or overlay, the real-time images from the different real-time imaging devices.
The surgical instrument, where provided, can be an endoscopic instrument.
Where provided, the plurality of observation cameras is adapted to observe one or more of the head mounted display, the real-time imaging device and the surgical instrument. In certain embodiments of the invention, any one or more of the head mounted display, the real-time imaging device and the surgical instrument may comprise an encoding device, such as a barcode, QR code or other identifying marker, from which the observation cameras can determine the identity and/or position and/or orientation of the head mounted display, the real-time imaging device or the surgical instrument.
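As a toy illustration of how an observed marker could encode an identity, the sketch below reads a binarised 4x4 cell grid as a 16-bit ID. A production system would more likely use an established QR or ArUco decoder; this scheme is an assumption for illustration only.

```python
# A sketch of decoding a device identity from a binarised marker grid.
import numpy as np

def marker_id(cells: np.ndarray) -> int:
    """Interpret a 4x4 boolean grid (row-major, MSB first) as an integer ID."""
    bits = cells.astype(int).flatten()
    return int("".join(map(str, bits)), 2)

grid = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 1]], dtype=bool)
assert marker_id(grid) == 0b1000000000000001   # corner bits set -> ID 32769
```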
The processor suitably comprises a computer adapted to carry out the functions of the invention. The computer, where provided, comprises I/O ports operatively connected to the or each of the head mounted display, the real-time imaging device and, optionally, to the surgical instrument.
The three-dimensional model suitably comprises a Cartesian coordinate system comprising a series of vectors corresponding to one or more of the head mounted display, the real-time imaging device and the surgical instrument. The use of vectors is preferred because it enables each vector's origin to correspond to the position in 3-D space of each of the objects in the model, and the direction to correspond to an axis relative to the line of sight or another known axis of the device.
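A minimal sketch of such a model, with one origin-plus-direction vector per tracked object; all names and values are illustrative, not taken from the patent.

```python
# A sketch of the Cartesian vector model: each tracked object contributes an
# origin (its position) and a unit direction (its line of sight or known axis).
from dataclasses import dataclass
import numpy as np

@dataclass
class TrackedVector:
    origin: np.ndarray     # position of the object's datum in Cartesian space
    direction: np.ndarray  # unit vector along the line of sight / known axis

def unit(v) -> np.ndarray:
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

model = {
    "head_mounted_display": TrackedVector(np.array([0.0, 1.7, -0.5]), unit([0, -0.3, 1])),
    "imaging_probe":        TrackedVector(np.array([0.2, 1.0,  0.0]), unit([0, -1, 0])),
    "instrument":           TrackedVector(np.array([0.3, 1.1,  0.1]), unit([-0.2, -1, 0])),
}
```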
Re-sampling the real-time video image, in certain embodiments, is suitably accomplished by capturing a plurality of simultaneous real-time images using the real-time imaging device, and by interpolating between the available views to obtain a resampled image viewed perpendicular to the line of sight of a wearer of the head mounted display. In one embodiment of the invention, the real-time imaging device is adapted to capture images at different rotational angles. Additionally, or alternatively, the real-time imaging device may be adapted to simultaneously capture a series of spaced-apart slices.
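A sketch of one way to interpolate such a series of spaced-apart slices onto an arbitrary viewing plane, assuming the slices have been stacked into a regular voxel grid; axes, spacing and shapes are illustrative.

```python
# A sketch of re-sampling a slice stack onto a plane perpendicular to the
# wearer's line of sight, by trilinear interpolation between the slices.
import numpy as np
from scipy.ndimage import map_coordinates

def resample_plane(volume: np.ndarray, origin: np.ndarray,
                   u_axis: np.ndarray, v_axis: np.ndarray,
                   shape=(128, 128)) -> np.ndarray:
    """Sample `volume` (z, y, x voxel grid) on the plane origin + i*u + j*v."""
    ii, jj = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    pts = (origin[:, None, None]
           + u_axis[:, None, None] * ii
           + v_axis[:, None, None] * jj)          # (3, H, W) voxel coordinates
    return map_coordinates(volume, pts, order=1)  # linear interpolation

vol = np.random.rand(16, 128, 128)                # e.g. 16 spaced-apart slices
img = resample_plane(vol, np.array([8.0, 0.0, 0.0]),
                     np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```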
In certain embodiments, the processor comprises a machine vision system adapted to identify the aforesaid markers and to populate the three-dimensional model from the visible parts of the markers. By providing a plurality of observation cameras, the accuracy of the system is increased by stereo machine vision, which enables the positions and orientations of the markers, and hence their associated components, to be determined.
The observation cameras may be fixed or moveable. In certain embodiments of the invention, the observation cameras are adapted to automatically pan, tilt, roll etc. if one of the markers moves out of view. The cameras may be configured to move so as to scan a particular field of view.
In certain embodiments of the invention, there are several head mounted displays, and each of the users may be able to select real-time imagery from their own point of view, or from the point of view of another head mounted display wearer.
A switch means, such as a foot pedal, is suitably provided to enable each wearer of each head mounted display to switch between different augmented reality views on her/his display.
An embodiment of the invention shall now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic side view of a surgical set up in accordance with a first embodiment of the invention;
Figure 2 is a schematic plan view of Figure 1;
Figure 3 is a schematic system diagram illustrating the first embodiment of invention;
Figure 4 is a schematic side view of a surgical set up in accordance with a second embodiment of the invention;
Figure 5 is a schematic plan view of Figure 4;
Figure 6 is a schematic system diagram illustrating the second embodiment of the invention;
Figure 7 is a perspective schematic view of an AR image formed by the invention using real-time ultrasonic images; and
Figure 8 is a perspective schematic view of an AR image formed by the invention using a morphed version of a prior 3D scan matched to markers in the real-time ultrasonic images.
Referring to Figures 1 and 2, a typical surgical procedure involves operating on a target structure 12 located within the body 10. The surgeon (not shown) needs to access the target structure 12 using a laparoscopic instrument 14, which is inserted into the body 10 via an entry point 16, and whose orientation 18 is adjusted so that when advanced into the body 10, it intersects the target structure 12. A real-time imaging device, in this case, an ultrasonic imaging device, comprises a probe 20 that is placed on the surface 22 of the body 10 adjacent the target structure 12. The ultrasonic imaging device captures an internal image of the body 10, in which the target structure 12 and the tip of the laparoscopic instrument 14 are visible.
It will be appreciated, by comparing Figures 1 and 2, that a 2D image from the real-time imaging device 20 provides insufficient information to enable the surgeon to correctly guide the laparoscopic instrument. Specifically, whilst, as can be seen from Figure 1, a single 2D image may indicate that the horizontal and vertical positions of the instrument 14 are correctly placed relative to the target structure 12, the surgeon cannot check that the depth of the instrument is correct (cf. Figure 2). Traditionally, the "missing dimension" is gauged by judging parallax movements within the captured 2D image, or by moving the probe 20 to obtain a real-time image from a different perspective.
The invention, however, utilises a head mounted display unit 30, which the surgeon (not shown) wears on her/his head. The head mounted display unit 30 comprises a head band 32 bearing an encoding pattern 34, which can be seen by three observation cameras 50 arranged at different positions within the operating theatre. The head mounted display unit 30 further comprises a transparent visor 36, placed in front of the surgeon's eyes, through which the body 10 is directly visible, and onto which a real-time image is displayed. The result is a ghosted internal image of the interior of the body 10 overlaid onto the "real" view - an augmented reality display. The real-time imaging probe 20 also comprises an encoding pattern 38, which can likewise be seen by the three observation cameras 50.
Referring now to Figure 3, a system 60 in accordance with the invention comprises a processor 62 operatively connected to the real-time imaging device 20 (or to several such devices), each of which produces a real-time video image 64 of the interior of the body 10. Meanwhile, several observation cameras 50 scan the scene, and capture real-time video images 66 of the encoding patterns 34, 38 of the head mounted display unit 30 and the real-time imaging probe 20.
The real-time images 66 from the observation cameras 50 are fed into a machine vision module 68 of the processor 62, which builds a 3D model 70 of the scene.
The 3D model 70 comprises a Cartesian coordinate system 72 containing a set of vectors 74, each having an origin and an orientation. The origin of each vector corresponds to the position of each observed item in the scene, and the orientation corresponds to an axis - conveniently, a "line of sight". The origin is determined by comparing, in each of the observation cameras' captured images 66, the positions of the targets, i.e. the real-time imaging probe 20 and the head mounted display unit 30. Their positions in Cartesian space can be obtained by stereoscopic analysis of the captured observation cameras' images 66, and this is carried out in the machine vision module 68 of the processor. Similarly, the orientation of each of the components can be determined, by the machine vision module 68, by comparing which parts of the encoder patterns 34, 38 are visible in each of the observation cameras' captured images 66.
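The position-recovery step described here is classical stereo triangulation. A minimal sketch, assuming two calibrated observation cameras with known 3x4 projection matrices (the calibration itself is outside the scope of this example):

```python
import cv2
import numpy as np

def triangulate_origin(P1, P2, uv1, uv2):
    """Recover a marker's 3D origin from its pixel positions in two
    observation cameras with known 3x4 projection matrices P1, P2."""
    pts1 = np.asarray(uv1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(uv2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                # Cartesian (x, y, z)
```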
The machine vision module 68 of the processor 62 thus creates the 3D model 70, in which the vectors 74 correspond to the position and line of sight 80 of the head mounted display unit 30; the position and orientation 82 of the probe 20; and the position and orientation 18 of the laparoscopic instrument 14.
The real-time images 64 from the probe 20 can thus be resampled within a resampling module 84 of the processor 62: the real-time image 64 being resampled as if from the point of view 80 of the wearer of the head mounted display 30. The resampled image is then rendered as an overlay image 88 in a rendering module 90 of the processor, which overlay image 88 is projected onto the visor 36 of the head mounted display 30 so that the surgeon can see the real-time medical image as if from her/his point of view/perspective.
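One plausible way to implement such point-of-view resampling, assuming the ultrasound image lies on a known plane in theatre coordinates and the wearer's view is modelled as a calibrated pinhole camera (the names `K_hmd`, `R_hmd`, `t_hmd` are assumptions, not taken from the disclosure), is a perspective warp:

```python
import cv2
import numpy as np

def resample_to_viewpoint(us_image, plane_corners_3d, K_hmd, R_hmd, t_hmd):
    """Warp the probe's 2D image so it appears as seen from the HMD wearer.
    plane_corners_3d: 4x3 theatre coordinates of the ultrasound plane corners."""
    h, w = us_image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Project the plane corners into the wearer's (pinhole) view.
    rvec, _ = cv2.Rodrigues(R_hmd)
    dst, _ = cv2.projectPoints(plane_corners_3d.astype(np.float64),
                               rvec, t_hmd.reshape(3, 1).astype(np.float64),
                               K_hmd, None)
    H = cv2.getPerspectiveTransform(src, dst.reshape(4, 2).astype(np.float32))
    return cv2.warpPerspective(us_image, H, (w, h))
```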
It will be noted that the laparoscopic instrument 14 also comprises an encoder pattern 92, and this too is observed by the observation cameras 50 and incorporated into the rendered image 88 that is projected onto the visor 36 of the head mounted display unit 30. Thus, even if the laparoscopic instrument 14 is out of the field of view 94 of the real-time medical imaging device 20, an image or line representing it can be overlaid into the rendered image 88. This greatly facilitates guiding the instrument 14 towards the target 12 within the body 10.
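Overlaying the instrument even when it lies outside the imaging device's field of view 94 amounts to projecting its vector from the 3D model into the wearer's view. A sketch under the same pinhole-camera assumption as above; the function and parameter names are illustrative:

```python
import cv2
import numpy as np

def draw_instrument(overlay, tip_3d, direction_3d, K, rvec, tvec, length=0.15):
    """Draw the (possibly hidden) instrument as a line in the wearer's overlay,
    even when it is outside the imaging device's field of view."""
    tip = np.asarray(tip_3d, dtype=np.float64)
    d = np.asarray(direction_3d, dtype=np.float64)
    pts = np.stack([tip, tip + length * d])            # shaft endpoints in 3D
    px, _ = cv2.projectPoints(pts, rvec, tvec, K, None)
    p0, p1 = px.reshape(2, 2)
    cv2.line(overlay, (int(p0[0]), int(p0[1])),
             (int(p1[0]), int(p1[1])), (0, 255, 0), 2)
    return overlay
```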
In use, therefore, a surgeon is able to see, in augmented reality, the interior of the body 10 as a virtual overlay on top of the exterior view of the body 10. Various controls may be made available to the surgeon, for example via a foot pedal control (not shown), to enable the apparent transparency of the rendered image in the visor 36 to be adjusted (so that either an overlay, or only the rendered image, is visible). Further, simply by moving her/his head or point of view, the rendered image 88 is automatically updated, enabling the surgeon to view the target structure 12 and the instrument 14 from any desired angle, thus obviating the need to gauge depth based on parallax or other cues (e.g. depth of field, apparent relative sizes etc.).
Turning now to Figures 4 to 7 of the drawings, a second embodiment of the invention is shown, which is broadly similar to that described previously, in which the position and orientation of a real-time imaging device 20, the head mounted display unit 30 and an instrument 14 are sensed using 6DOF sensors 100, 102, 104 rigidly affixed to each of the real-time imaging device 20, the head mounted display unit 30 and the instrument 14, respectively.
In Figures 4 to 6, it will be noted that the head mounted display unit 30 is different to that described previously inasmuch as it comprises a pair of display screens 108 located, in use, in front of the eyes of a wearer (not shown), and a pair of forward-facing cameras 110 each capturing one half of a composite stereoscopic image, which is displayed on the display screens 108. Thus, a wearer of the head mounted display 30 sees the scene in front of him/her as normal through the head mounted display 30.
The 6DOF sensors 100, 102, 104 each comprise 6-axis magnetic sensors that interact with a transmitter unit 112. The 6DOF sensors 100, 102, 104 thus each output six sensor readings, one sensor reading corresponding to each degree of freedom. The sensor readings are collected via wires or wirelessly, and are fed to inputs of a processor (not shown), whose operation shall be described in greater detail below.
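Reading such a sensor packet into the model reduces to building a homogeneous pose from the six values. A sketch assuming an (x, y, z, roll, pitch, yaw) ordering in degrees, neither of which is specified in the disclosure:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_from_6dof(reading):
    """Build a 4x4 homogeneous pose from one sensor packet
    (x, y, z, roll, pitch, yaw): one value per degree of freedom."""
    x, y, z, roll, pitch, yaw = reading
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll, pitch, yaw],
                                    degrees=True).as_matrix()
    T[:3, 3] = [x, y, z]
    return T
```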
Referring to Figure 6 of the drawings, the second embodiment comprises a head mounted display unit 30 comprising the stereoscopic cameras 110 and the stereoscopic display screens 108 described previously. The cameras 110 each capture one half of a stereoscopic image 120, which is fed to an image capture part 84 of the processor 62.
Meanwhile, the real-time imaging device 20 captures a first real-time image 122 along its point of view 82, and this image 122 is also fed into the image capture part 84 of the processor 62.
Each of the real-time imaging device 20, the head mounted display unit 30 and the instrument 14 comprises one of the 6DOF sensors 100, 102, 104, which are in range 124 of a transmitter 112. The six sensor readings, one corresponding to each degree of freedom, from each of the 6DOF sensors 100, 102, 104 are fed into a mapping part 126 of the processor 62, which calculates a three-dimensional model 70 of the positions and orientations of each of the head mounted display 30, the real-time imaging device 20 and the instrument 14.
A rendering module 90 of the processor 62 resamples the captured footage 122 from the real-time imaging device 20, as if from the point of view 80 of a wearer of the head mounted display unit 30, and outputs a re-sampled feed 122'. The rendering module 90 of the processor mixes the re-sampled feed 122' from the real-time imaging device 20 with the feeds 120 from each of the head mounted display's cameras 110 and outputs a composite image 130 corresponding to each of the feeds 120, i.e. one for each eye of the wearer of the head mounted display 30. The composite image is then displayed on the display screens 108 of the head mounted display unit 30, thus enabling the wearer of the head mounted display 30 to see an overlay of the footage 122 of the real-time imaging device, albeit re-sampled so as to appear as if it were captured along the line of sight 80 of the wearer of the head mounted display, as opposed to along the line of sight 82 of the real-time imaging device.
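The mixing stage itself can be as simple as per-eye alpha blending of the re-sampled feed 122' over each camera feed 120. A sketch; the `alpha` ghosting parameter echoes the transparency control mentioned for the first embodiment:

```python
import numpy as np

def mix_feeds(camera_frame, resampled_overlay, alpha=0.4):
    """Blend the re-sampled imaging feed over one eye's camera feed.
    Both frames must share the same shape; alpha is the overlay opacity."""
    cam = camera_frame.astype(np.float32)
    ovl = resampled_overlay.astype(np.float32)
    out = (1.0 - alpha) * cam + alpha * ovl
    return out.astype(np.uint8)

# One call per eye of the head mounted display:
# left_eye  = mix_feeds(left_cam_frame,  resampled_feed)
# right_eye = mix_feeds(right_cam_frame, resampled_feed)
```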
The system is also configured to capture the position and orientation of an instrument, such as a laparoscopic instrument 14, and a virtual representation 132 of the instrument 14 can be mixed into the composite image 130.
The result is shown schematically in Figure 7 of the drawings, which shows what a wearer of a second head mounted display unit would see, for example, in theatre. In Figure 7, a surgeon 150 is wearing a first head mounted display unit 30 similar to that described above in relation to Figures 4, 5 and 6; she holds in one hand an ultrasonic probe 20 capturing a real-time image of a surgical site 152, including a target structure 12, and in the other hand a surgical instrument 14.
The captured image 122 from the ultrasonic probe 20 is re-sampled 122' as if from the point of view shown in Figure 7, whereas, of course, the surgeon 150 would see a different re-sampled image 122' as if from her own point of view 80. The virtual reality overlay 130 comprises a representation 132 of the hidden part of the instrument 14, as well as a re-sampled version 122' of the real-time footage 122 from the ultrasonic probe 20. Also included in the virtual reality overlay 130 is a topographical grid 154 representing the surface of the patient, as well as gridlines indicating depth etc. into the body of the patient.
In Figure 7, the ultrasonic probe 20 comprises a click-wheel type button 160 that enables the surgeon 150 to move and place a cursor 162 within the AR image 130 to identify various points of reference. Suitably, the points of reference correspond to substantially fixed or immoveable features within the patient's body, such as the target structure 12.
Using the marked points of reference 162, the processor 62 can warp, stretch and/or reorientate a pre-operative 3D model of the subject such that similar points of reference in the pre-operative 3D model correspond to the cursor position 162 in 3D space. The processor can thus display in the AR depiction 130, either or both of the displacement- and point-of-view-corrected, pre-operative 3D imagery; and the point-of-view-compensated real-time video footage. This latter modification of the invention is shown schematically in Figure 8 of the drawings, in which this time, the second real-time video image 130 comprises a morphed version of a pre-operative 3D model.
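The rigid part of this fitting step (re-orientation and displacement, without the warping or stretching) is the classical Kabsch/orthogonal-Procrustes problem. A sketch, taking matched landmark arrays of shape (N, 3); the warping and stretching described above would extend this with a non-rigid term, for example thin-plate splines:

```python
import numpy as np

def fit_rigid(model_pts, marked_pts):
    """Kabsch fit: rotation R and translation t taking pre-operative model
    landmarks onto the intra-operatively marked reference points."""
    cm, cp = model_pts.mean(axis=0), marked_pts.mean(axis=0)
    H = (model_pts - cm).T @ (marked_pts - cp)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ cm
    return R, t
```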
It will be appreciated that the invention provides a number of advantages over known imaging systems insofar as it enables the surgeon 150 to re-orient the ultrasonic probe 20, and/or to move her head, to view the target 12 and/or instrument 14 from different angles. Also, since each wearer of each head mounted display unit is presented with the same set of images, albeit re-sampled from their own point of view, it enables two or more surgeons to work on the same operative site without having to duplicate equipment. The ability of a number of people to see the same augmented reality footage, albeit rendered from their individual points of view, affords a great deal of scope for teamwork and cooperation during a surgical procedure. It also offers considerable scope for cross-checking, thus avoiding, or reducing, unnecessary iterations of an "approach" towards a surgical target structure 12. Further, any suspected errors or deviations in the pre-operative 3D model can be cross-checked against real-time imagery.
In summary: this disclosure relates to an augmented reality imaging system comprising: a head mounted augmented reality display unit (30); a real-time imaging device (20), such as an ultrasound probe, that captures a real-time video image (122) of a target; a processor; and means for sensing the position and attitude of the head mounted unit (30) and the real-time imaging device (20). The system adjusts the real-time video image (122) so that it appears, in the head-mounted display unit (30), to have been taken from the point of view of the user (150). Thus, a user (150) is able to move his/her head around a point of interest (12) to better gauge the "missing dimension" in what would otherwise be a 2D image. By placing virtual markers (162) in the AR output displayed in the user's head unit (30), and by matching them to known points of interest in a subject, the system is able to correct for displacement etc. in offline (previously captured) 3D imagery (132), such as an MRI scan, so that it matches the actual position of those features.
The invention is not restricted to the details of the foregoing embodiments, which are merely exemplary of the invention. In particular, the invention has been described in fairly basic terms, but it will be appreciated that any number of head mounted display units 30 may be provided, each having its own rendered images 88; any number of instruments 14 may be used, each having its own vector 74 in the 3D model 70; and any number of real-time medical imaging devices may be used, each outputting its own real-time or near-real-time images, which may be rendered according to any desired number of users' points of view. A monitor may also be provided, which displays any one or more of the rendered 88 or real-time images 64.
Claims
1. An augmented reality imaging system comprising:
a head mounted unit comprising a display;
a real-time imaging device adapted, in use, to capture a first real-time video image of a target;
a processor; and
means for sensing the position and attitude of the head mounted unit and the real-time imaging device, wherein the processor is adapted to:
create a three-dimensional model comprising the relative positions and orientations of the head mounted unit and the real-time imaging device;
receive the first real-time video image from the real-time imaging device; and
based on the said positions and orientations in the three-dimensional model, to re-sample the first real-time video image from the real-time imaging device as a video image as viewed from the position, and along the line of sight, of a wearer of the head mounted unit; and
to output the re-sampled video image as a second real-time video image on the display of the head mounted unit.
2. The augmented reality imaging system of claim 1, further comprising means for sensing the position and orientation of an instrument, such as a surgical instrument, and the processor being adapted to insert into the second real-time video image, a representation of the instrument.
3. The augmented reality imaging system of claim 2, wherein the surgical instrument comprises an endoscopic instrument.
4. The augmented reality imaging system of any preceding claim, wherein the means for sensing the position and attitude of any one or more of the group comprising: the head mounted unit;
the real-time imaging device; and the instrument, comprises a position and attitude sensor fixedly connected, respectively, thereto.
5. The augmented reality imaging system of claim 4, wherein the position and attitude sensor comprises a six-axis sensor.
6. The augmented reality imaging system of claim 4 or claim 5, wherein the six-axis sensor comprises a magnetic and/or gyroscopic sensor.
7. The augmented reality imaging system of any preceding claim, wherein the means for sensing the position and attitude of any one or more of the group comprising: the head mounted unit; the real-time imaging device; and the instrument, comprises a plurality of observation cameras adapted to observe one or more of the head mounted unit, the real-time imaging device and the instrument; and wherein the processor is adapted to: receive footage from the observation cameras and to create a three-dimensional model comprising the relative positions and orientations of the head mounted unit and the real-time imaging device.
8. The augmented reality imaging system of claim 7, wherein any one or more of the head mounted unit, the real-time imaging device and the surgical instrument comprise any one or more of the group comprising: an encoding device; a barcode; a QR code; and an identifying marker, from which the observation cameras can determine the identity and/or position and/or orientation of the head mounted unit, the real-time imaging device or the surgical instrument, respectively.
9. The augmented reality imaging system of claim 7 or claim 8, wherein the observation cameras are adapted to automatically pan, tilt and roll if one of the markers moves out of view.
10. The augmented reality imaging system of claim 7 or claim 8, wherein the observation cameras are adapted to scan a particular field of view.
11. The augmented reality imaging system of any preceding claim, wherein the real-time imaging device comprises any one or more of the group comprising: an x-ray imager, an ultrasonic imager, a CT scanner, an MRI scanner, or any medical imaging device capable of forming a real-time image of the inside of a patient.
12. The augmented reality imaging system of claim 11, wherein the real-time imaging device comprises an ultrasound probe and wherein the real-time video image comprises an ultrasonograph.
13. The augmented reality imaging system of claim 12, wherein the ultrasound probe comprises a trackball or a scroll-wheel type input device via which a user can move and/or place one or more cursors in the second real-time video image.
14. The augmented reality imaging system of claim 13, wherein the processor is adapted to incorporate the or each placed cursor into the three-dimensional model to create a "point map" of identifiable features in the ultrasonograph.
15. The augmented reality imaging system of claim 14, wherein the processor is adapted to incorporate into the second real-time video image, a three-dimensional representation of a subject obtained previously, and to morph the three-dimensional representation of the subject obtained previously by fitting identifiable features in the three-dimensional representation of the subject obtained previously to the point map of identifiable features in the ultrasonograph.
16. The augmented reality imaging system of any preceding claim, wherein the head mounted unit comprises a transparent visor placed in front of the wearer's eyes, and an edge- or back-projection system adapted to project the re-sampled real-time video image onto the visor.
17. The augmented reality imaging system of any preceding claim, wherein the head mounted unit comprises a pair of display screens located, in use, in front of the eyes of a wearer and a pair of forward-facing cameras each capturing one half of a composite stereoscopic image, which is displayed on the display screens, along with the second real-time video image.
18. The augmented reality imaging system of any preceding claim, wherein the processor comprises a computer comprising I/O ports operatively connected to the or each of the head mounted unit, the real-time imaging device and optionally, to the surgical instrument.
19. The augmented reality imaging system of any preceding claim, wherein the three-dimensional model comprises a Cartesian coordinate system comprising a series of vectors corresponding to one or more of the head mounted unit, the real-time imaging device and the surgical instrument.
20. A method of forming an augmented reality image comprising the steps of:
determining the position and orientation of at least one head mounted unit and at least one real-time imaging device;
forming a three-dimensional model comprising a vector for each head mounted unit and each real-time imaging device, each vector comprising an origin corresponding to the position of a datum of, and an orientation corresponding to an axis of, the or each real-time imaging device and head mounted unit;
receiving real-time images from the or each real-time imaging device;
re-sampling the real-time images as if from the point of view of a wearer of the or each head mounted unit based on the determined position and orientation of the respective head mounted unit; and
outputting, as an image in a display of the head mounted unit, a resampled image as a video image viewed perpendicular to the line of sight of a wearer of the head mounted unit.
21. The method of claim 20, further comprising the step of determining the position and orientation of at least one instrument, and incorporating into the resampled image, an indication of the position and orientation of the instrument.
22. The method of claim 20 or claim 21, wherein the step of determining the position and orientation of the at least one head mounted unit and the at least one real-time imaging device comprises: monitoring the outputs of six-axis position and orientation sensors affixed to any one or more of the head mounted unit, the real-time imaging device and the instrument.
23. The method of claim 20, 21 or 22, wherein the step of determining the position and orientation of the at least one head mounted unit and the at least one real-time imaging device comprises: observing from a plurality of viewpoints, encoding markers affixed to the or each real-time imaging device, head mounted unit and/or instrument, and determining, based on stereoscopic image analysis of the relative positions and visible portions of the encoding markers, the relative positions and orientations of the or each head mounted unit, real-time imaging device and/or instrument.
24. The method of any of claims 20 to 23, further comprising the steps of: using the real-time imaging device to identify substantially immoveable or fixed points of reference within a subject; marking those points; placing the marked points within the 3D model; and morphing a previously-obtained three-dimensional model of the subject containing some or all of the same points of reference to the real-time images.
25. The method of claim 24, wherein the morphing step comprises warping, stretching and/or reorientating the previously-obtained three-dimensional model of the subject so that its points of reference overlie the real-time-identified points of reference in the real-time image.
26. The method of claim 24 or claim 25, further comprising the step of outputting in the second real-time video image, either or both of the displacement- and point-of-view-corrected, previously-obtained three-dimensional model of the subject; and the point-of-view-compensated real-time video footage.
27. The method of any of claims 20 to 26, wherein the re-sampling step comprises capturing a plurality of simultaneous real-time images using the real-time imaging device, and by interpolating between the available views, obtaining a resampled image viewed perpendicular to the line of sight of a wearer of the head mounted display.
28. The method of any of claims 20 to 27, comprising the real-time imaging device capturing images at different rotational angles.
29. The method of any of claims 20 to 28, comprising the real-time imaging device simultaneously capturing a series of spaced-apart slices.
30. The method of any of claims 20 to 29, comprising forming a plurality of augmented reality images for a corresponding plurality of head mounted units.
31. The method of claim 30, comprising toggling between the resampled images as video images viewed perpendicular to the lines of sight of wearers of the plurality of head mounted units.
32. An augmented reality imaging system, apparatus or method substantially as hereinbefore described, with reference to, and as illustrated in, the accompanying drawings.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB1510959.8A GB201510959D0 (en) | 2015-06-22 | 2015-06-22 | Augmented reality imaging System, Apparatus and Method |
PCT/GB2016/051866 WO2016207628A1 (en) | 2015-06-22 | 2016-06-22 | Augmented reality imaging system, apparatus and method |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3311251A1 true EP3311251A1 (en) | 2018-04-25 |
Family
ID=53784329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16747901.3A Withdrawn EP3311251A1 (en) | 2015-06-22 | 2016-06-22 | Augmented reality imaging system, apparatus and method |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3311251A1 (en) |
GB (1) | GB201510959D0 (en) |
WO (1) | WO2016207628A1 (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9370372B2 (en) | 2013-09-04 | 2016-06-21 | Mcginley Engineered Solutions, Llc | Drill bit penetration measurement systems and methods |
US9833244B2 (en) | 2013-11-08 | 2017-12-05 | Mcginley Engineered Solutions, Llc | Surgical saw with sensing technology for determining cut through of bone and depth of the saw blade during surgery |
AU2015312037A1 (en) | 2014-09-05 | 2017-03-02 | Mcginley Engineered Solutions, Llc | Instrument leading edge measurement system and method |
US10154239B2 (en) | 2014-12-30 | 2018-12-11 | Onpoint Medical, Inc. | Image-guided surgery with surface reconstruction and augmented reality visualization |
US10321921B2 (en) | 2015-10-27 | 2019-06-18 | Mcginley Engineered Solutions, Llc | Unicortical path detection for a surgical depth measurement system |
US10390869B2 (en) | 2015-10-27 | 2019-08-27 | Mcginley Engineered Solutions, Llc | Techniques and instruments for placement of orthopedic implants relative to bone features |
US10321920B2 (en) | 2015-11-06 | 2019-06-18 | Mcginley Engineered Solutions, Llc | Measurement system for use with surgical burr instrument |
CN111329554B (en) | 2016-03-12 | 2021-01-05 | P·K·朗 | Devices and methods for surgery |
AU2017236893A1 (en) | 2016-03-21 | 2018-09-06 | Washington University | Virtual reality or augmented reality visualization of 3D medical images |
WO2018132804A1 (en) | 2017-01-16 | 2018-07-19 | Lang Philipp K | Optical guidance for surgical, medical, and dental procedures |
EP4424267A3 (en) * | 2017-04-20 | 2024-10-30 | Intuitive Surgical Operations, Inc. | Systems and methods for constraining a virtual reality surgical system |
US11589927B2 (en) | 2017-05-05 | 2023-02-28 | Stryker European Operations Limited | Surgical navigation system and method |
EP3672501B1 (en) | 2017-08-25 | 2023-06-14 | McGinley Engineered Solutions, LLC | Sensing of surgical instrument placement relative to anatomic structures |
US11801114B2 (en) | 2017-09-11 | 2023-10-31 | Philipp K. Lang | Augmented reality display for vascular and other interventions, compensation for cardiac and respiratory motion |
US10806525B2 (en) | 2017-10-02 | 2020-10-20 | Mcginley Engineered Solutions, Llc | Surgical instrument with real time navigation assistance |
US11103314B2 (en) | 2017-11-24 | 2021-08-31 | Synaptive Medical Inc. | Methods and devices for tracking objects by surgical navigation systems |
WO2019148154A1 (en) | 2018-01-29 | 2019-08-01 | Lang Philipp K | Augmented reality guidance for orthopedic and other surgical procedures |
WO2019152269A1 (en) * | 2018-02-03 | 2019-08-08 | The Johns Hopkins University | Augmented reality display for surgical procedures |
US11191609B2 (en) | 2018-10-08 | 2021-12-07 | The University Of Wyoming | Augmented reality based real-time ultrasonography image rendering for surgical assistance |
EP3870021B1 (en) | 2018-10-26 | 2022-05-25 | Intuitive Surgical Operations, Inc. | Mixed reality systems and methods for indicating an extent of a field of view of an imaging device |
US11857378B1 (en) | 2019-02-14 | 2024-01-02 | Onpoint Medical, Inc. | Systems for adjusting and tracking head mounted displays during surgery including with surgical helmets |
US11553969B1 (en) | 2019-02-14 | 2023-01-17 | Onpoint Medical, Inc. | System for computation of object coordinates accounting for movement of a surgical site for spinal and other procedures |
DE102019116269A1 (en) * | 2019-06-14 | 2020-12-17 | apoQlar GmbH | Video image transmission device, method for live transmission and computer program |
US11529180B2 (en) | 2019-08-16 | 2022-12-20 | Mcginley Engineered Solutions, Llc | Reversible pin driver |
EP4072426A1 (en) * | 2019-12-13 | 2022-10-19 | Smith&Nephew, Inc. | Anatomical feature extraction and presentation using augmented reality |
CN111462337B (en) * | 2020-03-27 | 2023-08-18 | 咪咕文化科技有限公司 | Image processing method, device and computer readable storage medium |
US12053247B1 (en) | 2020-12-04 | 2024-08-06 | Onpoint Medical, Inc. | System for multi-directional tracking of head mounted displays for real-time augmented reality guidance of surgical procedures |
WO2022192585A1 (en) | 2021-03-10 | 2022-09-15 | Onpoint Medical, Inc. | Augmented reality guidance for imaging systems and robotic surgery |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6256529B1 (en) * | 1995-07-26 | 2001-07-03 | Burdette Medical Systems, Inc. | Virtual reality 3D visualization for surgical procedures |
KR20140112207A (en) * | 2013-03-13 | 2014-09-23 | 삼성전자주식회사 | Augmented reality imaging display system and surgical robot system comprising the same |
- 2015-06-22 GB GBGB1510959.8A patent/GB201510959D0/en not_active Ceased
- 2016-06-22 EP EP16747901.3A patent/EP3311251A1/en not_active Withdrawn
- 2016-06-22 WO PCT/GB2016/051866 patent/WO2016207628A1/en active Application Filing
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110852132A (en) * | 2019-11-15 | 2020-02-28 | 北京金山数字娱乐科技有限公司 | Two-dimensional code space position confirmation method and device |
CN110852132B (en) * | 2019-11-15 | 2023-10-03 | 北京金山数字娱乐科技有限公司 | Two-dimensional code space position confirmation method and device |
CN114253389A (en) * | 2020-09-25 | 2022-03-29 | 宏碁股份有限公司 | Augmented reality system and augmented reality display method integrating motion sensor |
CN114253389B (en) * | 2020-09-25 | 2023-05-23 | 宏碁股份有限公司 | Augmented reality system integrating motion sensor and augmented reality display method |
Also Published As
Publication number | Publication date |
---|---|
WO2016207628A1 (en) | 2016-12-29 |
GB201510959D0 (en) | 2015-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016207628A1 (en) | Augmented reality imaging system, apparatus and method | |
CN109758230B (en) | Neurosurgery navigation method and system based on augmented reality technology | |
US7774044B2 (en) | System and method for augmented reality navigation in a medical intervention procedure | |
JP5380348B2 (en) | System, method, apparatus, and program for supporting endoscopic observation | |
US8414476B2 (en) | Method for using variable direction of view endoscopy in conjunction with image guided surgical systems | |
EP2043499B1 (en) | Endoscopic vision system | |
US9289267B2 (en) | Method and apparatus for minimally invasive surgery using endoscopes | |
CN101193603B (en) | Laparoscopic ultrasound robotic surgical system | |
US9636188B2 (en) | System and method for 3-D tracking of surgical instrument in relation to patient body | |
US10955657B2 (en) | Endoscope with dual image sensors | |
US20050054895A1 (en) | Method for using variable direction of view endoscopy in conjunction with image guided surgical systems | |
CN111970986A (en) | System and method for performing intraoperative guidance | |
JP2004538538A (en) | Intraoperative image-guided neurosurgery and surgical devices with augmented reality visualization | |
WO2007115825A1 (en) | Registration-free augmentation device and method | |
US20210121238A1 (en) | Visualization system and method for ent procedures | |
JP3707830B2 (en) | Image display device for surgical support | |
WO2020145826A1 (en) | Method and assembly for spatial mapping of a model, such as a holographic model, of a surgical tool and/or anatomical structure onto a spatial position of the surgical tool respectively anatomical structure, as well as a surgical tool | |
CN116829091A (en) | Surgical assistance system and presentation method | |
US20230165640A1 (en) | Extended reality systems with three-dimensional visualizations of medical image scan slices | |
US20240164844A1 (en) | Bone landmarks extraction by bone surface palpation using ball tip stylus for computer assisted surgery navigation | |
US20230015060A1 (en) | Methods and systems for displaying preoperative and intraoperative image data of a scene | |
US12048493B2 (en) | Camera tracking system identifying phantom markers during computer assisted surgery navigation | |
CN118251188A (en) | Surgical navigation system and navigation method with improved instrument tracking | |
CN112867427A (en) | Computerized Tomography (CT) image correction using orientation and orientation (P & D) tracking assisted optical visualization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
 | 17P | Request for examination filed | Effective date: 20180122 |
 | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
 | AX | Request for extension of the european patent | Extension state: BA ME |
 | DAV | Request for validation of the european patent (deleted) | |
 | DAX | Request for extension of the european patent (deleted) | |
 | 17Q | First examination report despatched | Effective date: 20190315 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
 | 18D | Application deemed to be withdrawn | Effective date: 20190726 |