WO2009004296A1 - Non-photorealistic rendering of augmented reality - Google Patents

Non-photorealistic rendering of augmented reality

Info

Publication number
WO2009004296A1
WO2009004296A1 (PCT/GB2008/002139)
Authority
WO
WIPO (PCT)
Prior art keywords
image
transparency
npr
window
captured
Prior art date
Application number
PCT/GB2008/002139
Other languages
French (fr)
Inventor
Guang-Zhong Yang
Mirna Lerotic
Original Assignee
Imperial Innovations Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imperial Innovations Limited filed Critical Imperial Innovations Limited
Priority to AT08762451T priority Critical patent/ATE500578T1/en
Priority to JP2010514098A priority patent/JP5186561B2/en
Priority to CN200880022657.0A priority patent/CN101802873B/en
Priority to EP08762451A priority patent/EP2174297B1/en
Priority to US12/666,957 priority patent/US8878900B2/en
Priority to DE602008005312T priority patent/DE602008005312D1/en
Publication of WO2009004296A1 publication Critical patent/WO2009004296A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/503 Blending, e.g. for anti-aliasing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/62 Semi-transparency

Definitions

  • the present invention relates to a method of rendering images, in particular to provide occlusion cues, for example in medical augmented reality displays.

Abstract

A method and system for rendering a captured image of a scene are disclosed which provide see-through vision through the rendered image, for example of an augmented reality object rendered behind the scene, by assigning pixel transparency values in dependence upon captured image pixels. The method and system preserve some structure of the scene in the rendered image without requiring a model of the scene.

Description

NON-PHOTOREALISTIC RENDERING OF AUGMENTED REALITY
The present invention relates to a method of rendering images, in particular to provide occlusion cues, for example in medical augmented reality displays.
Augmented reality (AR) is becoming a valuable tool in surgical procedures. Providing real-time registered preoperative data during a surgical task removes the need to refer to off-line images and aids the registration of these to the real tissue. The visualization of the objects of interest becomes accessible through the "see-through" vision that AR provides.
In recent years, medical robots are increasingly being used in Minimally Invasive Surgery (MIS). With robotic assisted MIS, dexterity is enhanced by microprocessor controlled mechanical wrists, allowing motion scaling for reducing gross hand movements and the performance of micro-scale tasks that are otherwise not possible.
The unique operational setting of the surgical robot provides an ideal platform for enhancing the visual field with pre-operative/intra-operative images or computer generated graphics. The effectiveness and clinical benefit of AR has been well recognized in neuro and orthopedic surgery. Its application to cardiothoracic or gastrointestinal surgery, however, remains limited as the complexity of tissue deformation imposes significant challenges to the AR display.
Seamless synthesis of AR depends on a number of factors relating to the way in which virtual objects appear and visually interact with a real scene. One of the major problems in AR is the correct handling of occlusion. Although the handling of partial occlusion of the virtual and real environment can be achieved by accurate 3D reconstruction of the surgical scene, particularly with the advent of recent techniques for real-time 3D tissue deformation recovery, most surgical AR applications involve the superimposition of anatomical structures behind the exposed tissue surface. This, for example, is important for coronary bypass for which improved anatomical and functional visualization permits more accurate intra-operative navigation and vessel excision. In prostatectomy, 3D visualization of the surrounding anatomy can result in improved neurovascular bundle preservation and enhanced continence and potency rates.
Whilst providing a useful in-plane reference in stereo vision environments, traditionally overlaid AR suffers from inaccurate depth perception. Even if the object is rendered at the correct depth, the brain perceives the object as floating above the surface (see for example Johnson LG, et al, Surface transparency makes stereo overlays unpredictable: the implications for augmented reality, Studies in Health Technology and Informatics 2003, 94:131-6; and Swan JE, et al, Egocentric Depth Judgments in Optical, See-Through Augmented Reality, IEEE Transactions on Visualization and Computer Graphics 2007, 13(3):429-42).
For objects to be perceived as embedded in the tissue, our brains expect some degree of occlusion. To address the problem of depth perception in AR, a number of rendering techniques and display strategies have been developed to allow for accurate perception of 3D depth of the virtual structures with respect to the exposed tissue surface. In Sielhorst T, et al, Depth Perception - A Major Issue in Medical AR: Evaluation Study by Twenty Surgeons, Medical Image Computing and Computer-Assisted Intervention - MICCAI 2006, 364-72, the issue of depth perception in medical AR has been studied. In agreement with the two references cited above, it was found that depth perception is poor if the AR object is rendered opaquely, as it appears to float above the outer body surface even though rendered at the correct depth behind it. Two ways of improving depth perception were identified: rendering both the body surface and the AR object as transparent, or rendering the body surface with a window defined inside it such that the window provides an occlusion cue whereby the AR object can be seen within the window but is otherwise occluded by the body surface. Regarding the former approach (transparent rendering), while this may result in improved depth perception for some surfaces, in general rendering two overlaid transparent surfaces results in conflicting visual cues from occlusion such that depth perception can be poor (see for example Johnson et al cited above). The latter approach (rendering a window) has the disadvantage that all information about the body surface within the window is lost.
In Virtual Window for Improved Depth Perception in Medical AR: C. Bichlmeier, N. Navab, International Workshop on Augmented Reality environments for Medical Imaging and Computer-aided Surgery (AMI-ARCS 2006), Copenhagen, Denmark, October 2006 (available online at http://ar.in.tum.de/pub/bichlmeier2006window/bichlmeier2006window.pdf), various approaches to improving the depth perception obtained with the window approach while maintaining information about the body surface within the window have been studied. The following approaches have been considered: adapting the window shape to the shape of the body surface, rendering the window surface glass-like using highlight effects due to a virtual light source, mapping the window plane with a simple structured texture, simulating a finitely sized frame for the window and setting the background of the AR objects either transparent or opaque. A drawback of all but the last of these approaches is that a 3D model of the body surface must be known so that the window contour or surface can be rendered accordingly. Such a 3D model can be difficult to obtain reliably, in particular if the body surface is deforming or changing in other ways during imaging.
According to one aspect of the invention, there is provided a method of rendering a captured digital image as defined in claim 1.
Advantageously, by setting transparency values of a corresponding non-photorealistically rendered (NPR) image based on the captured image itself it is possible to define a partly transparent window which conserves some of the structure of the image to provide occlusion cues such that depth perception is made possible (in the case of a 2 dimensional rendering of the image) or aided (in the case of 3 dimensional rendering of the image). This approach does not require a model of the scene underlying the image as it is based on the image data itself.
As mentioned above, the NPR image may be rendered as a 2 dimensional view or a second image may be captured to define a stereoscopic view. In any event, a virtual (for example AR) object may be rendered behind the captured image.
The assignment of the transparency values may be done inside a window such that the NPR captured image remains opaque outside the window, occluding the object when it is not seen through the window. For a more natural appearance of the scene or to aid fusing the two windows in a stereoscopic view, the transparency values may be gradually blended from within the window to outside of it. The window position may be defined in dependence upon the viewer's gaze, continuously tracking the viewer's gaze or updating the position only when an update request is received from the viewer. Of course, in case of stereoscopic viewing, the windows in the two (left and right) NPR images may be offset by an amount determined by the camera positions and parameters in accordance with the stereoscopic view.
The transparency values may be determined as a function of a normalised image intensity gradient at corresponding locations in the captured image. Calculating the image intensity gradient may include determining a partial derivative with respect to an image coordinate divided by image intensity at the corresponding locations.
Determining NPR image transparency as set out above can be seen as an example of a method of setting the transparency values by defining a saliency map for an area of the NPR image and assigning transparency values as a function of values of respective corresponding locations in the same saliency map. The saliency map may be arranged to capture salient features of the image, for example features which are salient because they protrude from the background of the underlying scene or because of colour and/or intensity contrast. In particular, the saliency map may be defined as a function of local slopes in a scene underlying the image, for example as estimated based on shading in the image. The local slopes may be estimated as a function of respective normalised intensity gradients in the image.
In addition to assigning transparency values to conserve salient features as more or less opaque and make the background within an area or window more or less transparent, the saliency map may also be used to assign a colour value to a pixel of the NPR image, for example using a colour scale. In particular, the transparency and colour values may be assigned such that an object rendered behind the NPR image is perceived as being viewed through the transparent area (window) while being occluded by pixels within the area which have high values in the saliency map. In one application, the virtual object may be derived from medical imaging data, for example CT or MRI images of a tumour. In particular, the images may have been captured using a stereoscopic endoscope, for example during thoracic keyhole surgery. However, it will be understood that the rendering method described above is not limited to medical AR applications but is more generally applicable to AR applications where a virtual object is rendered out of a viewer's normal view behind a captured scene.
In a further aspect of the invention, there is provided a system for rendering a digital image as claimed in claim 21.
In yet a further aspect of the invention there is provided a robotic surgery console as defined in claim 41.
Further aspects of the invention extend to a computer program as defined in claim 42.
For the avoidance of doubt, the term NPR image (short for Non-Photorealistically Rendered image) is used here to designate the captured and processed digital image, applied for example as a texture to a plane corresponding to an image plane in a 3D computer graphics model which may also contain the AR object. Of course, this model may be rendered as a 2D or stereoscopic 3D image.
Embodiments of the invention are now described by way of example only and with reference to the accompanying drawings in which: Figure 1 is a flow diagram of a method for non-photorealistic rendering of at least part of an image, for example to reveal an object rendered behind the scene captured in the image;
Figure 2 is a flow diagram of an algorithm for rendering of a corresponding NPR image and AR object in 2D or 3D; Figure 3 depicts a corresponding system; Figure 4 depicts a mask function used in the processing; and Figure 5 depicts an example of an AR view rendered using the described method.
The underlying idea for the present method of rendering an image as applied to medical AR displays is to render an exposed anatomical surface as a translucent layer while keeping sufficient details to aid navigation and depth cueing. One embodiment is based on pq-space based Non-Photorealistic Rendering (NPR) for providing a see-through vision of the embedded virtual object whilst maintaining salient anatomical details of the exposed anatomical surface. To this end, surface geometry based on a pq-space representation is first derived, where p and q represent the slope of the imaged surface along the x, y axes, respectively. For example, this can be achieved with photometric stereo by introducing multiple lighting conditions. For deforming tissue, however, the problem is ill posed and the introduction of multiple light sources in an endoscopic set-up is not feasible. Nevertheless, the problem can be simplified for cases where both camera and a light source are near to the surface being imaged (see Rashid HU, Burger P, "Differential algorithm for the determination of shape from shading using a point light source", Image and Vision Computing 1992; 10(2):119-27, herewith incorporated herein by reference), such as in bronchoscopes and endoscopes. In such cases, the value of image intensity at coordinates x, y for a near point light source is given by

E(x, y) = S0 · ρ(x, y) · cos θ / r²   (1)

where S0 is the light source intensity constant, ρ(x, y) is the albedo or reflection coefficient, r is the distance between the light source and the surface point (x, y, z), and θ is the angle between the incident light ray and the normal n̂ to the surface. In gradient space, the normal vector to the surface is equal to

n̂ = (p, q, −1) / √(p² + q² + 1)   (2)
where p and q represent surface slopes in directions x and y respectively. For a smooth Lambertian surface in the scene, image intensity given by equation Eq. 1 can be reduced to
E(x, y) = S0 · ρ_average · (1 − x0·p0 − y0·q0) / (Z0² · (1 + x0² + y0²)^(3/2) · √(1 + p0² + q0²))   (3)

which defines the relationship between the image intensity E(x, y) at the point (x, y) and scene radiance at the corresponding surface point (x0·Z0, y0·Z0, Z0) with surface normal (p0, q0, −1), where ρ_average denotes the average albedo in a small neighborhood of the surface and S0 is the light source intensity constant.
Lambertian surface under point source illumination is an idealised surface material that satisfies two conditions: (1) it appears equally bright from all viewing directions, and (2) it reflects all incident light.
By utilizing partial derivatives of the image intensity in equation Eq. 3, the normalised partial derivatives in x, y at image location (x, y),

Rx = (1/E) · ∂E/∂x,   Ry = (1/E) · ∂E/∂y   (4)

can be written in terms of only image coordinates and local slopes. This can be rewritten as two linear equations in p0 and q0 (the normalised partial derivatives or normalised gradient Rx, Ry at image location (x, y) being determinable from the image intensity at (x, y) and its neighbourhood):

A1·p0 + B1·q0 = C1   (5)

A2·p0 + B2·q0 = C2   (6)

with

A1 = (−x0·Rx + 3)·(1 + x0² + y0²) − 3·x0²
B1 = −Rx·(1 + x0² + y0²)·y0 − 3·x0·y0
C1 = Rx·(1 + x0² + y0²) + 3·x0
A2 = −Ry·(1 + x0² + y0²)·x0 − 3·x0·y0
B2 = (−y0·Ry + 3)·(1 + x0² + y0²) − 3·y0²
C2 = Ry·(1 + x0² + y0²) + 3·y0

which gives the following expressions for p0 and q0 at each point (x, y) of the image:

p0 = (B1·C2 − B2·C1) / (B1·A2 − A1·B2)   (7)

q0 = (A2·C1 − A1·C2) / (B1·A2 − A1·B2)   (8)
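By way of illustration, the following is a minimal NumPy sketch of this pq-recovery step. It is not taken from the patent: the image array, the camera intrinsics, the central-difference scheme for the normalised gradients Rx and Ry and the function name recover_pq are assumptions, and the coefficients follow equations (4)-(8) as set out above.

```python
import numpy as np

def recover_pq(E, fx=1.0, fy=1.0, cx=None, cy=None, eps=1e-6):
    """Estimate per-pixel surface slopes (p0, q0) from a grey-scale image E of
    a Lambertian surface lit by a near point light source, following the
    pq-space relations (4)-(8) above.  Intrinsics fx, fy, cx, cy are assumed."""
    h, w = E.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy

    # Normalised image coordinates x0, y0 for every pixel.
    xs = (np.arange(w) - cx) / fx
    ys = (np.arange(h) - cy) / fy
    x0, y0 = np.meshgrid(xs, ys)

    # Normalised gradients Rx = (1/E) dE/dx, Ry = (1/E) dE/dy (eq. 4),
    # using central differences rescaled to the normalised coordinates.
    dEdy, dEdx = np.gradient(E)
    Rx = dEdx * fx / (E + eps)
    Ry = dEdy * fy / (E + eps)

    s = 1.0 + x0**2 + y0**2
    # Coefficients of the linear system A*p0 + B*q0 = C (eqs. 5 and 6).
    A1 = (-x0 * Rx + 3.0) * s - 3.0 * x0**2
    B1 = -Rx * s * y0 - 3.0 * x0 * y0
    C1 = Rx * s + 3.0 * x0
    A2 = -Ry * s * x0 - 3.0 * x0 * y0
    B2 = (-y0 * Ry + 3.0) * s - 3.0 * y0**2
    C2 = Ry * s + 3.0 * y0

    det = B1 * A2 - A1 * B2
    det = np.where(np.abs(det) < eps, eps, det)
    p0 = (B1 * C2 - B2 * C1) / det      # eq. (7)
    q0 = (A2 * C1 - A1 * C2) / det      # eq. (8)
    return p0, q0
```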
The p and q values of the imaged surface capture 3D details of the exposed anatomical structure and are used to accentuate salient features (that is, features protruding from the surface and hence having a high gradient) while making the smoothly varying background surface semi-transparent. It will be understood that the p and q values may be calculated using any suitable technique. To create the desired visual clues, surfaces of the scene that are parallel to the viewing plane (low p, q) are rendered as more or less transparent whilst sloped structures (high p, q) are accentuated and rendered more or less opaque. A measure of the surface slope is generated from the pq-values for each image point (x, y) by

S(x, y) = log(abs(p0) + abs(q0) + 1)   (9)

where high values of S(x, y) correspond to large gradients. In effect, this provides a saliency map or salient image. The logarithm squashes high values of p and q to limit the dynamic range for display purposes. A smooth background map B is created by applying a wide Gaussian filter to the saliency map S, thereby smoothing out high frequency variations in the image likely to represent noise or minor surface variations rather than "true" salient features. The salient and background images are combined using a mask such that low pixel values of S are replaced with the value of B at the corresponding (x, y) pixel, as described in detail below.
Turning to the practical application of the above saliency map, with reference to Figure 1 , in a method 2 of producing a non-photorealistic rendering of a captured image (or an area thereof) for use as a texture projected on an image plane in a computer graphics model, at step 4 a region of interest (ROI) is defined for NPR processing. This may include the entire image or a sub area thereof. At step 6 the region of interest is pre-processed including converting the pixel colour values (if in colour) to grey-scale and applying a mild smoothing function such as a 3x3 pixels Gaussian. At step 8, the saliency map is calculated from the captured image in the ROI as described above using any known method for calculating the partial derivatives, for example simply differencing the pixel values of the pixel in question with a neighbouring pixel in the relevant (e.g. x) direction. The map is calculated from the partial derivative and image intensity (for example grey-scale value) at each pixel or location in the ROI.
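As a minimal sketch of this preprocessing step (the grey-scale conversion weights and a small Gaussian sigma standing in for the 3x3 kernel are assumptions, and the function name is illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_roi(rgb_roi):
    """Step 6: convert the colour region of interest to grey-scale and apply a
    mild smoothing comparable to a 3x3 Gaussian before the pq computation."""
    grey = (0.299 * rgb_roi[..., 0] +
            0.587 * rgb_roi[..., 1] +
            0.114 * rgb_roi[..., 2])
    return gaussian_filter(grey, sigma=0.8)  # small sigma, roughly 3x3 support
```

The smoothed grey-scale ROI can then be fed to a pq-recovery routine such as the recover_pq sketch above, from which the saliency map is built.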
At step 10 the saliency map is denoised by combining it with the smooth background map B (for example B can be derived from S by applying a broad Gaussian filter of 6x6 pixels with a spread of 7 pixels). S and B are combined in accordance with a mask function as

S(x, y) = mask(x, y)·S(x, y) + (1 − mask(x, y))·B(x, y)   (10)

such that the saliency map is blended with the background map whereby S dominates where S is high and B dominates where S is low. The mask function can be defined using splines with a few control points (for example a Catmull-Rom spline). A suitable mask function is depicted in Figure 4 as a function of S(x, y). Of course, other mask functions can also be employed, for example a step function with a suitable threshold value for S(x, y), or a suitable polynomial function.
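Continuing the illustrative sketches above, the fragment below covers equations (9) and (10): the saliency map, the smoothed background map B and the mask blend. The Gaussian sigma, the percentile thresholds and the smoothstep used in place of a spline-based mask are assumptions, not the patent's exact choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(p0, q0):
    """Equation (9): large local slopes give high saliency."""
    return np.log(np.abs(p0) + np.abs(q0) + 1.0)

def denoise_saliency(S, background_sigma=7.0, lo=None, hi=None):
    """Equation (10): blend S with a heavily smoothed background map B so that
    low-saliency (likely noisy) pixels fall back to the background.  A
    smoothstep stands in for the spline-defined mask of the description."""
    B = gaussian_filter(S, sigma=background_sigma)

    # Mask rises from 0 to 1 as the saliency goes from 'lo' to 'hi'.
    lo = np.percentile(S, 40) if lo is None else lo
    hi = np.percentile(S, 80) if hi is None else hi
    t = np.clip((S - lo) / (hi - lo + 1e-6), 0.0, 1.0)
    mask = t * t * (3.0 - 2.0 * t)  # smoothstep mask(x, y)

    return mask * S + (1.0 - mask) * B
```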
At step 12, the pixels within the denoised region of interest are assigned colour values in accordance with the saliency map S(x, y) using a colour scale. A suitable colour scale can range from black for the minimum value of S through blue to white for the maximum value. These artificial colours are applied to the NPR image in accordance with a window function, for example a radial window function f(r) such that the artificial colour is applied within the window and the original colour of the image remains outside the window. One example of f(r) is a step function, in which case the window would have a sharp edge discretely switching from artificial to image colour. To achieve a smooth transition, f(r) defines a transition region in another example:

f(r): smooth radial window function, increasing gradually from 0 inside the window to 1 outside it across a transition region   (11)

where r² = (x − x_center)² + (y − y_center)² and r0 determines the window size. By defining a smooth transition, fusing the two (left and right) windows in a stereo image (see below) may be helped if applicable.
At step 14, the NPR image pixel transparency within the same window is set in accordance with S(x, y) and the window function f(r) such that the lowest value of S corresponds to fully (or nearly fully, say 95%) transparent and the maximum value of S corresponds to fully (or nearly fully, say 95%) opaque and the transparency values are blended with the remainder of the NPR image using f(r), for example.
For the avoidance of doubt, an example of the blending operation using f(r) can be seen as blending the processed Non-Photorealistic Rendered Image (NPI) with the Captured Image (CI) to arrive at the NPR image as follows:
NPR image = f(r) · CI + (1 − f(r)) · NPI   (12)
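The sketch below illustrates steps 12 to 14 and equations (11) and (12): a smooth radial window f(r), an artificial black-blue-white colour scale driven by the denoised saliency map, per-pixel opacity between roughly 5% and 95%, and the final blend with the captured image CI. The particular ramp chosen for f(r) and the colour mapping are assumptions consistent with the description rather than the patent's exact formulas; the captured image is assumed to be float RGB in [0, 1].

```python
import numpy as np

def radial_window(shape, center, r0, transition=30.0):
    """A smooth f(r): 0 well inside the window, 1 well outside, with a gradual
    transition of the given width (in pixels) beyond radius r0."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.sqrt((x - center[0]) ** 2 + (y - center[1]) ** 2)
    t = np.clip((r - r0) / transition, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def compose_npr(captured_rgb, S_denoised, center, r0):
    """Blend the captured image CI with the non-photorealistic image NPI
    (eq. 12) and derive per-pixel transparency from the saliency map."""
    f = radial_window(S_denoised.shape, center, r0)[..., None]

    # Artificial colour scale: black -> blue -> white with increasing saliency.
    s = (S_denoised - S_denoised.min()) / (np.ptp(S_denoised) + 1e-6)
    npi = np.stack([np.clip(2 * s - 1, 0, 1),        # red
                    np.clip(2 * s - 1, 0, 1),        # green
                    np.clip(2 * s, 0, 1)], axis=-1)  # blue

    rgb = f * captured_rgb + (1.0 - f) * npi         # eq. (12)

    # Opacity: ~5% for the least salient pixels, ~95% for the most salient,
    # blended back to fully opaque outside the window using f(r).
    alpha_inside = 0.05 + 0.90 * s
    alpha = f[..., 0] + (1.0 - f[..., 0]) * alpha_inside
    return np.dstack([rgb, alpha])   # RGBA texture for the scene plane
```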
The placement of the window (that is, the origin of f(r) in the above example, (x_center, y_center)) can be set in a number of ways. For example, in some embodiments the window position is predefined based on the position of an AR object to be rendered behind the image plane, in particular if there is only a single object to be rendered within the field of view. Alternatively, in some embodiments the window position may be determined based on a viewer's gaze (for example detected using an eye tracker device), either updated continuously or in accordance with a viewer's fixations when the viewer issues a request for the window position to be updated (or, in some embodiments, if no window is presently displayed, the window is displayed in accordance with the viewer's fixation when the window is switched on). This dynamic window display may be particularly useful if a full AR scene is being rendered rather than a single object.
With reference to figures 2 and 3, a system and method for displaying a captured image of a scene together with an augmented reality object is now described. At step 16, an image of the scene is captured by an imaging device 22 and transmitted to a processing unit 24. The captured image is treated as a texture projected onto a scene plane object to be displayed in a computer graphics model in some embodiments. The computer graphics model may be implemented in a number of ways, for example using the OpenGL graphics library in a C++ program in some embodiments.
The correct perspective (and disparity in the case of 3D rendering) is already contained in the captured image (left and right captured image in the case of a stereo camera imaging device) but the depth information about the augmented reality object in the computer graphics model is important to consistently handle occlusion.
For stereo cameras and displays, the position of the augmented reality object in the left and right view has to match the disparity in the captured images. This is achieved through stereo camera calibration, which provides the necessary transformations between the views. In the combined scene the captured images are displayed in the same position for both left and right views as they already contain the disparity, whereas the augmented reality object is rendered at different positions for left and right views so as to match the disparity of the captured images. The transformations obtained as part of a stereo camera calibration are utilized using OpenGL transformations in some embodiments.
In some exemplary implementations, an OpenGL window is opened for each (left and right) display and the NPR image is displayed (for example, applied to a scene plane object acting like a projection screen) in each window at an appropriate depth (see below). The AR object is then rendered for each window using the camera calibration for the respective (left and right) camera. Similarly, the relative position of the ROI and/or window in each of the OpenGL windows is set using the camera calibration data to be consistent with the stereoscopic view in some implementations. In this respect, smoothly blended windows will assist in fusing the two windows in a stereoscopic view even if the window displacement is not exactly in accordance with the stereoscopic view. For example, one approximation suitable for a stereoscopic endoscope camera is to use the camera displacement as the window displacement in some implementations.
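As a small illustration of the window offset for the right view, the helper below uses a pinhole disparity model; the parameter names and the availability of a depth estimate are assumptions, and the cruder approximation mentioned above (reusing the camera displacement directly) can be substituted where depth is unknown.

```python
def right_window_centre(left_centre, baseline_mm, focal_px, depth_mm):
    """Offset the window centre for the right view by the horizontal disparity
    implied by a simple pinhole stereo model: disparity = f * B / Z."""
    disparity_px = focal_px * baseline_mm / depth_mm
    return (left_centre[0] - disparity_px, left_centre[1])
```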
To ensure that each pixel is coloured correctly, the z coordinate of the NPR image (the scene object plane) must be closer to the camera position in the computer graphics model than the position of the AR object in the camera frame such that pixels take the colour values of the scene object where the scene object occludes the AR object behind it (or where low transparency parts of the scene object dominate the colouring of the corresponding pixels). Since the occlusion cues provided by the non-photorealistic processing discussed above are only necessary when the AR object lies behind the surface captured by the imaging device 22 (otherwise the real object would be visible in the captured image), it is assumed for rendering in some embodiments that the scene object is always in front of the AR object and, for example, the frontal plane of the viewing frustum of the computer graphics model is used as the scene object on which the captured image(s) are projected in some implementations. This ensures correct handling of occlusion without influencing the perspective or disparity of the captured image(s), which are defined by the real camera positioning.
Alternatively, in some implementations the NPR image is positioned at or close to the camera focal plane or the depth of the scene plane could be recovered using range finding to set the NPR image depth (see for example Stoyanov D., et al, Computer Aided Surgery July 2005, 10(4): 199-208, herewith incorporated herein by reference). Techniques which are used for depth recovery in some embodiments include the use of the observer's vergence from eye tracking (Mylonas GP et al, Proceedings of the second International Workshop on Medical Imaging and Augmented Reality, MIAR (2004), Beijing, 311-319), depth recovery from stereo such as sparse techniques, shape from shading or a combination of the two (Stoyanov D et al, MICCAI (2) 2004: 41-48) or the use of structured light (Koninckx TP and Van Gool L, IEEE PAMI, vol28, no. 3, 2006), fiducial markers or laser range finding (Mitsuhiro H et al, Medical Image Analysis 10 (2006) 509-519), all herewith incorporated herein by reference.
At step 18 the location of the virtual AR object in the computer graphics model is determined. An AR object input 26 provides coordinate data of the AR object obtained from medical imaging, for example from MRI or CT data. This data must be expressed in coordinates in a frame of reference fixed on a patient's body, for example using fiducial markers fixed on the patient's body before the images are obtained to convert the image data from the medical imaging device frame to a body frame of reference. For accurate display of the AR object together with the captured scene image, the AR object's coordinates from the body frame of reference defined by the fiducial markers need to be transformed into the imaging device camera frame of reference from reference frame input 28, which tracks both camera position and orientation as well as the fiducial markers, for example by tracking the fiducial markers using a 3 dimensional tracking device to determine the position and orientation of the body frame of reference relative to the camera frame of reference, as is well known in the art.
If the camera position is known (for example in the case of a robotic surgery console in which the location of the camera can be derived from the position of the robotic arm supporting the camera) this known position in a coordinate frame fixed to the operating room is used together with the tracker data to perform the necessary coordinate transformation in some implementations particularly relevant to applications in a robotic surgery console. In other setups where the cameras are more mobile (such as in a head mounted video see-through arrangement) camera position also needs to be tracked to be able to perform the required coordinate transform. Details of the required measurements and transformation are described in Vogt S. et al, Reality Augmentation for Medical Procedures: System Architecture, Single Camera Marker Tracking and System Evaluation, International Journal of Computer Vision 2006, 70(2): 179-190, herewith incorporated herein by reference.
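A minimal sketch of the coordinate chain described above is given below: AR object points are mapped from the body (fiducial) frame into the camera frame by composing 4x4 homogeneous transforms. The transform names are hypothetical; in practice they come from the tracking device and the camera calibration.

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def body_to_camera(points_body, T_tracker_from_body, T_tracker_from_camera):
    """Map Nx3 AR object points from the body (fiducial) frame of reference
    into the camera frame: X_cam = inv(T_tracker_from_camera) @ T_tracker_from_body @ X_body."""
    T_camera_from_body = np.linalg.inv(T_tracker_from_camera) @ T_tracker_from_body
    pts_h = np.hstack([points_body, np.ones((len(points_body), 1))])
    return (T_camera_from_body @ pts_h.T).T[:, :3]
```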
At step 2 (it will be understood that the order of steps is not limited to the one shown in figure 2 but the steps can be performed in any order, of course subject to the constraint that an image must be captured before it can be processed), NPR processing of the captured image is carried out as described above with reference to figure 1, possibly taking an input about the window position from window position input 30 which, in some embodiments, includes an eye tracker and user interface for switching the window on or off and selecting a window position, for the window described above.
Once the captured image has been processed and applied as an NPR image to the scene object and the 3D location of the AR object has been defined as described above, the scene object and AR object are rendered for display on the display device 32 at step 20. This may be a 2D view of the corresponding computer graphics model where the display device is a simple monitor or a 3D view consisting of a left and right image where the display device is stereo capable.
In some embodiments, the system and method described above are incorporated in a minimally invasive robotic surgery console, for example the da Vinci robotic surgical console by Intuitive Surgical, Inc, Mountain View, USA. The console provides robotic manipulation for remotely controlling minimally invasive surgical tools and stereo visual feedback via a fixed position stereo display device providing respective left and right images from a stereoscopic endoscope to each eye of the operator. The captured images are processed as described above and can then be rendered together with an AR object (e.g. representing a tumour).
Figures 5 a and b show respective left and right eye views of lung tissue from a robotic assisted lung lobectomy. Figures 5 c and d show the same view with a transparent AR overlay of an AR object in which both the scene and the object are rendered transparently as in the see-through video approach of Vogt et al, referenced above. Figures 5 e and f depict respective left and right views of the scene of Figures 5 a and b processed using the method described above together with the AR object, and Figures 5 g and h show the views of a and b and of e and f, respectively, combined using a smooth radial window. As can clearly be seen, the window provides for a more natural interpretation of the 3 dimensional scene as the AR object is seen through the window while the preserved features in the window maintain reference information for the surgeon and also provide additional depth cues by occlusion.
It will be understood that many modifications to the embodiments described above are possible. For example, more than one object may be displayed and the order of the method steps as described above can be altered at will within the constraints that certain steps require the result of certain previous steps. Moreover, the above-described method for determining transparency values, which preserves salient features of an image while providing see-through vision to an underlying object at the same time, will be applicable to many other kinds of scenes, objects and applications than the ones described above.
It will, of course, be understood that, although particular embodiments have just been described, the claimed subject matter is not limited in scope to a particular embodiment or implementation. For example, one embodiment may be in hardware, such as implemented to operate on a device or combination of devices, for example, whereas another embodiment may be in software. Likewise, an embodiment may be implemented in firmware, or as any combination of hardware, software, and/or firmware, for example. Likewise, although claimed subject matter is not limited in scope in this respect, one embodiment may comprise one or more articles, such as a carrier or storage medium or storage media. The storage media, such as one or more CD-ROMs, solid state memory, magneto-optical disks and/or magnetic disks or tapes, for example, may have stored thereon instructions that, when executed by a system, such as a computer system, computing platform, or other system, for example, may result in an embodiment of a method in accordance with claimed subject matter being executed, such as one of the embodiments previously described, for example. One embodiment may comprise a carrier signal on a telecommunications medium, for example a telecommunications network. Examples of suitable carrier signals include a radio frequency signal, an optical signal, and/or an electronic signal.
As one potential example, a computing platform or computer may include one or more processing units or processors, one or more input/output devices, such as a display, a keyboard and/or a mouse, and/or one or more memories, such as static random access memory, dynamic random access memory, flash memory, and/or a hard drive.
In the preceding description, various aspects of claimed subject matter have been described. For purposes of explanation, specific numbers, systems and/or configurations were set forth to provide a thorough understanding of claimed subject matter. However, it should be apparent to one skilled in the art having the benefit of this disclosure that claimed subject matter may be practiced without the specific details. In other instances, well known features were omitted and/or simplified so as not to obscure the claimed subject matter. While certain features have been illustrated and/or described herein, many modifications, substitutions, changes and/or equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and/or changes as fall within the scope of claimed subject matter.

Claims

CLAIMS:
1. A method of rendering a captured digital image captured by a camera and defined by a plurality of image pixels as a non-photorealistically rendered NPR image defined by a plurality of NPR image pixels each having an associated NPR transparency value, wherein the transparency value of at least some of the NPR image pixels is determined in dependence upon corresponding captured image pixels.
2. A method as claimed in claim 1 including rendering a virtual object behind the NPR image.
3. A method as claimed in claim 1 or claim 2 including rendering a further captured image as a further NPR image to define a stereoscopic view.
4. A method as claimed in claim 1, claim 2, or claim 3 including defining a window within the NPR image and assigning the transparency values within the window in dependence upon corresponding captured image pixels.
5. A method as claimed in claim 4 including setting transparency values to opaque outside the window.
6. A method as claimed in claim 5 including gradually blending the transparency values from within the window to outside of it.
7. A method as claimed in any one of claims 4 to 6 including defining a window position in dependence upon a viewer's gaze.
8. A method as claimed in claim 7 in which the window position is updated when an update request is received from the viewer.
9. A method as claimed in claim 7 in which the window position is updated continuously to track the viewer's gaze.
10. A method as claimed in any one of claims 4 to 7 when dependent on claim 3 including defining a further window in which transparency values are assigned in the further NPR image, the further window being positioned at an offset with respect to the window in accordance with the stereoscopic view.
11. A method as claimed in any one of the preceding claims including determining the transparency values as a function of a normalised image intensity gradient at corresponding locations in the captured image.
12. A method as claimed in claim 11 in which calculating the normalised gradient includes determining a partial derivative with respect to an image coordinate divided by image intensity at the corresponding locations in the captured image.
13. A method as claimed in any one of the preceding claims including defining a saliency map for an area of the NPR image and assigning a transparency value to a pixel of the NPR image as a function of a corresponding value of the saliency map.
14. A method as claimed in claim 13 in which the saliency map is defined as a function of local slopes in a scene underlying the captured image.
15. A method as claimed in claim 14 in which the local slopes are estimated based on shading in the captured image.
16. A method as claimed in claim 14 or claim 15 in which the local slopes are estimated as a function of corresponding normalised intensity gradients in the captured image.
17. A method as claimed in any one of claims 13 to 16 including assigning a colour value to a pixel of the NPR image as a function of a corresponding value of the saliency map.
18. A method as claimed in claim 17 in which the transparency and colour values are assigned such that an object rendered behind the NPR image is being perceived as being viewed through the area where values of the saliency map are lower while being occluded where the saliency map has higher values.
19. A method as claimed in any one of the preceding claims when dependent on claim 2 in which the object is derived from medical imaging data.
20. A method as claimed in claim 19 in which the image has been captured using a stereoscopic endoscope.
21. A system for rendering a captured digital image captured by a camera and defined by a plurality of image pixels as a NPR image defined by a plurality of NPR image pixels each having an associated transparency value, the system including a transparency calculator arranged to calculate the transparency value of at least some of the NPR image pixels in dependence upon corresponding captured image pixels.
22. A system as claimed in claim 21 , the system further being arranged to render a virtual object behind the NPR image.
23. A system as claimed in claims 21 or 22 which is arranged to render a further captured image using the transparency calculator as a further NPR image to define a stereoscopic view.
24. A system as claimed in claim 21, 22 or 23 in which the transparency calculator is arranged to assign the transparency values within a window within the NPR image.
25. A system as claimed in claim 24 which is arranged to set transparency values to opaque outside the window.
26. A system as claimed in claim 25 which is arranged to gradually blend the transparency values from within the window to outside of it.
27. A system as claimed in claims 24 to 26 which includes a windowing module having an input representative of a viewer's gaze for defining a window position in dependence upon the viewer's gaze.
28. A system as claimed in claim 27 in which the windowing module is arranged to update the window position when an update request is received from the viewer.
29. A system as claimed in claim 27 in which the windowing module is arranged to continuously update the window position to track the viewer's gaze.
30. A system as claimed in any one of claims 24 to 29 when dependent on claim 23 in which the transparency calculator is arranged to assign transparency values inside a further window in the further NPR image, the further window being positioned at an offset with respect to the window in the NPR image in accordance with the stereoscopic view.
31. A system as claimed in any one of claims 21 to 30 in which the transparency calculator is arranged to determine the transparency values as a function of a normalized image intensity gradient at respective corresponding locations in the captured image.
32. A system as claimed in claim 31 in which the transparency calculator is arranged to calculate the normalised image intensity gradients by determining a partial derivative with respect to an image coordinate divided by an image intensity at the respective corresponding locations.
33. A system as claimed in any one of claims 21 to 32 in which the transparency calculator is arranged to define a saliency map for an area of the captured image and to assign a transparency value to a pixel of the NPR image as a function of a corresponding value of the saliency map.
34. A system as claimed in claim 33 in which the transparency calculator is arranged to define the saliency map as a function of local slopes in the scene underlying the captured image.
35. A system as claimed in claim 34 in which the transparency calculator is arranged to estimate the local slopes based on shading in the captured image.
36. A system as claimed in claim 34 or claim 35 in which the transparency calculator is arranged to estimate the local slopes as a function of respective normalised intensity gradients in the captured image.
37. A system as claimed in any one of claims 33 to 36 which includes a colour calculator arranged to assign a colour value to a pixel of the NPR captured image as a function of a corresponding value of the saliency map.
38. A system as claimed in claim 37 in which the NPR captured image transparency calculator and the colour calculator are arranged to assign the respective values such that an object rendered behind the NPR captured image is being perceived as being viewed through the area where the saliency map has lower values while being occluded where the saliency map has higher values.
39. A system as claimed in any one of claims 21 to 38 when dependent on claim 22 in which the virtual object is derived from medical imaging data.
40. A system as claimed in claim 39 in which the image has been captured using a stereoscopic endoscope.
41. A robotic surgery console for minimally invasive surgery including a stereoscopic endoscope for capturing stereoscopic images of a surgical scene and a stereoscopic viewing arrangement for viewing the captured images, the console including a system as claimed in any one of claims 21 to 40 arranged to render images received from the stereoscopic endoscope and display them on the stereoscopic viewing arrangement.
42. A computer program which, when run on a computer, implements a method as claimed in any one of claims 1 to 20.
43. A computer readable medium or carrier signal encoding a computer program as claimed in claim 42.
PCT/GB2008/002139 2007-06-29 2008-06-23 Non-photorealistic rendering of augmented reality WO2009004296A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
AT08762451T ATE500578T1 (en) 2007-06-29 2008-06-23 NON-PHOTO REALISTIC REPRESENTATION OF AUGMENTED REALITY
JP2010514098A JP5186561B2 (en) 2007-06-29 2008-06-23 Non-realistic rendering of augmented reality
CN200880022657.0A CN101802873B (en) 2007-06-29 2008-06-23 Non-photorealistic rendering of augmented reality
EP08762451A EP2174297B1 (en) 2007-06-29 2008-06-23 Non-photorealistic rendering of augmented reality
US12/666,957 US8878900B2 (en) 2007-06-29 2008-06-23 Non photorealistic rendering of augmented reality
DE602008005312T DE602008005312D1 (en) 2007-06-29 2008-06-23 NON-PHOTOREALISTIC PLAYBACK OF EXTENDED REALITY

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0712690.7 2007-06-29
GBGB0712690.7A GB0712690D0 (en) 2007-06-29 2007-06-29 Image processing

Publications (1)

Publication Number Publication Date
WO2009004296A1 true WO2009004296A1 (en) 2009-01-08

Family

ID=38420978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2008/002139 WO2009004296A1 (en) 2007-06-29 2008-06-23 Non-photorealistic rendering of augmented reality

Country Status (10)

Country Link
US (1) US8878900B2 (en)
EP (1) EP2174297B1 (en)
JP (1) JP5186561B2 (en)
KR (1) KR20100051798A (en)
CN (1) CN101802873B (en)
AT (1) ATE500578T1 (en)
DE (1) DE602008005312D1 (en)
ES (1) ES2361228T3 (en)
GB (1) GB0712690D0 (en)
WO (1) WO2009004296A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011126459A1 (en) * 2010-04-08 2011-10-13 Nanyang Technological University Image generating devices, graphics boards, and image generating methods
CN102271262A (en) * 2010-06-04 2011-12-07 三星电子株式会社 Multithread-based video processing method for 3D (Three-Dimensional) display
EP2441504A3 (en) * 2010-10-15 2013-07-24 Nintendo Co., Ltd. Storage medium recording image processing program, image processing device, image processing system and image processing method
EP2706508A1 (en) * 2012-09-10 2014-03-12 BlackBerry Limited Reducing latency in an augmented-reality display
US9576397B2 (en) 2012-09-10 2017-02-21 Blackberry Limited Reducing latency in an augmented-reality display
WO2017165566A1 (en) * 2016-03-25 2017-09-28 The Regents Of The University Of California High definition, color images, animations, and videos for diagnostic and personal imaging applications

Families Citing this family (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR112012007115A2 (en) * 2009-10-02 2020-02-27 Koninklijke Philips Electronics N.V. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL, METHOD OF DECODING A 3D VIDEO SIGNAL, ENCODER FOR ENCODING A 3D VIDEO DATA SIGNAL, DECODER FOR DECODING A 3D VIDEO DATA SIGNAL, COMPUTER PROGRAM PRODUCT FOR PRODUCT ENCODE A VIDEO DATA SIGNAL, COMPUTER PROGRAM PRODUCT TO DECODE A VIDEO SIGNAL, 3D VIDEO DATA SIGNAL, AND DIGITAL DATA HOLDER
US20110287811A1 (en) * 2010-05-21 2011-11-24 Nokia Corporation Method and apparatus for an augmented reality x-ray
KR20110137634A (en) * 2010-06-17 2011-12-23 삼성전자주식회사 Method for generating sketch image and display apparaus applying the same
KR101188715B1 (en) * 2010-10-04 2012-10-09 한국과학기술연구원 3 dimension tracking system for surgery simulation and localization sensing method using the same
US8514295B2 (en) * 2010-12-17 2013-08-20 Qualcomm Incorporated Augmented reality processing based on eye capture in handheld device
US20120249416A1 (en) * 2011-03-29 2012-10-04 Giuliano Maciocci Modular mobile connected pico projectors for a local multi-user collaboration
KR20120117165A (en) * 2011-04-14 2012-10-24 삼성전자주식회사 Method of generating 3-dimensional image and endoscope apparatus using the same
EP2521097B1 (en) * 2011-04-15 2020-01-22 Sony Interactive Entertainment Europe Limited System and Method of Input Processing for Augmented Reality
US8970693B1 (en) * 2011-12-15 2015-03-03 Rawles Llc Surface modeling with structured light
DE102011089233A1 (en) * 2011-12-20 2013-06-20 Siemens Aktiengesellschaft Method for texture adaptation in medical image for repairing abdominal aorta aneurysm on angiography system for patient, involves adjusting portion of image texture that is designed transparent such that visibility of object is maintained
US9734633B2 (en) 2012-01-27 2017-08-15 Microsoft Technology Licensing, Llc Virtual environment generating system
US20150049177A1 (en) * 2012-02-06 2015-02-19 Biooptico Ab Camera Arrangement and Image Processing Method for Quantifying Tissue Structure and Degeneration
CN102663788B (en) * 2012-04-12 2014-09-10 云南大学 Pen light-colored artistic effect drawing method based on unreality feeling
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
CN104346777B (en) * 2013-08-09 2017-08-29 联想(北京)有限公司 A kind of method and device for adding real enhancement information
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN110363840A (en) * 2014-05-13 2019-10-22 河谷控股Ip有限责任公司 It is rendered by the augmented reality content of albedo model, system and method
KR101652888B1 (en) * 2014-08-20 2016-09-01 재단법인 아산사회복지재단 Method for displaying a surgery instrument by surgery navigation
CA2960426A1 (en) * 2014-09-09 2016-03-17 Nokia Technologies Oy Stereo image recording and playback
US10013808B2 (en) 2015-02-03 2018-07-03 Globus Medical, Inc. Surgeon head-mounted display apparatuses
GB2536650A (en) 2015-03-24 2016-09-28 Augmedics Ltd Method and system for combining video-based and optic-based augmented reality in a near eye display
US10241569B2 (en) 2015-12-08 2019-03-26 Facebook Technologies, Llc Focus adjustment method for a virtual reality headset
US10445860B2 (en) 2015-12-08 2019-10-15 Facebook Technologies, Llc Autofocus virtual reality headset
WO2017114834A1 (en) 2015-12-29 2017-07-06 Koninklijke Philips N.V. System, controller and method using virtual reality device for robotic surgery
EP3211599A1 (en) * 2016-02-29 2017-08-30 Thomson Licensing Adaptive depth-guided non-photorealistic rendering method, corresponding computer program product, computer-readable carrier medium and device
US11106276B2 (en) 2016-03-11 2021-08-31 Facebook Technologies, Llc Focus adjusting headset
JP6493885B2 (en) * 2016-03-15 2019-04-03 富士フイルム株式会社 Image alignment apparatus, method of operating image alignment apparatus, and image alignment program
JP6932135B2 (en) * 2016-03-16 2021-09-08 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Computational device for superimposing laparoscopic images and ultrasonic images
US10379356B2 (en) 2016-04-07 2019-08-13 Facebook Technologies, Llc Accommodation based optical correction
JP6698824B2 (en) * 2016-04-11 2020-05-27 富士フイルム株式会社 Image display control device, method and program
US10429647B2 (en) 2016-06-10 2019-10-01 Facebook Technologies, Llc Focus adjusting virtual reality headset
CN106780760A (en) * 2016-12-12 2017-05-31 大连文森特软件科技有限公司 System is viewed and emulated in a kind of plastic operation based on AR virtual reality technologies
US10310598B2 (en) 2017-01-17 2019-06-04 Facebook Technologies, Llc Varifocal head-mounted display including modular air spaced optical assembly
US10628995B2 (en) 2017-04-17 2020-04-21 Microsoft Technology Licensing, Llc Anti-aliasing of graphical elements defined based on functions
WO2019139935A1 (en) 2018-01-10 2019-07-18 Covidien Lp Guidance for positioning a patient and surgical robot
US20190254753A1 (en) 2018-02-19 2019-08-22 Globus Medical, Inc. Augmented reality navigation systems for use with robotic surgical systems and methods of their use
EP3787543A4 (en) 2018-05-02 2022-01-19 Augmedics Ltd. Registration of a fiducial marker for an augmented reality system
US11410398B2 (en) 2018-11-21 2022-08-09 Hewlett-Packard Development Company, L.P. Augmenting live images of a scene for occlusion
US11766296B2 (en) 2018-11-26 2023-09-26 Augmedics Ltd. Tracking system for image-guided surgery
US10818028B2 (en) * 2018-12-17 2020-10-27 Microsoft Technology Licensing, Llc Detecting objects in crowds using geometric context
CN110009720B (en) * 2019-04-02 2023-04-07 阿波罗智联(北京)科技有限公司 Image processing method and device in AR scene, electronic equipment and storage medium
US10881353B2 (en) * 2019-06-03 2021-01-05 General Electric Company Machine-guided imaging techniques
US11980506B2 (en) 2019-07-29 2024-05-14 Augmedics Ltd. Fiducial marker
KR102216587B1 (en) * 2019-10-24 2021-02-17 주식회사 레티널 Optical device for augmented reality which can prevent ghost images
US11992373B2 (en) 2019-12-10 2024-05-28 Globus Medical, Inc Augmented reality headset with varied opacity for navigated robotic surgery
US12133772B2 (en) 2019-12-10 2024-11-05 Globus Medical, Inc. Augmented reality headset for navigated robotic surgery
US11382712B2 (en) 2019-12-22 2022-07-12 Augmedics Ltd. Mirroring in image guided surgery
EP4093311A4 (en) * 2020-01-22 2023-06-14 Beyeonics Surgical Ltd. System and method for improved electronic assisted medical procedures
US11464581B2 (en) 2020-01-28 2022-10-11 Globus Medical, Inc. Pose measurement chaining for extended reality surgical navigation in visible and near infrared spectrums
US11382699B2 (en) 2020-02-10 2022-07-12 Globus Medical Inc. Extended reality visualization of optical tool tracking volume for computer assisted navigation in surgery
US11207150B2 (en) 2020-02-19 2021-12-28 Globus Medical, Inc. Displaying a virtual model of a planned instrument attachment to ensure correct selection of physical instrument attachment
US11607277B2 (en) 2020-04-29 2023-03-21 Globus Medical, Inc. Registration of surgical tool with reference array tracked by cameras of an extended reality headset for assisted navigation during surgery
US11153555B1 (en) 2020-05-08 2021-10-19 Globus Medical Inc. Extended reality headset camera system for computer assisted navigation in surgery
US11510750B2 (en) 2020-05-08 2022-11-29 Globus Medical, Inc. Leveraging two-dimensional digital imaging and communication in medicine imagery in three-dimensional extended reality applications
US11382700B2 (en) 2020-05-08 2022-07-12 Globus Medical Inc. Extended reality headset tool tracking and control
US11737831B2 (en) 2020-09-02 2023-08-29 Globus Medical Inc. Surgical object tracking template generation for computer assisted navigation during surgical procedure
US11896445B2 (en) 2021-07-07 2024-02-13 Augmedics Ltd. Iliac pin and adapter
JP2024531672A (en) * 2021-09-10 2024-08-29 ボストン サイエンティフィック サイムド,インコーポレイテッド Methods for robust surface and depth estimation - Patents.com
WO2024057210A1 (en) 2022-09-13 2024-03-21 Augmedics Ltd. Augmented reality eyewear for image-guided medical intervention
CN116540872B (en) * 2023-04-28 2024-06-04 中广电广播电影电视设计研究院有限公司 VR data processing method, device, equipment, medium and product
CN117041511B (en) * 2023-09-28 2024-01-02 青岛欧亚丰科技发展有限公司 Video image processing method for visual interaction enhancement of exhibition hall

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151009A (en) * 1996-08-21 2000-11-21 Carnegie Mellon University Method and apparatus for merging real and synthetic images
US7355597B2 (en) * 2002-05-06 2008-04-08 Brown University Research Foundation Method, apparatus and computer program product for the interactive rendering of multivalued volume data with layered complementary values
US7295720B2 (en) * 2003-03-19 2007-11-13 Mitsubishi Electric Research Laboratories Non-photorealistic camera
JP2005108108A (en) * 2003-10-01 2005-04-21 Canon Inc Operating device and method for three-dimensional cg and calibration device for position/attitude sensor
WO2006099490A1 (en) * 2005-03-15 2006-09-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer program products for processing three-dimensional image data to render an image from a viewpoint within or beyond an occluding region of the image data
US20070070063A1 (en) * 2005-09-29 2007-03-29 Siemens Medical Solutions Usa, Inc. Non-photorealistic volume rendering of ultrasonic data
EP2034442A1 (en) * 2007-09-06 2009-03-11 Thomson Licensing Method for non-photorealistic rendering of an image frame sequence

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DOELLNER J ET AL: "Real-time expressive rendering of city models", INFORMATION VISUALIZATION, 2003. INFOVIS 2003. IEEE SYMPOSIUM ON 16-18 JULY 2003, PISCATAWAY, NJ, USA,IEEE, 16 July 2003 (2003-07-16), pages 245 - 250, XP010648507, ISBN: 978-0-7695-1988-3 *
FISCHER J ET AL: "Stylized Augmented Reality for Improved Immersion", VIRTUAL REALITY, 2005. PROCEEDINGS. VR 2005. IEEE BONN, GERMANY MARCH 12-16, 2005, PISCATAWAY, NJ, USA,IEEE, 12 March 2005 (2005-03-12), pages 195 - 202,325, XP010836683, ISBN: 978-0-7803-8929-8 *
HUI XU ET AL: "STYLIZED RENDERING OF 3D SCANNED REAL WORLD ENVIRONMENTS", PROCEEDINGS NPAR 2004. 3RD. INTERNATIONAL SYMPOSIUM ON NON-PHOTOREALISTIC ANIMATION AND RENDERING. ANNECY, FRANCE, JUNE 7 - 9, 2004; [SYMPOSIUM ON NON - PHOTOREALISTIC ANIMATION AND RENDERING], NEW YORK, NY : ACM, US, 7 June 2004 (2004-06-07), pages 25 - 34, XP001210019, ISBN: 978-1-58113-887-0 *
LAPEER R J ET AL: "PC-based volume rendering for medical visualisation and augmented reality based surgical navigation", INFORMATION VISUALISATION, 2004. IV 2004. PROCEEDINGS. EIGHTH INTERNATIONAL CONFERENCE ON LONDON, ENGLAND 14-16 JULY 2004, PISCATAWAY, NJ, USA, IEEE, 14 July 2004 (2004-07-14), pages 67 - 72, XP010713883, ISBN: 978-0-7695-2177-0 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011126459A1 (en) * 2010-04-08 2011-10-13 Nanyang Technological University Image generating devices, graphics boards, and image generating methods
CN102271262A (en) * 2010-06-04 2011-12-07 三星电子株式会社 Multithread-based video processing method for 3D (Three-Dimensional) display
EP2441504A3 (en) * 2010-10-15 2013-07-24 Nintendo Co., Ltd. Storage medium recording image processing program, image processing device, image processing system and image processing method
US8956227B2 (en) 2010-10-15 2015-02-17 Nintendo Co., Ltd. Storage medium recording image processing program, image processing device, image processing system and image processing method
EP2706508A1 (en) * 2012-09-10 2014-03-12 BlackBerry Limited Reducing latency in an augmented-reality display
US9576397B2 (en) 2012-09-10 2017-02-21 Blackberry Limited Reducing latency in an augmented-reality display
WO2017165566A1 (en) * 2016-03-25 2017-09-28 The Regents Of The University Of California High definition, color images, animations, and videos for diagnostic and personal imaging applications
US11051769B2 (en) 2016-03-25 2021-07-06 The Regents Of The University Of California High definition, color images, animations, and videos for diagnostic and personal imaging applications

Also Published As

Publication number Publication date
ES2361228T3 (en) 2011-06-15
ATE500578T1 (en) 2011-03-15
EP2174297B1 (en) 2011-03-02
JP5186561B2 (en) 2013-04-17
JP2010532035A (en) 2010-09-30
CN101802873A (en) 2010-08-11
KR20100051798A (en) 2010-05-18
US8878900B2 (en) 2014-11-04
CN101802873B (en) 2012-11-28
US20100177163A1 (en) 2010-07-15
EP2174297A1 (en) 2010-04-14
GB0712690D0 (en) 2007-08-08
DE602008005312D1 (en) 2011-04-14

Similar Documents

Publication Publication Date Title
EP2174297B1 (en) Non-photorealistic rendering of augmented reality
Mori et al. A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects
Bichlmeier et al. Contextual anatomic mimesis hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality
Lerotic et al. Pq-space based non-photorealistic rendering for augmented reality
US9681925B2 (en) Method for augmented reality instrument placement using an image based navigation system
US9858475B2 (en) Method and system of hand segmentation and overlay using depth data
Hu et al. Head-mounted augmented reality platform for markerless orthopaedic navigation
US20230050857A1 (en) Systems and methods for masking a recognized object during an application of a synthetic element to an original image
WO2010081094A2 (en) A system for registration and information overlay on deformable surfaces from video data
Kutter et al. Real-time volume rendering for high quality visualization in augmented reality
Fischer et al. A hybrid tracking method for surgical augmented reality
US20220215539A1 (en) Composite medical imaging systems and methods
Reichard et al. Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery
US20220218435A1 (en) Systems and methods for integrating imagery captured by different imaging modalities into composite imagery of a surgical space
WO2021211986A1 (en) Systems and methods for enhancing medical images
Wieczorek et al. GPU-accelerated rendering for medical augmented reality in minimally-invasive procedures.
Dey et al. Mixed reality merging of endoscopic images and 3-D surfaces
JP7504942B2 (en) Representation device for displaying a graphical representation of an augmented reality - Patent Application 20070123633
Singh et al. A novel enhanced hybrid recursive algorithm: image processing based augmented reality for gallbladder and uterus visualisation
US20220175473A1 (en) Using model data to generate an enhanced depth map in a computer-assisted surgical system
Shakya et al. Remote surgeon hand motion and occlusion removal in mixed reality in breast surgical telepresence: rural and remote care
Wang et al. Augmented reality provision in robotically assisted minimally invasive surgery
EP4322114A1 (en) Projective bisector mirror
Bichlmeier et al. A practical approach for intraoperative contextual in-situ visualization
US20240331329A1 (en) Method and system for superimposing two-dimensional (2d) images over deformed surfaces

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200880022657.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08762451

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 12666957

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2010514098

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2008762451

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20107002014

Country of ref document: KR

Kind code of ref document: A