WO2016177788A1 - Device for depth visualization of a predefined object
- Publication number: WO2016177788A1 (application PCT/EP2016/060015)
- Authority: WIPO (PCT)
- Prior art keywords: depth, detection surface, intersection points, plane, image
Classifications
- A61B 6/466: Displaying means of special interest adapted to display 3D data (A61B 6/00: Apparatus or devices for radiation diagnosis; A61B 6/46: Arrangements for interfacing with the operator or the patient)
- G06T 15/08: Volume rendering (G06T 15/00: 3D [Three Dimensional] image rendering)
- G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (G06T 19/00: Manipulating 3D models or images for computer graphics)
- G06T 2210/21: Collision detection, intersection (G06T 2210/00: Indexing scheme for image generation or computer graphics)
- G06T 2210/41: Medical (G06T 2210/00: Indexing scheme for image generation or computer graphics)
- G06T 2219/008: Cut plane or projection plane definition (G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics)
Abstract
The invention relates to a device (10) for depth visualization of a predefined object (20), a system (100) for depth visualization of a predefined object (20), a method for depth visualization of a predefined object (20), and a computer program element for controlling such device (10) or system (100), and a computer readable medium having stored such computer program element. The device (10) for depth visualization comprises an image provision unit (11), a depth information provision unit (12), a control unit (13), and a display unit (14). The image provision unit (11) is configured to provide image data, and the depth information provision unit (12) is configured to provide depth information along a depth axis (2) extending in a direction not being part of a 2D image plane (1) of the image data. The control unit (13) is configured to move a detection surface (3) along the depth axis (2), and is further configured to detect intersection points between the moving detection surface (3) and the predefined object (20). The display unit (14) is configured to indicate the intersection points between the moving detection surface (3) and the object (20) to distinguish the intersection points from other parts of the object (20).
Description
Device For Depth Visualization Of A Predefined Object
FIELD OF THE INVENTION
The invention relates to a device for depth visualization of a predefined object, a system for depth visualization of a predefined object, a method for depth visualization of a predefined object, and a computer program element for controlling such device or system, and a computer readable medium having stored such computer program element.
BACKGROUND OF THE INVENTION
For minimally invasive procedures, 2D images, and in particular fluoroscopy images, are used for visualization and guidance purposes. Fluoroscopy produces so-called projective 2D images, which are essentially flat images without 3D or depth information. In some situations, this lack of information hampers quick progress in the intervention, for example when inserting a guide wire into a contralateral branch of a prosthesis in abdominal aortic aneurysm (AAA) procedures.
Depth information for 3D visualization and guidance may be displayed by a color code, which is, however, not precise enough to discriminate between the depths of points close to each other, such as points on a catheter tip and nearby fenestration markers. For example, with a color map going continuously from yellow to blue over the depth of the field of view, two points located a few millimeters apart at a catheter tip may appear with colors so close that they cannot be distinguished by eye.
For example, US 2009/184955 A1 discloses a method of volume rendering by means of depth weighted colorization. The method includes obtaining data representative of a first composited plane of anatomical structures and calculating data of a second composited plane as a function of the first composited plane. The data of the second composited plane is indicative of a measure of depth of the anatomical structures along respective ray cast lines. The method also includes determining depth weighted color values between two different colorization palettes as a function of the measures of depth of the second composited plane. The determined depth weighted color values are applied to the first composited plane to produce a volume rendering image with depth weighted colorization.
However, the depth visualization of a predefined object or anatomical structure can still be improved.
SUMMARY OF THE INVENTION
Hence, there may be a need to provide an improved device, system and method for depth visualization of a predefined object, which are in particular more precise.
The problem of the present invention is solved by the subject-matters of the independent claims, wherein further embodiments are incorporated in the dependent claims. It should be noted that the aspects of the invention described in the following apply also to the device for depth visualization of a predefined object, the system for depth visualization of a predefined object, the method for depth visualization of a predefined object, the computer program element, and the computer readable medium.
According to the present invention, a device for depth visualization of a predefined object is presented. The device for depth visualization comprises an image provision unit, a depth information provision unit, a control unit and a display unit. The image provision unit is configured to provide image data, and the depth information provision unit is configured to provide depth information along a depth axis extending in a direction not being part of a 2D image plane of the image data. The control unit is configured to move a detection surface along the depth axis, and is further configured to detect intersection points between the moving detection surface and the predefined object. The display unit is configured to indicate the intersection points between the moving detection surface and the object to distinguish the intersection points from other parts of the object.
The predefined object may be an anatomical structure, an intervention tool or the like. The predefined object may be a part or point of interest of the anatomical structure, the intervention tool or the like. The predefined object may be more than one anatomical structure, intervention tool, point of interest or the like.
The image data may be 2D image data generated by e.g. an X-ray C-arm.
The depth information can be computed from a set of two or more images that are acquired, for example, with a system equipped for stereovision or wiggle, in which correspondence between points of interest or objects has been established. Alternatively, it may come from images that have been acquired and processed with another system prior to the intervention.
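As an illustration of the stereo route, depth recovery for already-matched points can be sketched as follows in Python; the rectified two-view geometry, the numeric values and all names are assumptions made for the sketch, not details from the disclosure.

# Minimal sketch, assuming a rectified stereo pair with known focal length
# (pixels) and baseline (mm); for such a pair, depth = f * B / disparity.
def depth_from_disparity(x_left, x_right, focal_length_px, baseline_mm):
    """Depth (mm) of one matched point from its horizontal disparity (pixels)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_length_px * baseline_mm / disparity

# Example: two fenestration markers matched across the two views.
markers = [
    {"name": "marker_1", "x_left": 412.0, "x_right": 404.5},
    {"name": "marker_2", "x_left": 415.0, "x_right": 406.0},
]
for m in markers:
    z = depth_from_disparity(m["x_left"], m["x_right"],
                             focal_length_px=1100.0, baseline_mm=80.0)
    print(f'{m["name"]}: depth of about {z:.1f} mm')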
The depth axis can also be understood as a viewing axis. The depth axis extending in a direction not being part of a 2D image plane can also be understood as extending in a direction not belonging to, or being different from, the 2D image plane.
Different embodiments of the detection surface as e.g. plane or sphere are explained further below.
The moving detection surface can also be understood as moving volumetric slices. The term "moving" can be understood as positioning, placing or determining the detection surface.
The detection of intersection points can be understood as detecting at least one location of at least one intersection point. There can be more than one intersection point between the moving detection surface and the predefined object.
The indication of intersection points can be understood as overlaying illumination marks on the image data at the locations of the intersection points.
According to the invention, a very precise dynamic depth visualization of one or more 3D objects in a 2D image with a dynamic lighting effect may be achieved. This may be done by generating images in which the object is illuminated with a dynamic lighting effect. The dynamic lighting effect may evolve from an area of the object that is the closest to the background to an area that is the closest to the foreground or vice versa.
In other words, the device for depth visualization may dynamically visualize precise depth information related to at least one object by adding an extra dynamic illumination scheme to a standard rendering of a scene. The dynamic illumination scheme may pass through the depth range in a precise order (e.g. background to foreground or vice versa). At a given time, only object parts or points with the same depth value may be illuminated.
By means of this dynamic depth visualization of 3D objects in 2D images, it becomes possible to very precisely discriminate relative depth positions of close objects or object parts without user interaction. The dynamic aspect makes this technique far more discriminative than the prior art.
Exemplarily, the depth display is focused on a single predefined object instead of the entire content of the image data. This allows the user to concentrate on the object without being distracted by unnecessary depth information on other objects or structures. Exemplarily, the indication of the intersection points between the moving detection surface and the object provides information on the object's orientation. For example, in the context of guide wire tip tracking, the depth information on the tip is directly related to its orientation.
In an example, the display unit is configured to sequentially indicate the intersection points between the moving detection surface and the predefined object. In an example, the illumination of a first intersection point in a first position of the moving detection surface is switched off when the illumination of a subsequent second intersection point in a different, second position of the moving detection surface is switched on.
In an example, the depth axis extends in a direction perpendicular to a 2D image plane and the detection surface is a detection plane parallel to the 2D image plane. Then, the lighting effect may follow a plane parallel to the 2D image plane moving along the depth axis from e.g. the background towards the foreground or vice versa. At a given time, the intersection of the detection plane with the at least one predefined 3D object comprising several points of interest may be illuminated as described below:
In order to illuminate a set of 3D points of interest, the first step is to sort the points of interest according to their depth. For each point of interest, the intersection with the moving detection plane is a dot. Consequently, the points of interest are sequentially illuminated by overlaying a bright dot on them, starting from the ones that are e.g. closest to the background and ending on the ones that are on the foreground. This is done in such a way that, at a given time, only the points of interest that have the same depth value are overlaid with a bright dot, which is then switched off before switching on a bright dot on the point(s) of interest which is(are) immediately closer to the foreground, and so on.
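The scheme above can be summarized in a short Python sketch; the (x, y, depth) point layout is an assumption, and rendering is reduced to printing which dots are lit, so this illustrates the ordering logic only.

import itertools

def illumination_sequence(points, reverse=False):
    """points: iterable of (x, y, depth) tuples. Yields, per detection-plane
    position, the depth value and the (x, y) dots to overlay as bright marks."""
    ordered = sorted(points, key=lambda p: p[2], reverse=reverse)
    # Points sharing exactly the same depth value are lit together; a real
    # implementation would bin depths with a tolerance instead of equality.
    for depth, group in itertools.groupby(ordered, key=lambda p: p[2]):
        yield depth, [(x, y) for x, y, _ in group]

points_of_interest = [
    (120, 80, 35.0),   # e.g. a fenestration marker
    (122, 84, 35.0),   # same depth value: lit simultaneously
    (118, 90, 32.5),   # closer to the foreground: lit afterwards
]

# Background-to-foreground sweep: largest depth first.
for depth, dots in illumination_sequence(points_of_interest, reverse=True):
    print(f"detection plane at depth {depth}: light {dots}")
    # the previous dots are implicitly switched off at the next iteration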
The intersection of thin, continuous objects (e.g. guide wire or catheters) with a detection plane of constant depth is a small area which appears as a bright mark on the object when illuminated. Consequently, the depth display corresponds to a bright mark that moves along the object from background to foreground.
In an example, the detection surface is a detection plane not being parallel to the 2D image plane, and in particular, the detection surface may be a detection plane perpendicular to an object axis. For example, to better understand if a catheter tip is near an ostium of a vessel, the detection plane can have a normal that is aligned with a normal of the vessel. Further, the detection plane may be automatically aligned perpendicular to an object axis when a suitable object axis is identified in the scene.
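For a detection plane that is not parallel to the 2D image plane, the intersection test reduces to a signed point-plane distance. The following sketch assumes the object is available as sampled 3D points and that the object axis is known; both are assumptions made for the illustration.

import numpy as np

def plane_intersections(object_points, normal, offset, tol=0.5):
    """Points of an (N, 3) array lying on the plane {p | dot(n, p) = offset},
    within a tolerance that accounts for the object sampling."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)                 # unit normal, e.g. vessel axis
    signed_distances = object_points @ n - offset
    return object_points[np.abs(signed_distances) < tol]

guide_wire = np.array([[0.0, 0.0, 40.0], [1.0, 0.5, 38.0], [2.0, 1.0, 36.0]])
vessel_axis = [0.2, 0.1, 0.97]                # assumed object axis direction
print(plane_intersections(guide_wire, vessel_axis, offset=37.0, tol=1.0))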
In an example, the detection surface is a detection sphere segment and the depth axis corresponds to its radius, i.e. the radial distance axis or radial connection line between the detection sphere segment and its center or predefined focus point. In other words, the detection sphere segment comprises points of equal distance to a predefined focus point at its center. As detection surface, the growing/shrinking detection sphere segment moves along its increasing/decreasing radius. The depth axis can also be understood as the evolution direction of the growing or shrinking sphere.
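A sphere-segment detection surface can be tested in the same spirit: a sample point of the object is illuminated when its distance to the focus point matches the current radius. The sampled-point representation and all values below are assumptions for the sketch.

import numpy as np

def sphere_intersections(object_points, center, radius, tol=0.5):
    """Points of an (N, 3) array lying on the sphere of given center and
    radius, within a tolerance."""
    d = np.linalg.norm(object_points - np.asarray(center, dtype=float), axis=1)
    return object_points[np.abs(d - radius) < tol]

catheter = np.array([[10.0, 0.0, 30.0], [12.0, 1.0, 28.0], [14.0, 2.0, 26.0]])
focus = [10.0, 0.0, 20.0]                     # e.g. an ostium of interest
for r in (8.0, 10.0, 12.0):                   # growing detection sphere
    hits = sphere_intersections(catheter, focus, radius=r, tol=1.0)
    print(f"radius {r}: {len(hits)} point(s) illuminated")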
In an example, the control unit is configured to decrease a movement speed of the moving detection surface with increasing complexity of the image data and/or depth information. In other words, the traversal of the depth range could be uniform with a constant illumination time for each point, but could also be adapted to the scene. For example, it is possible to slow down the traversal when the scene is more complex.
Exemplarily, the control unit is configured to move the detection surface stepwise along the depth axis, wherein the distance between subsequent steps along the depth axis decreases with increasing complexity of the 2D image data and/or depth information.
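One way to realize both adaptations is to derive the dwell time and the step size from the number of points of interest near the current depth; the complexity measure below is a stand-in chosen for the sketch, not one prescribed by the disclosure.

def adaptive_sweep(depths, base_step=2.0, base_dwell=0.1, min_step=0.25):
    """depths: depth values of the points of interest. Yields
    (plane_depth, dwell_seconds) for each detection-plane position; the
    sweep slows down and refines where the scene is denser."""
    z, z_max = min(depths), max(depths)
    while z <= z_max:
        nearby = sum(1 for d in depths if abs(d - z) < base_step)
        step = max(min_step, base_step / (1 + nearby))   # denser: finer steps
        dwell = base_dwell * (1 + nearby)                # denser: slower sweep
        yield z, dwell
        z += step

for plane_depth, dwell in adaptive_sweep([30.0, 30.5, 31.0, 40.0]):
    print(f"plane at {plane_depth:5.2f}, shown for {dwell:.2f} s")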
Exemplarily, the indication of intersection points between the moving detection surface and the object, the so-called dynamic depth information, can be automatically activated with a given time interval, or manually activated/deactivated by a user, for example using a button. The possibility to deactivate the dynamic depth information allows the user to focus on the image without additional information.
Exemplarily, the control unit is further configured to combine the indication of intersection points between the moving detection surface and the object with more global static or continuous techniques such as smooth blue to yellow depth colorization.
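Such a combination can be sketched as a static per-point depth color with a dynamic override at the current detection-surface depth; the linear blue-to-yellow blend below is one plausible choice, not a mandated one.

def depth_to_color(depth, z_min, z_max):
    """Linear blend from blue (background) to yellow (foreground), 8-bit RGB."""
    t = (z_max - depth) / (z_max - z_min)     # 0 at background, 1 at foreground
    return (int(255 * t), int(255 * t), int(255 * (1 - t)))

def render_colors(point_depths, plane_depth, tol=0.25):
    """Static colorization for all points, with the dynamic bright mark
    overriding the point(s) at the current detection-surface depth."""
    z_min, z_max = min(point_depths), max(point_depths)
    colors = []
    for d in point_depths:
        if abs(d - plane_depth) < tol:
            colors.append((255, 255, 255))    # dynamic bright mark
        else:
            colors.append(depth_to_color(d, z_min, z_max))
    return colors

print(render_colors([30.0, 32.0, 34.0, 36.0], plane_depth=32.0))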
According to the present invention, also a system for depth visualization of a predefined object is presented. The system for depth visualization comprises a device for depth visualization of a predefined object as described above and an image generating unit to generate image data to be provided by this device. Exemplarily, the image generating unit is an X-ray C-arm.
According to the invention, a very precise dynamic depth visualization of one or more 3D objects in a 2D image with a dynamic lighting effect may be achieved. This may be done by generating images in which the object or anatomical structure is illuminated with a dynamic lighting effect. The dynamic lighting effect may evolve from an area of the object that is the closest to the background to an area that is the closest to the foreground or vice versa.
According to the present invention, also a method for depth visualization of a predefined object is presented. It comprises the following steps, not necessarily in this order:
a) providing image data,
b) providing depth information along a depth axis extending in a direction not being part of a 2D image plane of the image data,
c) moving a detection surface along the depth axis,
d) detecting intersection points between the moving detection surface and the predefined object, and
e) indicating the intersection points between the moving detection surface and the object to distinguish the intersection points from other parts of the object.
The method for depth visualization of a predefined object may dynamically visualize precise depth information related to at least one object or point of interest by adding an extra dynamic illumination scheme to a standard rendering of a scene. The dynamic illumination scheme may pass through the depth range in a precise order (e.g. background to foreground or vice versa). At a given time, only object parts or points with the same depth value may be illuminated. By means of this dynamic depth visualization of 3D objects in 2D images, it becomes possible to very precisely discriminate relative depth positions of close objects or object parts without user interaction.
The detection of intersection points between the object and the moving detection surface can also be understood as a determination of depth or position information of the object relative to a particular one of a row of volumetric slices forming the moving detection surface.
The indication of intersection points between the moving detection surface and the object can also be understood as illuminating ambient dots of a particular one of a row of volumetric slices forming the moving detection surface on a 2D display for e.g. fluoroscopy imaging.
In an example, the intersection points between the moving detection surface and the predefined object are sequentially indicated. In an example, the illumination of a first intersection point in a first position of the moving detection surface is switched off when the illumination of a subsequent second intersection point in a different, second position of the moving detection surface is switched on.
In an example, the depth axis extends in a direction perpendicular to a 2D image plane and the detection surface is a detection plane parallel to the 2D image plane. Then, the lighting effect may follow a plane parallel to the 2D image plane moving along the depth axis from e.g. the background towards the foreground.
According to the present invention, also a computer program element is presented, wherein the computer program element comprises program code means for causing the device or system as defined in the independent device claim to carry out the steps of the method when the computer program is run on a computer.
It shall be understood that the device for depth visualization of a predefined object, the system for depth visualization of a predefined object, the method for depth visualization of a predefined object, the computer program element for controlling such device or system, and the computer readable medium having stored such computer program element according to the independent claims have similar and/or identical preferred embodiments, in particular, as defined in the dependent claims. It shall be understood further that a preferred embodiment of the invention can also be any combination of the dependent claims with the respective independent claim.
These and other aspects of the present invention will become apparent from and be elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments of the invention will be described in the following with reference to the accompanying drawings:
Fig. 1 shows a schematic drawing of an example of a system and a device for depth visualization of a predefined object.
Fig. 2 shows schematically and exemplarily an embodiment of a detection plane parallel to a 2D image plane.
Fig. 3 shows schematically and exemplarily a display unit indicating intersection points between a moving detection plane and a predefined object.
Fig. 4 shows schematically and exemplarily an embodiment of a detection plane not being parallel to the 2D image plane.
Fig. 5 shows schematically and exemplarily an embodiment of a detection sphere and depth axis corresponding to a radial connection line between the detection sphere and its centre.
Fig. 6 shows basic steps of an example of a method for depth visualization of a predefined object.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 shows schematically and exemplarily an embodiment of a system 100 for depth visualization of a predefined object 20 according to the invention. The system 100 for depth visualization comprises a device 10 for depth visualization of a predefined object 20 and an image generating unit 111 to generate image data to be provided by this device 10. The image generating unit 111 may be an X-ray C-arm. The device 10 for depth visualization comprises an image provision unit 11, a depth information provision unit 12, a control unit 13 and a display unit 14.
The image provision unit 11 provides image data, and the depth information provision unit 12 provides depth information along a depth axis 2 extending in a direction not being part of a 2D image plane 1 of the image data. The control unit 13 moves a detection surface 3 along the depth axis 2 and detects intersection points between the moving detection surface 3 and the predefined object 20. The display unit 14 indicates the intersection points between the moving detection surface 3 and the object 20 to distinguish the intersection points from other parts of the object 20.
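To make the cooperation of the four units concrete, the following sketch wires them together around a depth sweep; the class shape, interfaces and names are illustrative assumptions, not structures defined by the patent.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Point3D = Tuple[float, float, float]          # (x, y, depth)

@dataclass
class DepthVisualizationDevice:
    provide_image: Callable[[], object]           # image provision unit 11
    provide_depths: Callable[[], List[Point3D]]   # depth information unit 12
    display: Callable[[object, list], None]       # display unit 14

    def sweep(self, plane_depths, tol=0.5):
        """Control unit 13: move the detection plane along the depth axis and
        hand the detected intersection points to the display unit."""
        image = self.provide_image()
        points = self.provide_depths()
        for z in plane_depths:                    # moving detection surface 3
            hits = [(x, y) for x, y, d in points if abs(d - z) < tol]
            if hits:
                self.display(image, hits)         # overlay bright marks 4

device = DepthVisualizationDevice(
    provide_image=lambda: "fluoro_frame",
    provide_depths=lambda: [(120, 80, 35.0), (118, 90, 32.5)],
    display=lambda img, hits: print(f"{img}: light {hits}"),
)
device.sweep(plane_depths=[36.0, 35.0, 34.0, 33.0, 32.5])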
The predefined object 20 may be an anatomical structure, an intervention tool or the like. It can be selected automatically by the system or manually by an operator. The image data are 2D image data generated by the image generating unit 111. The detection of intersection points can be understood as detecting at least one location of at least one intersection point. The indication of intersection points can be understood as overlaying illumination marks 4 (see Fig. 3) on the image data at the locations of the intersection points.
According to the invention, a very precise dynamic depth visualization of one or more 3D objects 20 in a 2D image with a dynamic lighting effect is achieved. This is done by generating images in which the object 20 is illuminated with a dynamic lighting effect. The dynamic lighting effect evolves from an area of the object 20 that is e.g. the closest to the background to an area that is the closest to the foreground. By means of this dynamic depth visualization of 3D objects 20 in 2D images, it becomes possible to very precisely discriminate relative depth positions of close objects 20 or object parts without user interaction.
In Fig. 2, the detection surface 3 is a detection plane parallel to a 2D image plane 1. This may be combined with a depth axis 2 perpendicular or non-perpendicular to the 2D image plane 1. In Fig. 2, the depth axis 2 extends in a direction perpendicular to the 2D image plane 1. The lighting effect follows the detection plane parallel to the 2D image plane 1 moving along the depth axis 2 from e.g. the background towards the foreground (marked by arrow A). At a given time, the intersection of the detection plane with the predefined 3D object 20 may be illuminated as described below with reference to Figs. 2 and 3.
In order to illuminate a set of 3D points of interest, the first step is to sort the points of interest according to their depth. For each point of interest, the intersection with the moving detection plane is a dot. Consequently, the points of interest are sequentially illuminated by overlaying a bright dot on them, starting from the ones that are e.g. closest to the background and ending on the ones that are on the foreground. This is done in such a way that, at a given time, only the points of interest that have the same depth value are overlaid with a bright dot, which is then switched off before switching on a bright dot on the point(s) of interest which is(are) immediately closer to the foreground, and so on. Consequently, the depth display shown in Fig. 3 corresponds to a bright mark 4 that moves along the object 20 from background to foreground.
In other words, in Fig. 3, the display unit 14 sequentially indicates the intersection points between the moving detection plane and the predefined object 20. The illumination of a first intersection point in a first position of the moving detection surface 3 is switched off when the illumination of a subsequent second intersection point in a different, second position of the moving detection surface 3 is switched on. A bright mark 4 then moves from background to foreground along, here, a curved linear object 20. The deepest part of the object 20 is on the bottom left of the image, whereas the part closest to the foreground is on the top right.
In Fig. 4, the detection surface 3 is a detection plane not being parallel to or part of the 2D image plane 1. In particular, the detection surface 3 is a detection plane perpendicular to an object axis 21, e.g. a normal of the vessel. The detection plane not parallel to the 2D image plane 1 may also be combined with a depth axis 2 perpendicular or non-perpendicular to the 2D image plane 1. In Fig. 4, the depth axis 2 extends in a direction non-perpendicular to the 2D image plane 1. Then, the lighting effect follows the detection plane moving along the depth axis 2 from e.g. the background towards the foreground (marked by arrow A).
In Fig. 5, the detection surface 3 is a detection sphere segment and the depth axis 2 corresponds to a radial distance axis or a radial connection line between the detection sphere segment and its centre C or predefined focus point. The depth axis 2 or radius of the detection sphere segment may be perpendicular or non-perpendicular to the 2D image plane 1 and is here shown as non-perpendicular to the 2D image plane 1. As detection surface 3, the growing/shrinking detection sphere segment moves along its increasing/decreasing radius or depth axis 2 (marked by arrow A).
Fig. 6 shows a schematic overview of steps of a method for depth visualization of a predefined object 20. The method comprises the following steps, not necessarily in this order:
In a first step S1, providing image data.
In a second step S2, providing depth information along a depth axis 2 extending in a direction not being part of a 2D image plane 1 of the image data.
In a third step S3, moving a detection surface 3 along the depth axis 2.
In a fourth step S4, detecting intersection points between the moving detection surface 3 and the predefined object 20.
In a fifth step S5, indicating the intersection points between the moving detection surface 3 and the object 20 to distinguish the intersection points from other parts of the object 20.
In the fifth step S5, the intersection points between the moving detection plane and the predefined object 20 may be subsequently indicated. The illumination of a first intersection point in a first position of the moving detection surface 3 is switched off when the illumination of a subsequent second intersection point in a different, second position of the moving detection surface 3 is switched on.
The depth axis 2 may extend in a direction perpendicular to a 2D image plane 1 and the detection surface 3 may be a detection plane parallel to the 2D image plane 1. Then, in the fifth step S5, the lighting effect may follow a plane parallel to the 2D image plane 1 moving along the depth axis 2 from e.g. the background towards the foreground.
The method for depth visualization of a predefined object 20 may thereby dynamically visualize precise depth information related to at least one object 20 or point of interest by adding an extra dynamic illumination scheme to a standard rendering of a scene. The dynamic illumination scheme may pass through the depth range in a precise order (e.g. background to foreground or vice versa). At a given time, only object 20 parts or points with the same depth value may be illuminated. By means of this dynamic depth visualization of 3D objects 20 in 2D images, it becomes possible to very precisely discriminate relative depth positions of close objects 20 or object 20 parts without user interaction.
In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system 100.
The computer program element might therefore be stored on a computer unit, which might also be part of an embodiment of the present invention. This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above described apparatus. The computing unit can be adapted to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method of the invention.
This exemplary embodiment of the invention covers both, a computer program that right from the beginning uses the invention and a computer program that by means of an up-date turns an existing program into a program that uses the invention.
Further on, the computer program element might be able to provide all necessary steps to fulfil the procedure of an exemplary embodiment of the method as described above.
According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, is presented wherein the computer readable medium has a computer program element stored on it, which computer program element is described by the preceding section.
A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims whereas other embodiments are described with reference to the device type claims. However, a person skilled in the art will gather from the above and the following description that, unless otherwise notified, in addition to any combination of features belonging to one type of subject matter, also any combination between features relating to different subject matters is considered to be disclosed with this application.
However, all features can be combined providing synergetic effects that are more than the simple summation of the features.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing a claimed invention, from a study of the drawings, the disclosure, and the dependent claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.
Claims
1. A device (10) for depth visualization of a predefined object (20), comprising:
an image provision unit (11),
a depth information provision unit (12),
a control unit (13), and
a display unit (14),
wherein the image provision unit (11) is configured to provide image data, wherein the depth information provision unit (12) is configured to provide depth information along a depth axis (2) extending in a direction not being part of a 2D image plane (1) of the image data,
wherein the control unit (13) is configured to move a detection surface (3) along the depth axis (2),
wherein the control unit (13) is further configured to detect intersection points between the moving detection surface (3) and the predefined object (20), and
wherein the display unit (14) is configured to indicate the intersection points between the moving detection surface (3) and the object (20) to distinguish the intersection points from other parts of the object (20).
2. Device (10) according to claim 1, wherein the display unit (14) is configured to subsequently indicate the intersection points between the moving detection surface (3) and the object (20).
3. Device (10) according to claim 1 or 2, wherein the illumination of a first intersection point in a first position of the moving detection surface (3) is switched off when the illumination of a subsequent second intersection point in a different, second position of the moving detection surface (3) is switched on.
4. Device (10) according to any of the preceding claims, wherein the display unit (14) is configured to indicate the intersection points by overlaying illumination marks (4) on the image data at the locations of the intersection points.
5. Device (10) according to any of the preceding claims, wherein the depth axis (2) extends in a direction perpendicular to the 2D image plane (1).
6. Device (10) according to any of the preceding claims, wherein the detection surface (3) is a detection plane not being parallel to the 2D image plane (1).
7. Device (10) according to any of the preceding claims, wherein the detection surface (3) is a detection plane perpendicular to an axis (21) of the object (20).
8. Device (10) according to the preceding claim, wherein the detection plane is automatically aligned perpendicular to the object axis (21).
9. Device (10) according to any of claims 1 to 5, wherein the detection surface (3) is a detection plane parallel to the 2D image plane (1).
10. Device (10) according to any of claims 1 to 5, wherein the detection surface (3) is a detection sphere segment, wherein the radius of the sphere segment extends along the depth axis (2).
11. Device (10) according to any of the preceding claims, wherein the control unit (13) is configured to decrease a movement speed of the moving detection surface (3) with increasing complexity of the image data and/or depth information.
12. A system (100) for depth visualization of a predefined object (20), comprising:
a device (10) according to one of the preceding claims, and
an image generating unit (111) to generate image data to be provided by the device (10).
13. A method for depth visualization of a predefined object (20), comprising the following steps:
a) providing image data,
b) providing depth information along a depth axis (2) extending in a direction not being part of a 2D image plane (1) of the image data,
c) moving a detection surface (3) along the depth axis (2),
d) detecting intersection points between the moving detection surface (3) and the predefined object (20), and
e) indicating the intersection points between the moving detection surface (3) and the object (20) to distinguish the intersection points from other parts of the object (20).
14. A computer program element for controlling a device (10) or system (100) according to any one of claims 1 to 12, which, when executed by a processing unit, is adapted to perform the method steps of the preceding claim.
15. A computer readable medium having stored the computer program element of the preceding claim.
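The following sketches are editorial illustrations of the claimed mechanisms, not part of the application: Python is used throughout, and every function name, data layout, and constant in them is an assumption. First, the core of claim 1 (and of method steps c) and d)): detecting the intersection points of the detection surface (3), here a plane at depth index z, with the predefined object (20), assumed to be available as a binary voxel mask whose first axis is the depth axis (2).

```python
import numpy as np

def plane_intersections(object_mask, z):
    """Intersection points between the detection plane at depth index z and
    the predefined object, given as a binary (depth, row, col) voxel mask.
    Returns (N, 2) image-plane coordinates where the plane cuts the object."""
    rows, cols = np.nonzero(object_mask[z])
    return np.column_stack((rows, cols))
```

The mask could come, for instance, from a registered 3D segmentation of the prosthesis or guide wire; the display unit (14) would then overlay marks at the returned 2D coordinates.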
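Claims 2 to 4 concern the indication itself: illumination marks (4) are overlaid only at the current intersection points, so the marks of the previous detection position are switched off as the next position is switched on. A minimal sketch, assuming 2D grayscale image data and integer point coordinates:

```python
import numpy as np

def render_frame(image_2d, points, mark_value=255):
    """One display frame per detection-surface position: marks are drawn only
    at the current intersection points; marks of the previous position are
    implicitly switched off because each frame restarts from the unmarked
    base image."""
    frame = image_2d.copy()
    if len(points):
        frame[points[:, 0], points[:, 1]] = mark_value
    return frame
```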
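Claim 8 leaves open how the detection plane is automatically aligned perpendicular to the object axis (21). One conceivable approach, not prescribed by the application, is to estimate the local axis from the object geometry itself, for example as the first principal component of nearby object points:

```python
import numpy as np

def plane_normal_from_object_axis(object_points, centre, k=50):
    """Estimate the local object axis as the dominant direction (first
    principal component) of the k object points nearest to `centre` and
    return it as the detection-plane normal, so the plane is aligned
    perpendicular to the object axis."""
    pts = np.asarray(object_points, dtype=float)
    centre = np.asarray(centre, dtype=float)
    nearest = pts[np.argsort(np.linalg.norm(pts - centre, axis=1))[:k]]
    centred = nearest - nearest.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[0]  # unit vector along the strongest local direction
```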
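For the spherical detection surface of claim 10, moving the surface along the depth axis (2) corresponds to varying the radius of the sphere segment. A sketch with the object given as explicit 3D point coordinates, where a tolerance band stands in for the finite thickness of a discretized surface:

```python
import numpy as np

def sphere_segment_intersections(object_points, centre, radius, tol=0.5):
    """Intersection points between the object, given as an (N, 3) array of
    (depth, row, col) coordinates, and a spherical detection surface of the
    current radius about `centre`."""
    pts = np.asarray(object_points, dtype=float)
    dist = np.linalg.norm(pts - np.asarray(centre, dtype=float), axis=1)
    return pts[np.abs(dist - radius) <= tol]
```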
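Claim 11 does not fix the complexity measure. As a placeholder, the sketch below slows the detection surface in proportion to the number of intersection points at the current position; both the metric and the constants are assumptions:

```python
def surface_speed(base_speed, n_intersections, min_speed=0.1):
    """Monotonically decrease the detection-surface movement speed as the
    scene gets busier, so that dense regions are swept more slowly."""
    return max(min_speed, base_speed / (1.0 + n_intersections))
```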
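Finally, method steps a) to e) of claim 13 chain these pieces together, one loop iteration per detection-plane position, each producing a marked display frame:

```python
import numpy as np

def depth_visualization(image_2d, object_mask, depth_step=1):
    """Steps a) to e) in one loop: given image data and depth information in
    the form of a binary (depth, row, col) object mask, move a detection
    plane along the depth axis, detect its intersection points with the
    object, and produce one marked frame per plane position for display."""
    frames = []
    for z in range(0, object_mask.shape[0], depth_step):  # c) move the surface
        rows, cols = np.nonzero(object_mask[z])           # d) detect intersections
        frame = image_2d.copy()
        frame[rows, cols] = frame.max()                   # e) indicate the points
        frames.append((z, frame))
    return frames
```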
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
EP15305681.7 | 2015-05-04 | |
EP15305681 | 2015-05-04 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2016177788A1 (en) | 2016-11-10
Family
ID=53274457
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/EP2016/060015 (WO2016177788A1) | Device for depth visualization of a predefined object | 2015-05-04 | 2016-05-04
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016177788A1 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090184955A1 (en) | 2006-05-31 | 2009-07-23 | Koninklijke Philips Electronics N.V. | Method and apparatus for volume rendering using depth weighted colorization |
WO2008062338A1 (en) * | 2006-11-20 | 2008-05-29 | Koninklijke Philips Electronics, N.V. | Displaying anatomical tree structures |
US20140033126A1 (en) * | 2008-12-08 | 2014-01-30 | Hologic, Inc. | Displaying Computer-Aided Detection Information With Associated Breast Tomosynthesis Image Information |
Non-Patent Citations (3)
Title |
---|
HANSEN C ET AL: "Multidimensional transfer functions for interactive volume rendering", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 8, no. 3, 1 July 2002 (2002-07-01), pages 270 - 285, XP011094639, ISSN: 1077-2626, DOI: 10.1109/TVCG.2002.1021579 * |
RADEVA NADEZHDA ET AL: "Generalized Temporal Focus+Context Framework for Improved Medical Data Exploration", JOURNAL OF DIGITAL IMAGING, SAUNDERS, PHILADELPHIA, PA, USA, vol. 27, no. 2, 7 January 2014 (2014-01-07), pages 207 - 219, XP035346057, ISSN: 0897-1889, [retrieved on 20140107], DOI: 10.1007/S10278-013-9662-Z * |
TURLINGTON J Z ET AL: "New Techniques for Efficient Sliding Thin-Slab Volume Visualization", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 20, no. 8, 1 August 2001 (2001-08-01), pages 823 - 835, XP011036127, ISSN: 0278-0062, DOI: 10.1109/42.938250 * |
Similar Documents
Publication | Title
---|---
EP3505133B1 (en) | Use of augmented reality to assist navigation
EP3244799B1 (en) | Vertebral feature identification
US9098899B2 (en) | Determining the specific orientation of an object
US10506991B2 (en) | Displaying position and optical axis of an endoscope in an anatomical image
US10198875B2 (en) | Mapping image display control device, method, and program
US11610329B2 (en) | Visualization system for visualizing an alignment accuracy
EP3145432B1 (en) | Imaging apparatus for imaging a first object within a second object
US20170105685A1 (en) | Guidance device for a tee probe
CN115461009A (en) | Systems and methods for viewing subjects
EP3629932B1 (en) | Device and a corresponding method for providing spatial information of an interventional device in a live 2d x-ray image
JP6702902B2 (en) | Mapping image display control device, method and program
US12023208B2 (en) | Method for operating a visualization system in a surgical application, and visualization system for a surgical application
CN116958486A (en) | Medical image processing method and system based on convolutional neural network
WO2016177788A1 (en) | Device for depth visualization of a predefined object
US6810280B2 (en) | Method and apparatus for detecting the three-dimensional position of an examination instrument inserted into a body region
JP6476125B2 (en) | Image processing apparatus and surgical microscope system
KR20140128137A (en) | Method of comparing preoperative respiratory level with intraoperative respiratory level
CN115485600A (en) | System and method for viewing a subject
JP5305609B2 (en) | X-ray imaging apparatus and fluoroscopic road map image creation program
EP3703011A1 (en) | Interventional device tracking
WO2022072344A1 (en) | A sensor-equipped phantom for measuring accuracy of medical instrument placement
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16722127; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 16722127; Country of ref document: EP; Kind code of ref document: A1