WO2008120136A1 - 2D/3D image registration - Google Patents

Info

Publication number
WO2008120136A1
Authority
WIPO (PCT)
Prior art keywords
image, registration, image data, volume data, data
Application number
PCT/IB2008/051117
Other languages
French (fr)
Inventor
Pieter Maria Mielekamp
Robert Johannes Frederik Homan
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2008120136A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 - Registration of image sequences
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30101 - Blood vessel; Artery; Vein; Vascular

Definitions

  • the imaging unit 20 is a floor-mounted imaging unit. In other embodiments of the invention, a ceiling-mounted imaging unit is used. In still other embodiments of the invention, a mobile imaging unit is used.
  • An embodiment of the invention relates to a 2D/3D roadmapping application, wherein 2D image data of the region of interest are fused with 3D volume data of the region of interest 22.
  • the roadmapping application does not require an administration of a contrast agent during acquisition of the 2D image data. Rather, according to this embodiment, a contrast agent is only administered to the patient for 3D image acquisition.
  • the imaging system 2 of Fig. 1 is capable of providing 2D image data of the region of interest 22 by imaging the region of interest 22 with the x-ray source 4 and the x-ray detector 6 in a spatially fixed position.
  • the 2D image data may be provided in real time or in near real time to enable a physician to track the position of an object under consideration, e.g. a catheter, a stent, a coil, a radio-opaque glue (e.g. ONYX) that is injected for embolization purposes, etc.
  • 3D volume data are visualized in the 2D/3D roadmapping application as a fused image.
  • the visualization of the 3D volume data allows the physician to spatially locate the object under consideration in the region of interest 22.
  • the 3D volume data are provided by a rotational run of the imaging unit 20 of the imaging system 2, i.e. the 3D volume data are provided by the same imaging system which is used to provide the 2D image data.
  • the 3D volume data may be taken with another 3D imaging system.
  • the other 3D imaging system may even work on a basis different from the 2D imaging system.
  • the 3D imaging system may be a computed tomography (CT) device, a magnetic resonance tomography (MRT) device, an ultrasound (US) device, a positron emission tomography (PET) device, etc.
  • the 2D image data may be obtained from x-ray fluoroscopy, ultrasound, etc.
  • one set of 3D volume data is provided whereas a sequence of 2D image data is provided, corresponding to a sequence of 2D images.
  • the 2D image data are provided by imaging the region of interest 22 with the imaging system 2 in a certain imaging state.
  • the imaging state may be characterized by imaging parameters like settings of the imaging unit 20, by the spatial position of the imaging unit 20 and by the spatial position of the region of interest 22, i.e. by the spatial position of the table 26.
  • For displaying a fused image of the 3D volume data and the 2D image data, a registration of the 3D volume data and the 2D image data is necessary, i.e. the 3D volume data and the 2D image data must be spatially aligned such that they display the region of interest 22 from an identical viewpoint.
  • such a 2D/3D image registration can be performed on a machine base, i.e. the control signals 36 to the imaging unit 20 and to the drive systems of the imaging unit 20, as well as the control signals 44 to the table drive system 28, can be used to register the 3D volume data and the actual 2D image data.
  • This so-called machine-based registration of the 2D image data and the 3D volume data requires that the region of interest does not move.
  • the patient 24 may be immobilized on the table. However, if the patient moves despite immobilization, the spatial correspondence between the 2D image data and the 3D volume data is disturbed or lost. The same problem arises when the interventional procedure moves the region of interest 22 with respect to the immobilized parts of the patient 24.
  • a change in spatial position of the region of interest is not always directly visible during intervention.
  • patient movements may displace the interventional materials relative to the vessel tree as displayed in the roadmapping presentation.
  • a registration of the 3D volume data and the 2D image data is performed wherein the registration of the 3D volume data and the 2D image data includes a machine-based registration of the 3D volume data and the 2D image data.
  • the registration provides registered 3D volume data and 2D image data.
  • a transformation specification defined by an image registration of the registered 3D volume data and 2D image data is provided.
  • Providing a transformation specification which is defined by the image registration of the (at least machine-based) registered 3D volume data and 2D image data has the advantage that this transformation specification may be used for the accurate registration of the 2D image data of two or more 2D images with the 3D volume data.
  • This provides for a fast registration of the 2D image data and the 3D volume data, while patient movements or a movement of the region of interest is taken into account by an image-based registration.
  • Compared to a mere image-based 2D/3D registration, a method according to an embodiment of the invention is faster and has a higher accuracy and a greater capture range.
  • An embodiment of the inventive method provides for a 2D/3D registration that fits smoothly and transparently in the interventional application.
  • the registration which has been carried out prior to the image-based registration is referred to as pre-registration in the following, and the respectively registered 2D image data and 3D volume data are referred to as pre-registered 2D image data and 3D volume data.
  • pre-registration itself may include further registration processes in addition to the machine-based registration.
  • Fig. 2 schematically illustrates an embodiment of a method according to the invention, wherein the 2D image data 52 and 3D image data 54 are provided. Further, imaging parameters 56 which define a certain imaging state of the imaging system are provided.
  • the imaging state to which the imaging parameters 56 correspond is the imaging state during the acquisition of the 2D image under consideration. It should be understood that during the interventional procedure the imaging state may be altered by the user, e.g. by rotating the C-arm 8 about at least one of its axes of rotation 12, 16, by changing the distance between the x-ray source 4 and the x-ray detector 6 along the linear axis 18, by changing tube settings, by changing the position of the table 26, etc.
  • imaging parameters and changes thereof may be accounted for by a respective machine-based registration 58 of the 3D volume data and the actual 2D image data under consideration, which has been taken using the specific imaging parameters 56. It should be noted that the machine-based registration of the 3D volume data 54 and the 2D image data 52 does not directly involve or use the 2D image data, but rather the imaging parameters 56 which have been used to acquire the 2D image data.
  • an image based registration 60 of the 2D image data 52 and the 3D volume data 54 is performed.
  • the image-based registration 60 allows for providing the transformation specification indicated at 62 in Fig. 2, wherein the transformation specification is defined by the image registration 60. Thus, the transformation specification describes any change in spatial position of the region of interest 22 with regard to the machine-based registered 2D image data and 3D volume data. It should be noted that within this specification and the claims, a "change in position" includes translational position changes as well as changes in orientation.
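  • As an illustration (not part of the patent text), such a transformation specification can be represented as a rigid-body transform over the six degrees of freedom named later in this description. The sketch below assumes a 4x4 homogeneous-matrix representation; the composition order is one plausible convention.

```python
import numpy as np

def rigid_transform(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 homogeneous matrix from translations and rotations
    (in radians) about the x, y and z axes."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = (tx, ty, tz)
    return T

# Hypothetical example: the image-based correction (the transformation
# specification) applied on top of the machine-based registration.
machine_registration = rigid_transform(0, 0, 0, 0, 0, 0)
specification = rigid_transform(1.2, -0.4, 0.0, 0.01, 0.0, 0.0)
registered_pose = specification @ machine_registration
```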
  • the method illustrated in Fig. 2 may be carried out to initially register the 3D volume data and the 2D image data.
  • the method illustrated in Fig. 2 may be carried out to take movements of the region of interest during the interventional procedure into account.
  • the image-based registration 60 is carried out on the basis of the machine-based registered 2D image data 52 and 3D volume data 54. In other words, according to an embodiment, no prior image-based registrations are taken into account in the actual image-based registration 60.
  • already performed image-based registrations are taken into account when performing the actual image-based registration, as described below in greater detail with regard to Fig. 3.
  • the image-based registration may be carried out automatically, e.g. by using a similarity optimizer loop.
  • the image registration is carried out manually.
  • the pre-registered 3D volume data and 2D image data, i.e. in the illustrated embodiment the machine-based registered 3D volume data and 2D image data, are displayed on a display device, e.g. on the display device 42, and the user performs the image registration by effecting a relative translation and/or rotation of the 3D volume data and the 2D image data, e.g. by operating the user interface 48, the corresponding user commands being passed to the control unit.
  • the manually performed translation and/or rotation defines the transformation specification corresponding to the manual image registration.
  • Fig. 3 shows a further embodiment of a method according to the invention.
  • the method illustrated in Fig. 3 differs from the method illustrated in Fig. 2 in a pre-registration 64 wherein a transformation of the 3D volume data is performed according to a prior transformation specification 66, which has been obtained in an already performed, prior image-based registration process.
  • the pre-registration 64 includes the machine-based registration as performed in the method of Fig. 2.
  • the actual image-based registration 60, which is performed after the pre-registration 64, can possibly be carried out faster compared to an actual image-based registration carried out on only machine-based registered 2D image data and 3D volume data.
  • the new transformation specification 62 describes the transformation of the pre-registered 3D volume data and the 2D image data to the registration state after completing the image registration 68. Since the already performed image registrations are contained in the pre-registration 64, the new transformation specification describes, according to an embodiment, the change in position of the region of interest 22 after the last already performed image registration.
  • the other method steps in Fig. 3 correspond to the method steps illustrated in Fig. 2, the description of which is not repeated here.
  • Fig. 4 shows a further embodiment of a method according to the invention.
  • the method illustrated in Fig. 4 differs from the method illustrated in Fig. 2 in that the image-based registration 68 takes into account a prior transformation specification 66 which is defined by already performed image registrations of the 3D volume data and the 2D image data.
  • the image-based registration 68 includes performing a transformation of the machine-based registered 3D volume data and 2D image data according to the prior transformation specification 66 and, subsequently, performing an actual image-based registration, e.g. by a similarity-optimizing loop, in order to complete the image-based registration 68.
  • the whole transformation specification, which describes the transformation of the machine-based registered 3D volume data and the 2D image data to the registration state after completing the image registration 68, is provided as the new transformation specification 62.
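  • A minimal sketch of this Fig. 4 scheme follows, under assumptions: poses and specifications are 4x4 matrices, and `refine` is a hypothetical stand-in for any image-based optimiser returning a small corrective transform.

```python
import numpy as np

def image_based_registration(machine_pose, prior_specification, refine):
    """Start the optimisation from the prior transformation specification
    and return the whole resulting transform as the new specification."""
    start = prior_specification @ machine_pose   # pre-transformed starting point
    correction = refine(start)                   # e.g. a similarity-optimiser loop
    return correction @ prior_specification      # new transformation specification
```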
  • a method further comprises providing further 2D image data 52 by imaging the region of interest with the imaging system 2 and performing a registration 64 of the 3D volume data 54 and the further 2D image data 52 to thereby provide registered 3D volume data and further 2D image data.
  • the registration 70 of the 3D volume data and the further 2D image data includes performing a machine-based registration of the 3D volume data and the further 2D image data by taking imaging parameters 56 into account and further includes performing a transformation according to a transformation specification 62.
  • An example of a method of this kind is illustrated in Fig. 5.
  • the transformation specification 62 may be obtained according to any suitable method described herein, e.g. according to one of the methods illustrated in Fig. 2, Fig. 3 and Fig. 4.
  • the registered 2D image data and 3D volume data may be taken as input of an image fusion process 72, where the registered 2D image data and the 3D volume data are overlaid (fused).
  • the fused image is then displayed on a display device, indicated at 74 in Fig. 5.
  • a transformation specification 62 obtained by already performed image registration is taken into account.
  • An actual image-based registration may be performed after a predetermined time interval or after a predetermined number of acquired images.
  • the 2D images which are not used for an actual image registration are registered by a machine-based registration using the imaging parameters 56 and taking the available transformation specification 62 into account.
  • the 2D image data is checked for movements of the region of interest 22 and the image-based registration is carried out when a movement of the region of interest 22 is detected.
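  • Schematically, such a per-frame registration with periodic or motion-triggered image-based refreshes might look as follows. This is an illustration, not the patent's literal control flow; all callables are caller-supplied stand-ins for the blocks described above.

```python
def roadmapping_loop(frames, volume, spec, machine_pose, register,
                     detect_motion, display, refresh_every=50):
    """Register a stream of 2D frames: most frames reuse the stored
    transformation specification on top of the machine-based registration;
    an image-based registration refreshes the specification periodically
    or when a movement of the region of interest is detected."""
    for i, frame in enumerate(frames):
        pose = machine_pose(frame)            # machine-based registration
        registered = spec @ pose              # apply stored specification
        if i % refresh_every == 0 or detect_motion(frame, volume, registered):
            spec = register(frame, volume, pose)   # image-based refresh
            registered = spec @ pose
        display(frame, volume, registered)    # fused 2D/3D roadmap
    return spec
```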
  • Fig. 6 shows an example of such a checking for movements of the region of interest 22.
  • 3D volume data and 2D image data are registered by performing a registration 64 which includes performing a machine-based registration of the 3D volume data and the further 2D image data by taking imaging parameters 56 into account and further includes performing a transformation according to a transformation specification 62.
  • the method illustrated in Fig. 6 is similar to the method in Fig. 3.
  • no image-based registration is carried out on the registered 2D image data and the 3D volume data, but rather a similarity comparison of the 2D image data and the 3D volume data is performed. If no movement of the region of interest 22 has occurred, the 3D volume data matches the 2D image data. Otherwise, it is decided that a change in spatial position of the region of interest 22 has occurred and a signal indicative thereof is provided, indicated at 78 in Fig. 6.
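  • One plausible realisation of such a similarity comparison is sketched below; the patent does not prescribe a specific measure, so normalised cross-correlation and the threshold test are assumptions.

```python
import numpy as np

def normalized_cross_correlation(drr, fluoro):
    """Similarity between a DRR and a 2D fluoroscopy image."""
    a = (drr - drr.mean()) / (drr.std() + 1e-9)
    b = (fluoro - fluoro.mean()) / (fluoro.std() + 1e-9)
    return float((a * b).mean())

def movement_detected(drr, fluoro, baseline, tolerance=0.05):
    """Compare the current similarity against the baseline established
    after the last registration; a drop beyond `tolerance` yields the
    signal indicative of a change in spatial position."""
    return normalized_cross_correlation(drr, fluoro) < baseline - tolerance
```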
  • the image-based registration of the 3D volume data and the 2D image data is carried out in response to the signal indicative of the change in spatial position.
  • the image-based registration may be carried out according to one of the methods illustrated in Fig. 2, Fig. 3 and Fig. 4.
  • the image-based comparison and/or the image-based registration of the 2D image data and the 3D volume data may be carried out in parallel to the display of the fused image based on the 2D image data and the 3D volume data. That is, in an embodiment, a detection and/or correction of a change in spatial position of the region of interest is carried out in parallel to a 2D/3D roadmapping visualization.
  • According to an embodiment, a comparison of the 2D image data and the 3D volume data, or a registration of the 2D image data and the 3D volume data, may include performing a corresponding projection transformation of the 3D volume data, thereby providing a digital reconstructed radiograph (DRR).
  • the projection transformation may take imaging parameters 56 as well as transformation specification 62, 66 into account, depending on the method.
  • the resulting digital reconstructed radiograph is a 2D image defined by corresponding 2D DRR image data which can be compared to or registered with the 2D image data 52.
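  • For illustration, a minimal (and deliberately slow) CPU ray-casting DRR is sketched below; the patent's renderer runs on the GPU and additionally emphasises edges via a lookup table, which is not reproduced here. The geometry conventions (source on the z-axis, detector in the z = 0 plane) are assumptions.

```python
import numpy as np

def render_drr(volume, pose, source_distance=1000.0, det_size=(64, 64),
               n_samples=256):
    """Perspective projection of the 3D volume: sum intensities along rays
    from the x-ray source through each detector pixel. `pose` is a 4x4
    matrix placing the volume (e.g. the machine-based registration
    combined with a transformation specification)."""
    h, w = det_size
    drr = np.zeros((h, w), dtype=np.float32)
    inv = np.linalg.inv(pose)
    src = (inv @ np.array([0.0, 0.0, -source_distance, 1.0]))[:3]
    for v in range(h):
        for u in range(w):
            pix = (inv @ np.array([u - w / 2.0, v - h / 2.0, 0.0, 1.0]))[:3]
            ts = np.linspace(0.0, 1.0, n_samples)
            pts = src[None, :] + ts[:, None] * (pix - src)[None, :]
            idx = np.round(pts).astype(int)
            ok = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
            drr[v, u] = volume[idx[ok, 0], idx[ok, 1], idx[ok, 2]].sum()
    return drr
```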
  • the rendering of the digital reconstructed radiograph from the 3D volume data is performed by a graphics processing unit of the control unit 34 and the image-based registration or the image comparison of the digital reconstructed radiograph with the 2D image data is performed by a central processing unit of the control unit 34.
  • the control unit 34 includes the functionality of an image processing system. According to other embodiments, a separate image processing system may be provided for providing the described functionality.
  • a computer program product is executed in the control unit 34, which enables at least one processor, e.g. the central processing unit and the graphics processing unit, to carry out a method as described herein.
  • Fig. 7 describes an exemplary embodiment of a viewing architecture of an image processing system 100 employing an embodiment of a method according to the invention in the context of a 2D/3D roadmapping application, wherein the 2D image data represent x-ray fluoroscopy images.
  • the data flow shows the images as they are stored in the various GPU-side image buffers and processed by the different bold-circled graphics processing unit (GPU) 2D image processing (IP) blocks.
  • GPU IP blocks have a central processing unit (CPU) counterpart that is responsible for initialisation and parameterisation.
  • thin circles indicate a CPU program CPUP whereas bold circles indicate a GPU program GPUP.
  • thin rectangles indicate CPU image data CPUID whereas bold rectangles indicate GPU image data GPUID.
  • Dotted arrows indicate control signals C1, C2, C4, C5, C6; thin arrows indicate 2D image data 2DID and bold arrows indicate 3D image data 3DID.
  • the image processing system of Fig. 7 includes one volume initialisation part V, five image processing steps I, II, III, IVa and IVb that are coupled by means of four off-screen image-buffers or framebuffer objects 104, 106, 116 and 117 and one or more display devices, e.g. viewing consoles 123, 124 indicated at VI.
  • a motion detection/compensation is performed in the image processing step I.
  • In the image processing step II, a 3D volume rendering is performed.
  • In the image processing step III, a 2D fluoroscopy image rendering is performed.
  • a 2D/3D volume visualization is performed in the image processing step IVa.
  • the normal 2D/3D roadmapping visualisation will be executed in the outer loop of Fig. 7, i.e. in steps II, III, IVb, V and VI.
  • the real-time fluoroscopy image will be processed in steps 112, 114, and 118.
  • the 3D volume will be rendered in step 105.
  • Both images will be fused in step 119, visualized in step 120 and presented on a display device in 123, 124.
  • the roadmapping presentation is displayed on an interventional display 124 and images related to the 2D/3D registration process are displayed on a control display 123.
  • New 2D fluoroscopy images coming from real-time acquisitions enter CPU process 111.
  • In process 111, the images are loaded into a 2D GPU texture map 113.
  • In GPU step 112, the incoming images are noise-filtered using both the incoming and stored images.
  • In GPU step 114, the noise-filtered images are mapped to screen space, based on the current detector formats together with the user-selected zooming and panning information.
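  • The text does not fix a particular noise filter; a simple recursive temporal filter over the incoming and stored images, as sketched below, is one common choice and is shown purely as an assumption.

```python
import numpy as np

def temporal_noise_filter(incoming, stored, alpha=0.25):
    """Recursive filter: blend the incoming frame with the stored,
    previously filtered frame to suppress quantum noise."""
    return alpha * np.asarray(incoming) + (1.0 - alpha) * np.asarray(stored)

# usage per acquisition: stored = temporal_noise_filter(new_frame, stored)
```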
  • the 2D images then enter block IVb.
  • In step 105, the 3D volumes are mapped to screen space, given one of a set of rendering modes and based on the current (normally inverse) perspective projection and viewing transformation as obtained for the current C-arm position by the machine-based 2D/3D registration, together with the user-selected zooming and panning information.
  • In block IVb, the 2D fluoroscopy image is used as a mask and, given the display polarity/colour, merged/blended onto the 3D volumes; the mask is processed in step 118.
  • In step 118, the fluoroscopy image mask is processed with functions like guidewire enhancement and landmarking. For example, the opacity of the fluoro pixel is used as blending factor to blend a color signal (active black, white or any other color) onto the 3D information. In this way, low-contrast 2D (background) information does not obstruct the 3D information.
  • For landmarking, the projected 3D vessel region is used as a mask. Inside this region the contrast of the 2D fluoroscopic image is noise-reduced, e.g. by a recursive filter over multiple acquisitions, and the contrast is decreased by a user-controlled factor called landmarking.
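  • The following sketch illustrates this opacity-driven blending with a vessel-mask landmarking factor; the array layouts and the 'active black' colour signal are assumptions, not the patent's GPU shader.

```python
import numpy as np

def fuse_roadmap(volume_rgb, fluoro, vessel_mask, landmark_factor=0.5):
    """Blend the fluoroscopy mask onto the rendered 3D image: the fluoro
    opacity acts as blending factor; inside the projected vessel region
    the 2D contribution is damped by the landmarking factor."""
    alpha = np.clip(fluoro, 0.0, 1.0)[..., None]
    alpha = np.where(vessel_mask[..., None], alpha * landmark_factor, alpha)
    colour = np.zeros_like(volume_rgb)        # 'active black' colour signal
    return alpha * colour + (1.0 - alpha) * volume_rgb
```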
  • If the C-arm position and viewing geometry are unchanged, step 105 can be skipped and the image contained in 106 can be re-used. If only the viewing transformation changes (zooming or panning), the image upload to the GPU in 111 and the pre-processing in 112 can be skipped.
  • the image processing system 100 is capable of performing an automatic mode and a manual mode.
  • movements of the region of interest are automatically detected and compensated.
  • movements of the region of interest are compensated by manual image registration of the 2D image data and 3D volume data.
  • the movements of the region of interest are detected automatically, as in the automatic mode, and a signal indicative of the movement is signalled to the user, who may register the 2D image data and the 3D volume data manually in response to that signal.
  • the movements of the region of interest are detected manually by the user.
  • the 2D/3D image-registration will be executed in block I inside the inner dotted rectangle R. Further in the inner dotted rectangle R, the visualization for visual feedback of the manual registration or for the progression presentation of the automatic registration process is performed.
  • In the automatic mode, the DRR (step 102) and filtered 2D fluoroscopy images (115) will be rendered to screen space at a fixed (full-screen) scale, based on the current (normally inverse) perspective projection and viewing transformation as obtained for the current C-arm position by the machine-based 2D/3D registration.
  • the registration will be orchestrated by the similarity-optimising loop that will control the DRR renderer (control 104) using six degrees of freedom (Tx, Ty, Tz, Rx, Ry, Rz), wherein Tx, Ty and Tz indicate translational degrees of freedom in the x, y and z directions, respectively, and Rx, Ry and Rz indicate rotational degrees of freedom about axes in the x, y and z directions, respectively.
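  • A CPU-side sketch of such a similarity-optimising loop is given below. This is an illustration: the derivative-free Powell method and the callable interfaces are assumptions, with `render` and `similarity` standing for the DRR renderer and similarity measure sketched earlier.

```python
import numpy as np
from scipy.optimize import minimize

def optimise_pose(fluoro, volume, render, similarity, initial=None):
    """Search the six degrees of freedom (Tx, Ty, Tz, Rx, Ry, Rz):
    each iteration re-renders the DRR at the candidate pose and scores
    it against the 2D fluoroscopy image."""
    if initial is None:
        initial = np.zeros(6)    # start from the pre-registered pose

    def cost(params):
        return -similarity(render(volume, params), fluoro)

    return minimize(cost, initial, method="Powell").x
```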
  • In the manual mode, the similarity optimiser step 101 is replaced by the controller 103, which passes the translation/rotation offsets of the manual image registration in the form of control signal C4 to both volume renderers 102 and 105.
  • In this case, the DRR (step 102) and filtered 2D fluoroscopy images (115) will be rendered to screen space with the user-selected zooming and panning scale, based on the current (normally inverse) perspective projection and viewing transformation as obtained for the current C-arm position by the machine-based 2D/3D registration.
  • Image registration results may be displayed either next to or in place of the 2D/3D roadmapping information in a fused 2D/3D combination using IP steps 121 and 122.
  • Image registration results in this sense may include intermediate image registration results and/or final image registration results.
  • In the following, the volume initialisation is described.
  • the 3D volume textures, as used by the volume renderers 102 and 105, will be initialised in block V.
  • In step 107, based on histogram information of the 3D volume, automatic segmentation thresholds C1, C2 for bone and vessel information (if present) will be determined.
  • Segmentation in this sense is a process of deciding whether a pixel is part of an object to be measured or processed, or is merely part of the background and to be excluded from analysis. This process generally uses the intensity of the pixel to make the decision.
  • the image may be segmented by selecting an upper and lower threshold to define a range of acceptable grayscale levels, and the image processor would group all of the contiguous pixels that fall within that range into "objects".
  • the segmentation thresholds C1, C2 will be communicated to the volume renderer 105 (contrast) and the DRR renderer 102 (bone). Furthermore, the contrast threshold is passed to step 108. It should be noted that for the 3D roadmapping interventional procedure, high-contrast vessel information is of particular interest.
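  • How the thresholds are derived from the histogram is not specified; the percentile heuristic below is purely an assumed placeholder, shown together with the range-threshold segmentation described above.

```python
import numpy as np

def auto_thresholds(volume, bone_pct=98.0, vessel_pct=99.5):
    """Derive bone (C1) and contrast/vessel (C2) thresholds from the
    volume histogram (percentiles are a hypothetical heuristic)."""
    return np.percentile(volume, bone_pct), np.percentile(volume, vessel_pct)

def segment(volume, lower, upper):
    """Group voxels whose intensity falls within [lower, upper], as in
    the range-threshold segmentation described above."""
    return (volume >= lower) & (volume <= upper)
```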
  • the vessel tree is visualized based on the segmentation threshold by controlling the transfer function during the direct volume rendering step 106. During other procedures, like needle guidance, the transfer function is controlled to visualize the (soft-tissue) target.
  • the image-based registration is based on the bone information present in both the 2D and 3D signals. In other embodiments, the image-based registration is based on other image information present in both the 2D and 3D signals.
  • Within the volume of interest (V.O.I.), the contrast information, if present, is removed and the resulting volume textures as used by the DRR renderer 102 are uploaded to the texture maps 110. In other words, from the 3D signal used by the DRR renderer the contrast signal, if present, is removed, and the (bone) threshold is used by the DRR transfer function to put emphasis on the bone information.
  • Control information C6 on the volumes and/or areas of interest as used during the registration may be passed to the DRR renderer 102 and/or the similarity optimizer 101.
  • a method/framework of 2D/3D image registration is disclosed.
  • accurate 2D/3D machine-based registration is used in combination with manual and/or automatic image-based 2D/3D registration to tackle the problem of patient-movement detection and correction during an intervention; this can be implemented efficiently and executed transparently within the context of a dynamic 2D/3D roadmapping viewing application using off-the-shelf graphics hardware.
  • the motion detection/compensation can be made an intrinsic part of the roadmapping visualisation that can execute either transparently in the background or be used to visualise/control the automatic and/or manual compensation.
  • the state of the movement detection/compensation can be presented as a natural part of the roadmapping process.
  • the time-consuming DRR renderer will execute on the GPU.
  • the similarity determination/optimiser may be executed on the CPU, which offers a scalable and flexible solution for changes in the optimising strategies.
  • For DRR generation, a fast gradient-emulation algorithm is used, based on a lookup-table (LUT) implementation that puts special emphasis on the edges of the structures within the 3D volume.
  • in parallel, the image-based 2D/3D registration, which keeps track of patient movements, will execute.
  • the image-based 2D/3D registration can be started either automatically or manually by the user.
  • the motion detection/compensation is running in the background, not visible to the user.
  • an initialisation/continuation of this process can be visualised upon a user selection.
  • the user can decide to perform the registration manually.
  • the system will start the optimiser loop and will come up with the 3D rotation/translation compensation within a couple of seconds; this compensation will be taken into account from then on in the new 2D/3D image fusion over the various C-arm geometry positional changes.
  • the optimiser will need to execute at least once to come up with a good similarity measure in order to start the movement-monitoring process. In the monitoring step, only one similarity comparison step will be needed to check for movement. In this way, as long as the C-arm position is unchanged, new incoming fluoroscopy acquisitions can be tested for movements within a few hundredths of a second.
  • During a registration, both the 2D acquisition and the geometry settings as used by the DRR renderer will be frozen until the 2D/3D registration is finished. Once finished, the result will be taken over by the 2D/3D roadmapping renderer.
  • Once the C-arm is fixed for a certain period of time and a certain number of fluoroscopy image acquisitions have been acquired in this position, a new initial similarity value for the movement-detection monitoring will be evaluated for this new position, including the last performed correction. From then on, the process described is repeated (in monitoring mode), i.e. the position will be checked in a loop each time a preset number of acquisitions has entered, which may again lead to an optimiser compensation loop, etc.
  • the 2D/3D roadmapping visualisation approach as outlined above can be performed on 3D data acquired from different examinations or from different modalities. Further, 2D/3D roadmapping can be used effectively for percutaneous interventions where a needle is inserted into the patient. Prior to the intervention, the needle path is planned using the pre-interventional 3D volume, and along this path a 3D (ruler) graphic is rendered in the 3D volume. Based on the planned needle path, the automatic position control of the C-arm is programmed to look exactly in the direction of the planned needle, so that under fluoroscopy a "pinpoint approach" can be used to insert the needle at the right position/angle. Furthermore, a view direction is programmed orthogonal to the needle.
  • a computer program may be stored/distributed on a suitable medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Abstract

The inventive method comprises providing 3D volume data (54) of a region of interest and providing 2D image data (52) by imaging the region of interest with an imaging system. A registration of the 3D volume data (54) and the 2D image data (52) is performed and thereby registered 3D volume data and 2D image data are provided. The registration includes a machine-based registration of the 3D volume data and the 2D image data. Further, a transformation specification (62) is provided which is defined by an image registration (60) of the registered 3D volume data (54) and the 2D image data (52). Thereby any change in the spatial relationship between the position of the region of interest during acquisition of the 3D volume data (54) and the position of the region of interest during acquisition of the 2D image data (52) is described by the transformation specification (62) and may be taken into account in any further processing or in further image acquisition. In an embodiment of a 2D/3D roadmapping visualization, the image-based registration, which may be executed in the background, keeps track of patient movements.

Description

2D/3D IMAGE REGISTRATION
FIELD OF THE INVENTION
The invention relates to the field of medical imaging, and more specifically to a method of registering 3D volume data and 2D image data.
BACKGROUND OF THE INVENTION
US 7,010,080 discloses a method for marker-free automatic fusion of 2D fluoroscopic C-arm images with preoperative 3D images using an intraoperatively obtained 3D data record. During a medical interventional procedure involving a region of a patient, an intraoperative 3D image of the region is obtained using a C-arm x-ray system having a C-arm and having a tool plate attached to the C-arm x-ray system. An image-based matching of an existing preoperative 3D image of the region, obtained prior to the medical interventional procedure, relative to the intraoperative 3D image is undertaken, which generates a matching matrix. The tool plate is matched relative to a navigation system. Subsequently, a 2D fluoroscopic image is obtained using the C-arm x-ray system with the C-arm in any arbitrary position. A projection matrix is determined for matching the 2D fluoroscopic image to the 3D image. The 2D fluoroscopic image is matched with the preoperative 3D image using the matching matrix and the projection matrix.
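To make the matrix bookkeeping in this prior-art scheme concrete, the sketch below shows how a 3D point set could be mapped onto the 2D fluoroscopic image by chaining a matching matrix and a projection matrix. The homogeneous-coordinate conventions and matrix shapes are assumptions for illustration, not taken from US 7,010,080.

```python
import numpy as np

def project_to_fluoro(points_3d, matching_matrix, projection_matrix):
    """Map preoperative 3D points (n, 3) to 2D pixel coordinates: the 4x4
    matching matrix aligns them to the intraoperative frame, then the
    3x4 projection matrix maps them onto the fluoroscopic image."""
    n = points_3d.shape[0]
    homog = np.hstack([points_3d, np.ones((n, 1))])   # (n, 4)
    aligned = matching_matrix @ homog.T               # (4, n)
    image = projection_matrix @ aligned               # (3, n)
    return (image[:2] / image[2]).T                   # (n, 2)
```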
SUMMARY OF THE INVENTION
It would be advantageous to achieve a method or a device which provides registered 3D volume data and 2D image data of a region or volume of interest while movements of the patient region of interest are taken into account. In this respect, a region of interest may be a portion of a patient which is imaged with an imaging system.
To better address this concern, in a first aspect of the invention a method is presented which comprises providing 3D volume data of a region of interest and providing 2D image data by imaging the region of interest with an imaging system. The method further comprises performing a registration of the 3D volume data and the 2D image data and thereby providing registered 3D volume data and 2D image data, wherein the registration includes a machine-based registration of the 3D volume data and the 2D image data. Further, the method according to the first aspect of the invention comprises providing a transformation specification defined by an image registration of the registered 3D volume data and 2D image data.
This method has the advantage that any change in the spatial relationship between the position of the region of interest during acquisition of the 3D volume data and the position of the region of interest during acquisition of the 2D image data is described by the transformation specification and may be taken into account in any further processing or in further image acquisition. Such a change in the spatial relationship may occur due to patient movements. According to an embodiment of the invention, a method is presented wherein providing the transformation specification includes storing the transformation specification.
According to another embodiment of the invention, a method is presented which further comprises providing further 2D image data by imaging the region of interest with the imaging system. The method further comprises performing a registration of the 3D volume data and the further 2D image data and thereby providing registered 3D volume data and further 2D image data, wherein the registration of the 3D volume data and the further 2D image data includes performing a machine-based registration of the 3D volume data and the further 2D image data and further includes performing a transformation according to the transformation specification. This embodiment has the advantage that, in addition to the machine-based registration, a change in the spatial relationship between the position of the region of interest during acquisition of the 3D volume data and its position during acquisition of the further 2D image data, as described by the transformation specification, is taken into account by the registration of the 3D volume data and the 2D image data.
It should be noted that throughout this specification, the term "registered 3D volume data and (further) 2D image data" includes at least one of "registered by performing only a machine-based registration of the 3D volume data and the (further) 2D image data" and "registered by performing a machine-based registration of the 3D volume data and the 2D image data and by further performing a transformation according to a transformation specification". In other words, "registered 2D image data and 3D volume data" may have been registered by using only machine-based registration or by using machine-based registration in combination with a further registration based on further registration information.
According to still another embodiment, a method is presented wherein the image-based registration is performed manually. According to still another embodiment, the manual image-based registration will automatically generate the transformation specification. According to an embodiment, a method is presented which further comprises comparing the registered 3D volume data and further 2D image data on an image base for detecting a change in spatial position of the region of interest. Upon a detection of a change in spatial position of the region of interest, a signal indicative of the change in spatial position is generated. According to an embodiment, this signal indicative of the change in spatial position may be a visible or audible signal. According to still another embodiment, a user interface may be provided by which a user may perform a manual image registration of the registered 2D image data and 3D volume data in response to the signal indicative of the change in spatial position. This has the advantage that only little computational power is required for the image registration. According to still another embodiment, a user interface may be provided by which a user may initiate an automatic image registration of the registered 2D image data and 3D volume data in response to the signal indicative of the change in spatial position.
According to still another embodiment, a method is presented which further comprises automatically performing the image-based registration of the 3D volume data and the 2D image data in response to the signal indicative of the change in spatial position. This has the advantage that no user interaction is necessary to perform the image-based registration.
According to an embodiment of the invention, a method is presented which further comprises providing a digital reconstructed radiograph for the image-based registration, wherein the digital reconstructed radiograph is obtained by a perspective projection of the 3D volume data. Providing a digital reconstructed radiograph has the advantage that any comparison and/or registration of the 3D volume data and the 2D image data can be reduced to a comparison and/or registration of the digital reconstructed radiograph and the 2D image data. That is, the comparison and/or registration is reduced to a comparison and/or registration of two sets of 2D image data.
According to an embodiment, the perspective projection takes an imaging state of the imaging system during acquisition of the 2D image data into account. According to other embodiments, in addition or alternatively to the imaging state, the perspective projection takes the transformation specification defined by an image registration of the registered 3D volume data and the 2D image data into account.
According to another embodiment, rendering the digital reconstructed radiograph from the 3D volume data is performed by a graphics processing unit and the image-based registration of the digital reconstructed radiograph with the 2D image data is performed by a central processing unit. Performing the image-based registration by the central processing unit includes at least one of "performing instructions of an automatic image-based registration" and "performing instructions of a manual image-based registration". According to other embodiments, the image-based registration process is implemented in part on the graphics processing unit. According to still other embodiments, the whole image-based registration process is implemented on the graphics processing unit. According to still another embodiment, a method is presented which further comprises a roadmapping visualization wherein the 3D volume data and 2D image data are displayed as a merged image.
According to still another embodiment, a comparison of the 2D image data and the 3D volume data is performed in parallel to the roadmapping visualization. According to an embodiment, by such a comparison, patient movements can be detected. Accordingly, in this embodiment, patient movements are detected in parallel to a roadmapping visualization, e.g. in parallel to an interventional application.
According to another embodiment, a method is presented wherein the 3D volume data of the region of interest is provided by a pre-interventional 3D imaging run and the 2D image data is provided by a live interventional x-ray fluoroscopy.
According to a second aspect of the invention, a computer program product is presented which enables at least one processor to carry out the method according to the first aspect of the invention or an embodiment thereof. According to a third aspect of the invention, an image processing unit is presented which is capable of performing the method according to the first aspect of the invention or an embodiment thereof.
According to a fourth aspect of the invention, an imaging system is presented which includes an image processing unit which is capable of performing the method according to the first aspect of the invention or an embodiment thereof. In summary, according to one aspect of the invention, a method comprises providing 3D volume data of a region of interest and providing 2D image data by imaging the region of interest with an imaging system in an imaging state. A registration of the 3D volume data and the 2D image data is performed and thereby registered 3D volume data and 2D image data are provided. The registration includes a machine-based registration of the 3D volume data and the 2D image data. Further, a transformation specification is provided which is defined by an image registration of the registered 3D volume data and the 2D image data. Thereby any change in the spatial relationship between the position of the region of interest during acquisition of the 3D volume data and the position of the region of interest during acquisition of the 2D image data is described by the transformation specification and may be taken into account in any further processing or in further image acquisition. In an embodiment of a 2D/3D roadmapping visualization, the image-based registration, which may be executed in the background, keeps track of patient movements. It should be understood that the invention is not about providing a diagnosis or about treating patients, but about a technical invention that provides a method, a computer program product and an interventional system that may assist a physician in reaching a diagnosis or treating a patient.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following detailed description, reference is made to the drawings in which Fig. 1 shows a schematic view of an imaging system according to an embodiment of the invention;
Fig. 2 shows a flowchart of a method according to another embodiment of the invention;
Fig. 3 shows a flowchart of a method according to still another embodiment of the invention;
Fig. 4 shows a flowchart of a method according to still another embodiment of the invention;
Fig. 5 shows in part a flowchart of a method according to still another embodiment of the invention, wherein a previously determined transformation specification is taken into account for 2D/3D registration;
Fig. 6 shows in part a flowchart of a method according to still another embodiment of the invention, wherein a region of interest is monitored for movement;
Fig. 7 shows a schematic view of an image processing unit according to still another embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS
With reference to the drawings, illustrative embodiments of the present invention will now be described in more detail. In an illustrated embodiment, an exemplary imaging setup is described which can be used for 3D roadmapping applications. In other embodiments other imaging systems can be used.
Fig. 1 shows a so-called C-arm x-ray imaging system 2, wherein an x-ray source 4 is mounted in diametrically opposed relation to an x-ray detector 6 at a C-arm 8. The C-arm 8 is rotatably mounted in a curved guide 10 to be rotatable about a first axis of rotation 12 (perpendicular to the drawing plane). The curved guide 10 is rotatably mounted at a support 14 to be rotatable about a second axis of rotation 16. The x-ray detector is linearly movable along a linear axis 18. By the described mounting scheme the x-ray source 4 and the x-ray detector 6 form an imaging unit 20 which is rotatable and linearly moveable with respect to a so-called iso-center 21, where the three axes 12, 16 and 18 meet. In the illustrated embodiment a region of interest 22 of a patient 24 is located in or close to the iso-center 21 of the C-arm 8. In other embodiments, the region of interest 22 may be displaced from the iso-center 21. The imaging system further includes a table 26, on which the patient 24 is received. The table 26 is movable by a drive unit 28. Other embodiments do not contain a drive unit 28. In the illustrated embodiment, the drive unit 28 is positioned between a table support 30 and the table 26. The table support 30 is mounted on the floor 32.
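By way of illustration only, the pose of such a C-arm can be modelled as two successive rotations about the axes 12 and 16, with the source and detector placed on opposite sides of the iso-center 21. The following sketch assumes the axis conventions, function names and distances shown; none of these values are taken from the embodiment itself.

```python
import numpy as np

def rot_about_x(a):  # rotation about the first axis 12 (perpendicular to the drawing plane)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_about_z(a):  # rotation about the second axis 16 (through the support 14)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def c_arm_pose(angle1_deg, angle2_deg, source_iso_mm=810.0, iso_detector_mm=410.0):
    """Positions of x-ray source 4 and detector 6 relative to the iso-center 21."""
    r = rot_about_z(np.radians(angle2_deg)) @ rot_about_x(np.radians(angle1_deg))
    source = r @ np.array([0.0, 0.0, -source_iso_mm])     # source on one side of the iso-center
    detector = r @ np.array([0.0, 0.0, iso_detector_mm])  # detector, movable along axis 18
    return source, detector

src, det = c_arm_pose(30.0, 15.0)
print(src, det)
```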
The imaging system 2 further includes a control unit 34 for providing control signals 36 to the imaging unit 20 and the drive system of the C-arm (not shown). Feedback signals 37 may be provided to the control unit, wherein the feedback signals 37 may include at least one imaging parameter of the imaging unit 20, e.g. tube voltage, aperture setting, etc. Further, the feedback signals 37 may include position signals of the drive systems of the C-arm, e.g. position signals which indicate the spatial position of the x-ray source 4 and the x-ray detector 6, respectively. The control unit 34 further receives image data 38 which are generated by the imaging unit 20 in response to x-rays acquired by the x-ray detector 6. In response to the received image data 38, the control unit 34 generates display signals 40 in response to which a display device 42 generates a visible image. The control unit 34 may be adapted to further provide control signals 44 to the drive system 28 of the table 26 in order to move the table 26 and hence the patient 24 to a desired position. According to an embodiment, the table drive system 28 provides position signals 46 indicating the actual position of the table 26 to the control unit 34. The imaging system may further include a user interface 48 for signalling user commands 50 to the control unit 34.
In the illustrated embodiment, the imaging unit 20 is a floor-mounted imaging unit. In other embodiments of the invention, a ceiling-mounted imaging unit is used. In still other embodiments of the invention, a mobile imaging unit is used. An embodiment of the invention relates to a 2D/3D roadmapping application, wherein 2D image data of the region of interest are fused with 3D volume data of the region of interest 22. According to an embodiment, the roadmapping application does not require an administration of a contrast agent during acquisition of the 2D image data. Rather, according to this embodiment, a contrast agent is only administered to the patient for 3D image acquisition. The imaging system 2 illustrated in Fig. 1 is capable of providing 2D image data of the region of interest 22 by imaging the region of interest 22 with the imaging system 2 with the x-ray source 4 and the x-ray detector 6 in a spatially fixed position. In a roadmapping application, the 2D image data may be provided in real time or in near real time to enable a physician to track the position of an object under consideration, e.g. a catheter, a stent, a coil, a radio-opaque glue (e.g. ONYX) that is injected for embolization purposes, etc. Together with the 2D image data, 3D volume data are visualized in the 2D/3D roadmapping application as a fused image. The visualization of the 3D volume data, which are typically acquired by a pre-interventional imaging run, allows the physician to spatially locate the object under consideration in the region of interest 22. According to an embodiment of the invention, the 3D volume data are provided by a rotational run of the imaging unit 20 of the imaging system 2, i.e. the 3D volume data are provided by the same imaging system which is used to provide the 2D image data. In other embodiments, the 3D volume data may be taken with another 3D imaging system. Moreover, the other 3D imaging system may even work on a basis different from the 2D imaging system. For example, the 3D imaging system may be a computed tomography device (CT), a magnetic resonance tomography device (MRT), an ultrasound device (US), a positron emission tomography device (PET), etc. The 2D image data may be obtained from x-ray fluoroscopy, ultrasound, etc. According to an embodiment, one set of 3D volume data is provided whereas a sequence of 2D image data is provided, corresponding to a sequence of 2D images. The 2D image data are provided by imaging the region of interest 22 with the imaging system 2 in a certain imaging state. The imaging state may be characterized by imaging parameters like settings of the imaging unit 20, by the spatial position of the imaging unit 20 and by the spatial position of the region of interest 22, i.e. by the spatial position of the table 26. For displaying a fused image of the 3D volume data and the 2D image data, a registration of the 3D volume data and the 2D image data is necessary, i.e. the 3D volume data and the 2D image data must be spatially aligned such that they display the region of interest 22 from an identical viewpoint.
Generally, such a 2D/3D image registration can be performed on a machine basis, i.e. the control signals 36 to the imaging unit 20 and to the drive systems of the imaging unit 20 as well as the control signals 44 to the table drive system 28 can be used to register the 3D volume data and the actual 2D image data. This so-called machine-based registration of the 2D image data and the 3D volume data requires that the region of interest does not move. To prevent movements of the region of interest 22, the patient 24 may be immobilized on the table. However, if the patient moves despite immobilization, the spatial correspondence between the 2D image data and the 3D volume data is disturbed or lost. The same problem arises when the interventional procedure moves the region of interest 22 with respect to the immobilized parts of the patient 24. In any case, a change in spatial position of the region of interest is not always directly visible during intervention. In other situations, for instance when interventional material (guidewires, stents, coils, etc.) is or has been inserted during an intra-arterial intervention, patient movements may displace the materials relative to the vessel as displayed in the roadmapping presentation.
According to the invention, a registration of the 3D volume data and the 2D image data is performed wherein the registration of the 3D volume data and the 2D image data includes a machine-based registration of the 3D volume data and the 2D image data. The registration provides registered 3D volume data and 2D image data.
Further, a transformation specification defined by an image registration of the registered 3D volume data and 2D image data is provided. Providing a transformation specification which is defined by the image registration of the at least machine-based registered 3D volume data and 2D image data has the advantage that this transformation specification may be used for accurate registration of the 2D image data of two or more 2D images with the 3D volume data. This provides for a fast registration of the 2D image data and the 3D volume data, while patient movements or a movement of the region of interest is taken into account by an image-based registration. Compared to a mere image-based 2D/3D registration, a method according to an embodiment of the invention is faster and has a higher accuracy and a greater capture range. An embodiment of the inventive method provides for a 2D/3D registration that fits smoothly and transparently in the interventional application. The registration which has been carried out prior to the image-based registration is referred to as pre-registration in the following, and the respectively registered 2D image data and 3D volume data are referred to as pre-registered 2D image data and 3D volume data. However, it should be noted that this wording is used merely for ease of distinguishing between the (pre-)registration process carried out prior to the actual image registration and the actual image registration itself, and does not limit the pre-registration to any extent. For example, the pre-registration itself may include further registration processes in addition to the machine-based registration.
Fig. 2 schematically illustrates an embodiment of a method according to the invention, wherein the 2D image data 52 and 3D volume data 54 are provided. Further, imaging parameters 56 which define a certain imaging state of the imaging system are provided. The imaging state to which the imaging parameters 56 correspond is the imaging state during the acquisition of the 2D image under consideration. It should be understood that during the interventional procedure the imaging state may be altered by the user, e.g. by rotating the C-arm 8 about at least one of its axes of rotation 12, 16, by changing the distance between the x-ray source 4 and the x-ray detector 6 along the linear axis 18, by changing tube settings, by changing the position of the table 26, etc. All these imaging parameters and changes thereof may be accounted for by a respective machine-based registration 58 of the 3D volume data and the actual 2D image data under consideration, which have been taken using the specific imaging parameters 56. It should be noted that the machine-based registration of the 3D volume data 54 and the 2D image data 52 does not directly involve or use the 2D image data, but rather the imaging parameters 56 which have been used to acquire the 2D image data.
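A minimal sketch of how the imaging parameters 56 could feed such a machine-based registration follows; the ImagingState container, the parameter names and the matrix composition are assumptions made for illustration, not the actual implementation of the imaging system 2.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ImagingState:           # hypothetical container for the imaging parameters 56
    rot1_deg: float           # C-arm angle about axis 12
    rot2_deg: float           # C-arm angle about axis 16
    sid_mm: float             # source-detector distance along the linear axis 18
                              # (would parameterise the perspective projection, omitted here)
    table_mm: np.ndarray      # position of the table 26

def machine_based_registration(state: ImagingState) -> np.ndarray:
    """4x4 view transform that maps volume coordinates into the C-arm frame,
    derived purely from machine parameters (no image content is used)."""
    a1, a2 = np.radians(state.rot1_deg), np.radians(state.rot2_deg)
    rx = np.array([[1, 0, 0], [0, np.cos(a1), -np.sin(a1)], [0, np.sin(a1), np.cos(a1)]])
    rz = np.array([[np.cos(a2), -np.sin(a2), 0], [np.sin(a2), np.cos(a2), 0], [0, 0, 1]])
    r = rz @ rx
    view = np.eye(4)
    view[:3, :3] = r.T                      # inverse rotation: world -> C-arm frame
    view[:3, 3] = -r.T @ state.table_mm     # compensate the table translation
    return view

state = ImagingState(30.0, 15.0, 1200.0, np.array([0.0, 10.0, -5.0]))
print(machine_based_registration(state))
```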
After the machine-based registration of the 3D volume data 54 and the 2D image data 52, an image-based registration 60 of the 2D image data 52 and the 3D volume data 54 is performed. The image-based registration 60 allows for providing the transformation specification indicated at 62 in Fig. 2, wherein the transformation specification is defined by the image registration 60. Thus the transformation specification describes any change in spatial position of the region of interest 22 with regard to the machine-based registered 2D image data and 3D volume data. It should be noted that within this specification and the claims, a "change in position" includes translational position changes as well as changes in orientation.
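Such a transformation specification can be represented, for example, as a rigid 4x4 transform built from three translational and three rotational offsets. The sketch below assumes a Z-Y-X Euler-angle convention; the embodiment itself does not prescribe a parameterisation.

```python
import numpy as np

def transformation_specification(tx, ty, tz, rx, ry, rz):
    """Rigid 4x4 transform from the six offsets (Tx, Ty, Tz, Rx, Ry, Rz)
    found by the image registration; the Euler order is an assumption."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    r = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
         np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
         np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    t = np.eye(4)
    t[:3, :3] = r
    t[:3, 3] = (tx, ty, tz)
    return t

# Taking a prior specification 66 into account (cf. Fig. 3) is then a matrix product:
new_spec = transformation_specification(1.0, 0.0, -2.0, 0.01, 0.0, 0.02)
prior_spec = transformation_specification(0.5, 0.5, 0.0, 0.0, 0.0, 0.0)
print(new_spec @ prior_spec)
```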
The method illustrated in Fig. 2 may be carried out to initially register the 3D volume data and the 2D image data. In particular when the 3D volume data have been taken with a 3D imaging system different from the 2D imaging system, such an initial registration may be carried out. Further, the method illustrated in Fig. 2 may be carried out to take movements of the region of interest during the interventional procedure into account. In accordance with the illustrated method, the image-based registration 60 is carried out on the basis of the machine-based registered 2D image data 52 and 3D volume data 54. In other words, according to an embodiment, no prior image-based registrations are taken into account in the actual image-based registration 60. In other embodiments, already performed image-based registrations are taken into account when performing the actual image-based registration, as described below in greater detail with regard to Fig. 3. The image-based registration may be carried out automatically, e.g. by using a similarity optimizer loop. In other embodiments, the image registration is carried out manually. For manual image registration, the pre-registered 3D volume data and 2D image data, i.e. in the illustrated embodiment the machine-based registered 3D volume data and 2D image data, are displayed on a display device, e.g. on the display device 42, and the user performs the image registration by effecting relative translation and/or rotation of the 3D volume data and the 2D image data, e.g. by operating the user interface 48, the corresponding user commands being signalled to the control unit. The manually performed translation and/or rotation defines the transformation specification corresponding to the manual image registration.
Fig. 3 shows a further embodiment of a method according to the invention. The method illustrated in Fig. 3 differs from the method illustrated in Fig. 2 in a pre-registration 64 wherein a transformation of the 3D volume data is performed according to a prior transformation specification 66, which has been obtained in an already performed, prior image-based registration process. In addition, the pre-registration 64 includes the machine-based registration as performed in the method of Fig. 2. By taking one or more prior image-based registration processes into account, it is likely that the pre-registered 2D image data and 3D volume data match better than they would do after a mere machine-based registration. Accordingly, the actual image-based registration 60, which is performed after the pre-registration 64, is possibly carried out faster compared to an actual image-based registration carried out on only machine-based registered 2D image data and 3D volume data. In the embodiment illustrated in Fig. 3, the new transformation specification 62 describes the transformation of the pre-registered 3D volume data and the 2D image data to the registration state after completing the image registration 68. Since the already performed image registrations are contained in the pre-registration 64, the new transformation specification describes, according to an embodiment, the change in position of the region of interest 22 after the last already performed image registration. The other method steps in Fig. 3 correspond to the method steps illustrated in Fig. 2, the description of which is not repeated here.
Fig. 4 shows a further embodiment of a method according to the invention. The method illustrated in Fig. 4 differs from the method illustrated in Fig. 2 in that the image-based registration 68 takes into account a prior transformation specification 66 which is defined by already performed image registrations of the 3D volume data and the 2D image data. In an embodiment, the image-based registration 68 includes performing a transformation of the machine-based registered 3D volume data and 2D image data according to the prior transformation specification 66 and, subsequently, performing an actual image-based registration, e.g. by a similarity optimizing loop, in order to complete the image-based registration 68. Similar to the method illustrated in Fig. 2, the whole transformation specification which describes the transformation of the machine-based registered 3D volume data and the 2D image data to the registration state after completing the image registration 68 is provided as new transformation specification 62.
According to an embodiment of the invention, a method further comprises providing further 2D image data 52 by imaging the region of interest with the imaging system 2 and performing a registration 70 of the 3D volume data 54 and the further 2D image data 52 to thereby provide registered 3D volume data and further 2D image data. Herein, the registration 70 of the 3D volume data and the further 2D image data includes performing a machine-based registration of the 3D volume data and the further 2D image data by taking imaging parameters 56 into account and further includes performing a transformation according to a transformation specification 62. An example of a method of this kind is illustrated in Fig. 5. The transformation specification 62 may be obtained according to any suitable method described herein, e.g. according to one of the methods illustrated in Fig. 2, Fig. 3 and Fig. 4. The registered 2D image data and 3D volume data may be taken as input of an image fusion process 72, where the registered 2D image data and the 3D volume data are overlaid (fused). The fused image is then displayed on a display device, indicated at 74 in Fig. 5.
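As an illustration of the image fusion process 72, a simple alpha blend of the registered 2D image onto the rendered 3D image could look as follows; the blending rule and the weight are assumptions.

```python
import numpy as np

def fuse_images(fluoro, volume_rgb, alpha=0.5):
    """Blend a registered 2D fluoroscopy image (H x W, values in [0, 1]) onto
    a rendered 3D volume image (H x W x 3); plain alpha blending is assumed."""
    return alpha * fluoro[..., None] + (1.0 - alpha) * volume_rgb

fluoro = np.random.rand(256, 256)
volume_rendering = np.random.rand(256, 256, 3)
fused = fuse_images(fluoro, volume_rendering)
print(fused.shape)  # (256, 256, 3), ready for display (74)
```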
In the method illustrated in Fig. 5, no actual image-based registration is performed; rather, a transformation specification 62 obtained by an already performed image registration is taken into account. An actual image-based registration may be performed after a predetermined time interval or after a predetermined number of acquired images. The 2D images which are not used for an actual image registration are registered by a machine-based registration using the imaging parameters 56 and taking the available transformation specification 62 into account. According to another embodiment, the 2D image data is checked for movements of the region of interest 22 and the image-based registration is carried out when a movement of the region of interest 22 is detected. Fig. 6 shows an example of such a check for movements of the region of interest 22. To this end, 3D volume data and 2D image data are registered by performing a registration 64 which includes performing a machine-based registration of the 3D volume data and the 2D image data by taking imaging parameters 56 into account and further includes performing a transformation according to a transformation specification 62. Up to this point, the method illustrated in Fig. 6 is similar to the method in Fig. 3. In contrast to the method illustrated in Fig. 3, according to the method illustrated in Fig. 6, no image-based registration is carried out on the registered 2D image data and the 3D volume data; rather, a similarity comparison of the 2D image data and the 3D volume data is performed. If no movement of the region of interest 22 has occurred, the 3D volume data matches the 2D image data. Otherwise, it is decided that a change in spatial position of the region of interest 22 has occurred and a signal indicative thereof is provided, indicated at 78 in Fig. 6.
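A minimal sketch of such a similarity comparison is given below, using normalized cross-correlation and a fixed threshold as assumed choices; the embodiment does not fix the similarity measure.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def movement_signal(fluoro, projected_3d, threshold=0.8):
    """Signal 78: True when the similarity between the 2D image data and the
    projected 3D volume data drops below an (assumed) threshold."""
    return ncc(fluoro, projected_3d) < threshold

fluoro = np.random.rand(128, 128)
projected = fluoro + 0.05 * np.random.rand(128, 128)  # still well aligned
print(movement_signal(fluoro, projected))  # False: no movement detected
```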
According to an embodiment of the invention, the image-based registration of the 3D volume data and the 2D image data is carried out in response to the signal indicative of the change in spatial position. For example, the image-based registration may be carried out according to one of the methods illustrated in Fig. 2, Fig. 3 and Fig. 4.
According to an embodiment, the image-based comparison and/or the image-based registration of the 2D image data and the 3D volume data may be carried out in parallel to the display of the fused image based on the 2D image data and the 3D volume data. That is, in an embodiment, a detection and/or correction of a change in spatial position of the region of interest is carried out in parallel to a 2D/3D roadmapping visualization. According to an embodiment, a comparison of the 2D image data and the 3D volume data, or a registration of the 2D image data and the 3D volume data, may include performing a corresponding projection transformation of the 3D volume data, thereby providing a digital reconstructed radiograph (DRR). The projection transformation may take the imaging parameters 56 as well as a transformation specification 62, 66 into account, depending on the method. The resulting digital reconstructed radiograph is a 2D image defined by corresponding 2D DRR image data which can be compared to or registered with the 2D image data 52.
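For illustration, a heavily simplified DRR can be produced by applying the current view rotation to the volume and integrating along the projection axis; this parallel-beam sketch stands in for the perspective projection described above.

```python
import numpy as np
from scipy.ndimage import affine_transform

def render_drr(volume, rotation):
    """Rotate the volume with the current 3x3 view rotation (about the volume
    center) and sum along one axis to obtain line integrals, i.e. a DRR."""
    center = (np.array(volume.shape) - 1) / 2.0
    offset = center - rotation @ center
    rotated = affine_transform(volume, rotation, offset=offset, order=1)
    return rotated.sum(axis=0)

volume = np.random.rand(64, 64, 64)
theta = np.radians(10.0)
rotation = np.array([[1, 0, 0],
                     [0, np.cos(theta), -np.sin(theta)],
                     [0, np.sin(theta),  np.cos(theta)]])
drr = render_drr(volume, rotation)
print(drr.shape)  # (64, 64): 2D DRR image data
```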
According to an embodiment of the invention, the rendering of the digital reconstructed radiograph from the 3D volume data is performed by a graphics processing unit of the control unit 34 and the image-based registration or the image comparison of the digital reconstructed radiograph with the 2D image data is performed by a central processing unit of the control unit 34. In this sense, the control unit 34 includes the functionality of an image processing system. According to other embodiments, a separate image processing system may be provided for the described functionality.
According to an embodiment, a computer program product is carried out in the control unit 34 which enables at least one processor, e.g. the central processing unit and the graphics processing unit, to carry out a method as described herein.
Fig. 7 describes an exemplary embodiment of a viewing architecture of an image processing system 100 employing an embodiment of a method according to the invention in the context of a 2D/3D roadmapping application, wherein the 2D image data represents an x-ray fluoroscopy. The data flow shows the images as they are stored in the various GPU-side image-buffers and processed by the different bold-circled graphics processing unit (GPU) 2D image processing (IP) blocks. These GPU IP blocks have a central processing unit (CPU) counterpart that is responsible for initialisation and parameterisation. Generally in Fig. 7, thin circles indicate a CPU program CPUP whereas bold circles indicate a GPU program GPUP. Further, thin rectangles indicate CPU image data CPUID whereas bold rectangles indicate GPU image data GPUID. Dotted arrows indicate control signals C1, C2, C4, C5, C6, thin arrows indicate 2D image data 2DID and bold arrows indicate 3D image data 3DID.
The image processing system of Fig. 7 includes one volume initialisation part V, five image processing steps I, II, III, IVa and IVb that are coupled by means of four off-screen image-buffers or framebuffer objects 104, 106, 116 and 117 and one or more display devices, e.g. viewing consoles 123, 124 indicated at VI. In the image processing step I, a motion detection/compensation is performed. In the image processing step II, a 3D volume rendering is performed. In the image processing step III, a 2D fluoroscopy image rendering is performed. In the image processing step IVb, a 2D/3D volume visualization is performed. In the image processing step IVa, a 2D/3D matching visualization is performed.
In the following, the roadmapping presentation will be described. The normal 2D/3D roadmapping visualisation will be executed in the outer loop of Fig. 7, i.e. in steps II, III, IVb, V and VI. The real-time fluoroscopy image will be processed in steps 112, 114, and 118. The 3D volume will be rendered in step 105. Both images will be fused in step 119, visualized in 120 and presented on a display device in 123, 124. According to an embodiment, the roadmapping presentation is displayed on an interventional display 124 and images related to the 2D/3D registration process are displayed on a control display 123. New 2D fluoroscopy images coming from real-time acquisitions enter in CPU process 111. Here the images are loaded into a 2D GPU texture-map 113. In the GPU step 112 the incoming images are noise filtered using both the incoming and stored images. In GPU step 114 the noise-filtered images are mapped to screen space, based on the current detector formats together with the user-selected zooming and panning information.
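The noise filtering in step 112, which uses both the incoming and the stored images, could for instance be a temporal recursive filter; the sketch below and its weight parameter are assumptions.

```python
import numpy as np

class RecursiveNoiseFilter:
    """Temporal recursive filter: each incoming fluoroscopy frame is blended
    with the stored result; the weight 0.3 is an assumed parameter."""
    def __init__(self, weight=0.3):
        self.weight = weight
        self.state = None

    def __call__(self, frame):
        if self.state is None:
            self.state = frame.astype(float)
        else:
            self.state = self.weight * frame + (1.0 - self.weight) * self.state
        return self.state

nf = RecursiveNoiseFilter()
for _ in range(5):                       # successive incoming frames
    filtered = nf(np.random.rand(256, 256))
print(filtered.shape)
```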
During the visualisation the 2D images enter block IVb. In GPU step 105 the 3D volumes are mapped to screen space, given one of a set of rendering modes and based on the current (normally inverse) perspective projection and viewing transformation as obtained for the current C-arm position by the machine-based 2D/3D registration, together with the user-selected zooming and panning information.
At this point the 2D images as stored in image-buffers 117 and 106 are spatially aligned and clinically compatible. Before the images are fused together in step 119, the 2D fluoroscopy image is used as a mask and, given the display polarity/colour, merged/blended onto the 3D volumes; this mask is processed in step 118. In this step the fluoroscopy image mask is processed with functions like guidewire enhancement and landmarking. For example, the opacity of the fluoro pixel is used as blending factor to blend a colour signal (active black, white or any other colour) onto the 3D information. In this way, low-contrast 2D (background) information does not obstruct the 3D information. In step 118, the projected 3D vessel region is used as a mask. Inside this region the contrast of the 2D fluoroscopic image is noise reduced, e.g. by a recursive filter over multiple acquisitions, and the contrast is decreased by a user-controlled factor called landmarking.
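A possible reading of this masking and blending step is sketched below; the function name, the landmarking factor and the use of a boolean vessel mask are assumptions for illustration.

```python
import numpy as np

def blend_fluoro_mask(fluoro_opacity, colour, volume_rgb, vessel_mask,
                      landmark_factor=0.5):
    """Inside the projected vessel mask the 2D contrast is attenuated by the
    user-controlled landmarking factor; the per-pixel fluoro opacity then
    blends a colour signal onto the 3D information."""
    opacity = np.where(vessel_mask, fluoro_opacity * landmark_factor, fluoro_opacity)
    return opacity[..., None] * colour + (1.0 - opacity[..., None]) * volume_rgb

h, w = 128, 128
fused = blend_fluoro_mask(np.random.rand(h, w),
                          np.array([0.0, 0.0, 0.0]),      # "active black" signal
                          np.random.rand(h, w, 3),
                          np.random.rand(h, w) > 0.5)
print(fused.shape)  # (128, 128, 3)
```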
It should be noted that not all steps as described above will need to be processed every time. If a new 2D image is coming in and the C-arm geometry has not changed, step 105 can be skipped and the image contained in 106 can be re-used. If only the viewing transformation changes (zooming or panning), the image upload to the GPU in 111 and the pre-processing in 112 can be skipped.
In the following, the image registration will be described. The image processing system 100 is capable of performing an automatic mode and a manual mode. In the automatic mode, movements of the region of interest are automatically detected and compensated. In the manual mode, movements of the region of interest are compensated by manual image registration of the 2D image data and 3D volume data. According to an embodiment of the manual mode, the movements of the region of interest are detected automatically, like in the automatic mode, and a signal indicative of the movement is signalled to the user, who may register the 2D image data and the 3D volume data manually in response to the signal indicative of the movement. In another embodiment of the manual mode, the movements of the region of interest are detected manually by the user. Parallel to the roadmapping visualisation as described above, the 2D/3D image registration will be executed in block I inside the inner dotted rectangle R. Further, in the inner dotted rectangle R, the visualization for visual feedback of the manual registration or for the progression presentation of the automatic registration process is performed. In the automatic registration mode the DRR (step 102) and filtered 2D fluoroscopy images (115) will be rendered to screen space at a fixed (full screen) scale, based on the current (normally inverse) perspective projection and viewing transformation as obtained for the current C-arm position by the machine-based 2D/3D registration. The registration will be orchestrated by the similarity-optimising loop that will control the DRR renderer (control 104) using six degrees of freedom (Tx, Ty, Tz, Rx, Ry, Rz), wherein Tx, Ty and Tz indicate translational degrees of freedom in x, y and z direction, respectively, and Rx, Ry and Rz indicate rotational degrees of freedom about axes in x, y and z direction, respectively. When the optimum is found, the change in spatial position, i.e. the rotation and displacement, will be passed to the volume renderer (105).
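A minimal stand-in for such a similarity-optimising loop over the six degrees of freedom is sketched below; Powell optimisation and normalized cross-correlation are assumed choices, and the toy renderer merely shifts a 2D image so that the example is self-contained with a known optimum.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def image_based_registration(render, fluoro, x0=None):
    """Search the six offsets (Tx, Ty, Tz, Rx, Ry, Rz) that maximise the
    similarity between the rendered DRR and the 2D fluoroscopy image."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return (a * b).mean()

    def cost(params):
        return -ncc(render(params), fluoro)  # minimise negative similarity

    result = minimize(cost, np.zeros(6) if x0 is None else x0, method="Powell")
    return result.x  # the change in position, passed on to the volume renderer 105

# toy renderer: only (Tx, Ty) move a fixed 2D image, so the optimum is known
base = np.random.rand(64, 64)
render = lambda p: shift(base, (p[0], p[1]), order=1)
target = shift(base, (3.0, -2.0), order=1)
print(image_based_registration(render, target))  # first two entries near (3, -2)
```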
During manual registration, the similarity optimiser step 101 is replaced by the controller 103 that will pass the translation/rotation offsets of the manual image registration in the form of control signal C4 to both volume renderers 102 and 105. In the manual registration mode the DRR (step 102) and filtered 2D fluoroscopy images (115) will be rendered to screen space with the user-selected zooming and panning scale, based on the current (normally inverse) perspective projection and viewing transformation as obtained for the current C-arm position by the machine-based 2D/3D registration.
During image registration, irrespective of whether the image registration is performed manually or automatically, the image registration results may be displayed either next to or in place of the 2D/3D roadmapping information in a fused 2D/3D combination using IP steps 121 and 122. Image registration results in this sense may include intermediate image registration results and/or final image registration results.
In the following, the volume initialisation is described. The 3D volume textures as used by the volume renderers 102 and 105 will be initialised in block V. In step 107, automatic segmentation thresholds C1, C2 for bone and vessel information (if present) will be determined based on histogram information of the 3D volume.
Segmentation in this sense is a process of deciding whether a pixel is part of an object to be measured or processed, or is merely part of the background and to be excluded from analysis. This process generally uses the intensity of the pixel to make the decision. For example, the image may be segmented by selecting an upper and lower threshold to define a range of acceptable grayscale levels, and the image processor would group all of the contiguous pixels that fall within that range into "objects". The segmentation thresholds C1, C2 will be communicated to the volume renderer 105 (contrast) and the DRR renderer 102 (bone). Furthermore, the contrast threshold is passed to step 108. It should be noted that for the 3D roadmapping interventional procedure, high-contrast vessel information is of particular interest. The vessel tree is visualized based on the segmentation threshold by controlling the transfer function during the direct volume rendering step 105. During other procedures like needle guidance, the transfer function is controlled to visualize the (soft tissue) target. According to an embodiment, the image-based registration is based on the bone information present in both 2D and 3D signals. In other embodiments, the image-based registration is based on other image information present in both 2D and 3D signals. In step 108 the volume of interest (V.O.I.) is sub-sampled, the contrast information, if present, is removed, and the resulting volume textures as used by the DRR renderer 102 are uploaded to the texture maps 110. That is, from the 3D signal used by the DRR renderer the contrast signal, if present, is removed, and the (bone) threshold is used by the DRR transfer function to put emphasis on the bone information. In step 108 a further limitation C6 of the volumes and/or areas of interest as used during the registration may be passed to the DRR renderer 102 and/or the similarity optimizer 101.
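Histogram-based threshold determination as in step 107 could, for example, be approximated with intensity percentiles; the percentile values below are assumptions, as the embodiment only states that the thresholds are derived from histogram information.

```python
import numpy as np

def auto_thresholds(volume, bone_percentile=90.0, vessel_percentile=99.0):
    """Derive segmentation thresholds C1 (bone) and C2 (contrast-filled
    vessels) from the intensity histogram of the 3D volume."""
    foreground = volume[volume > 0]          # ignore air/background voxels
    c1 = np.percentile(foreground, bone_percentile)
    c2 = np.percentile(foreground, vessel_percentile)
    return c1, c2

volume = np.random.gamma(2.0, 200.0, size=(64, 64, 64))
c1, c2 = auto_thresholds(volume)
print(c1, c2)
```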
In summary, a method/framework for 2D/3D image registration is disclosed. In an embodiment, accurate 2D/3D machine-based registration is used in combination with manual and/or automatic image-based 2D/3D registration to tackle the problem of patient movement detection and correction during an intervention, in a way that can be implemented efficiently and executed transparently within the context of a dynamic 2D/3D roadmapping viewing application using off-the-shelf graphics hardware.
By using machine-based 2D/3D registration the (inverse) perspective projection and the viewing transformation can be determined accurately. By using this form of registration in combination with the (inverse) perspective 3D volume/graphics model/view transformation, real-time 2D fluoroscopy images can be fused accurately and effectively with the 3D information as long as the modelling transformation can be expressed by the identity matrix, or in other words as long as the patient/table does not move.
By the combination of 2D/3D machine-based registration with 2D/3D image-based registration, the corrections of the image-based registration are invariant over the various geometry changes, so the roadmapping visualisation procedure can proceed while in the background new patient movements are anticipated with the image-based 2D/3D registration as described below. Furthermore, by the encapsulation of the 2D/3D image-based registration inside the 2D/3D viewing pipeline, the motion detection/compensation can be made an intrinsic part of the roadmapping visualisation that can execute either transparently in the background or be used to visualise/control the automatic and/or manual compensation. Furthermore, the state of the movement detection/compensation can be presented as a natural part of the roadmapping process. In order to speed up the image-based 2D/3D registration process, the time-consuming DRR renderer will execute on the GPU. Further, the similarity determination/optimiser may be executed on the CPU, which offers a scalable and flexible solution for changes in the optimising strategies. For the DRR generation a fast gradient emulation algorithm is used, based on a lookup table (LUT) implementation that puts special emphasis on the edges of the structures within the 3D volume.
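One way such a LUT-based gradient emphasis could work is to quantise the gradient magnitude of the volume and map it through a lookup table that boosts edge voxels before DRR integration; the linear-ramp LUT below is an assumption, not the algorithm of the embodiment.

```python
import numpy as np

def lut_edge_weights(volume, n_bins=256, edge_gain=4.0):
    """Quantise the gradient magnitude of the volume and map it through a
    lookup table so that voxels on structure edges get higher DRR weights."""
    grad = np.linalg.norm(np.stack(np.gradient(volume.astype(float))), axis=0)
    bins = np.minimum((grad / (grad.max() + 1e-12) * (n_bins - 1)).astype(int),
                      n_bins - 1)
    lut = 1.0 + edge_gain * np.linspace(0.0, 1.0, n_bins)  # emphasis grows with gradient
    return lut[bins]

volume = np.zeros((32, 32, 32))
volume[8:24, 8:24, 8:24] = 1.0               # a cube with sharp edges
print(lut_edge_weights(volume).max())        # edge voxels get the largest weights
```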
Within the 2D/3D roadmapping visualisation context, the image-based 2D/3D registration that keeps track of patient movements will be executed. The image-based 2D/3D registration can be started either automatically or manually by the user. According to an embodiment, the motion detection/compensation is running in the background, not visible to the user. According to another embodiment, an initialisation/continuation of this process can be visualised upon a user selection. According to another embodiment, the user can decide to perform the registration manually.
In the automatic mode, once a displacement is noticed the system will start the optimiser loop and will come up with the 3D rotation/translation compensation in a couple of seconds, which will be taken into account from then on in the new 2D/3D image fusion over the various C-arm geometry positional changes. Initially the optimiser will need to execute at least once to come up with a good similarity measure in order to start the movement monitoring process. In the monitoring step only one similarity comparison step is needed to check for movement. In this way, as long as the C-arm position is unchanged, new incoming fluoroscopy acquisitions can be tested for movements within a few hundredths of a second. If the C-arm geometry is changed, once a movement is signalled and the optimiser has been started, both the 2D acquisition and the geometry settings as used by the DRR renderer will be frozen until the 2D/3D registration is finished. Once finished, the result will be taken over by the 2D/3D roadmapping renderer. A new initial similarity value for the detection monitoring will be evaluated for this new position, including the last performed correction, once the C-arm is fixed for a certain period of time and a certain number of fluoroscopy images have been acquired in this position. From then on, the process as described is repeated (in monitoring mode), i.e. the positions will be checked in a loop each time a preset number of acquisitions have entered, which may again lead to an optimiser compensation loop, etc. It will be clear that if the C-arm is toggled through a set of pre-set positions, the similarities, once established, can be re-used for these different positions. The process as described above will execute at a global zoomed-out scale, transparent to the user, who will be able to zoom and pan at the viewing console in the usual way.
The 2D/3D roadmapping visualisation approach as outlined above can be performed on 3D data acquired from different examinations or from different modalities. Further, 2D/3D roadmapping can be used effectively for percutaneous interventions where a needle is inserted in the patient. Prior to the intervention the needle path is planned using the pre-interventional 3D volume, and along this path a 3D (ruler) graphic is rendered in the 3D volume. Based on the planned needle path, the automatic position control of the C-arm is programmed to look exactly in the direction of the planned needle, so that under fluoroscopy a "pinpoint approach" can be used to insert the needle at the right position/angle. Furthermore a view direction is programmed orthogonal to the needle. So once this C-arm view position is selected, the distance from the needle to the target can be followed in the live 2D/3D roadmapping presentation without foreshortening. If patient movement has been detected and corrected using the method outlined above, the 3D graphics will also automatically be transformed to the correct position according to the determined transformation. However, the pre-calculated C-arm positions are not correct anymore, so they will need to be corrected so that the clinical user can simply reposition the C-arm to see the effect of the interruption.
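For illustration, the automatic C-arm orientation along the planned needle path could be derived from the needle direction as sketched below; the angle convention and the function name are assumptions.

```python
import numpy as np

def needle_view(entry, target):
    """View direction along the planned needle path ('pinpoint approach') plus
    one direction orthogonal to the needle for following the depth."""
    d = np.asarray(target, float) - np.asarray(entry, float)
    d /= np.linalg.norm(d)
    rot1 = np.degrees(np.arctan2(d[1], d[0]))            # azimuth of the needle
    rot2 = np.degrees(np.arcsin(np.clip(d[2], -1, 1)))   # elevation of the needle
    ortho = np.cross(d, np.array([0.0, 0.0, 1.0]))       # orthogonal view direction
    ortho /= (np.linalg.norm(ortho) + 1e-12)
    return rot1, rot2, ortho

print(needle_view([0.0, 0.0, 0.0], [50.0, 20.0, 30.0]))
```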
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Variations to the discussed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. Further, each of the functions of the control system might be carried out by an individual processor or other unit. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.


CLAIMS:
1. Method comprising: providing 3D volume data (54, 109) of a region of interest (22); providing 2D image data (52, 111) by imaging said region of interest (22) with an imaging system (2); performing a registration of said 3D volume data (54, 109) and said 2D image data (52, 111) and thereby providing registered 3D volume data and 2D image data, wherein said registration of said 3D volume data (54, 109) and said 2D image data (52, 111) includes a machine-based registration of said 3D volume data and said 2D image data; providing a transformation specification (62) defined by an image registration (60, 68, 101, 103) of said registered 3D volume data and 2D image data.
2. Method according to claim 1, further comprising: providing further 2D image data (52, 111) by imaging said region of interest (22) with said imaging system; performing a registration of said 3D volume data (54, 109) and said further 2D image data (52, 111) and thereby providing registered 3D volume data and further 2D image data, wherein said registration of said 3D volume data and said further 2D image data includes performing a machine-based registration of said 3D volume data and said further 2D image data and further includes performing a transformation according to said transformation specification (62).
3. Method according to claim 2, further comprising: comparing (76) said registered 3D volume data and further 2D image data on an image base for detecting a change in spatial position of said region of interest (22); and upon a detection of a change in spatial position of said region of interest (22), generating a signal indicative of said change in spatial position (78).
4. Method according to claim 3, further comprising: automatically performing said image-based registration of said 3D volume data and said 2D image data in response to said signal indicative of said change in spatial position (78).
5. Method according to claim 1, further comprising: providing a digital reconstructed radiograph (DRR) for said image-based registration (60, 68, 101, 103), wherein said digital reconstructed radiograph is obtained by a perspective projection of said 3D volume data (54, 109).
6. Method according to claim 5, comprising: rendering said digital reconstructed radiograph from said 3D volume data by a graphics processing unit (GPUP, GPUID); and image-based registering said digital reconstructed radiograph with said 2D image data by a central processing unit (CPUP, CPUID).
7. Method according to claim 1, the method further comprising a roadmapping visualization wherein said 3D volume data and 2D image data are displayed as a fused image (74).
8. Method according to claim 3, wherein comparing (76) of said 2D image data (52, 111) and said 3D volume data (54, 109) is performed in parallel to said roadmapping visualization.
9. Computer program product which enables at least one processor to carry out the method according to claim 1.
10. Image processing unit (34, 100) which is capable of performing the method according to claim 1.
11. Imaging system (2) including an image processing unit (34, 100) which is capable of performing the method according to claim 1.
PCT/IB2008/051117 2007-03-30 2008-03-26 2d/3d image registration WO2008120136A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP07105308.6 2007-03-30
EP07105308 2007-03-30

Publications (1)

Publication Number Publication Date
WO2008120136A1 true WO2008120136A1 (en) 2008-10-09

Family

ID=39628940

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2008/051117 WO2008120136A1 (en) 2007-03-30 2008-03-26 2d/3d image registration

Country Status (1)

Country Link
WO (1) WO2008120136A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050245807A1 (en) * 2004-01-29 2005-11-03 Jan Boese Method for registering and merging medical image data
US20060262970A1 (en) * 2005-05-19 2006-11-23 Jan Boese Method and device for registering 2D projection images relative to a 3D image data record

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KERRIEN E ET AL: "Fully Automatic 3D/2D Subtracted Angiography Registration", MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'99, LECTURE NOTES IN COMPUTER SCIENCE, SPRINGER BERLIN HEIDELBERG, vol. 1679, 1 January 2006 (2006-01-01), pages 664-671, XP019036219, ISBN: 978-3-540-66503-8 *
RUIJTERS D, BABIC D, HOMAN R, MIELEKAMP P, TER HAAR ROMENY B, SUETENS P: "3D multimodality roadmapping in neuroangiography", PROCEEDINGS OF SPIE, vol. 6509, 21 March 2007 (2007-03-21), pages 1 - 8, XP002489838 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2499972A4 (en) * 2009-11-13 2015-07-01 Imagnosis Inc Medical three-dimensional image display-orientation adjustment device and adjustment program
EP2499972A1 (en) * 2009-11-13 2012-09-19 Imagnosis Inc. Medical three-dimensional image display-orientation adjustment device and adjustment program
US8600138B2 (en) 2010-05-21 2013-12-03 General Electric Company Method for processing radiological images to determine a 3D position of a needle
FR2960332A1 (en) * 2010-05-21 2011-11-25 Gen Electric METHOD OF PROCESSING RADIOLOGICAL IMAGES TO DETERMINE A 3D POSITION OF A NEEDLE.
CN103403763A (en) * 2011-03-04 2013-11-20 皇家飞利浦有限公司 2d/3d image registration
WO2012120405A1 (en) 2011-03-04 2012-09-13 Koninklijke Philips Electronics N.V. 2d/3d image registration
US9262830B2 (en) 2011-03-04 2016-02-16 Koninklijke Philips N.V. 2D/3D image registration
CN103403763B (en) * 2011-03-04 2017-05-10 皇家飞利浦有限公司 2d/3d image registration
WO2014102718A1 (en) * 2012-12-28 2014-07-03 Koninklijke Philips N.V. Real-time scene-modeling combining 3d ultrasound and 2d x-ray imagery
CN104883975A (en) * 2012-12-28 2015-09-02 皇家飞利浦有限公司 Real-time scene-modeling combining 3d ultrasound and 2d x-ray imagery
US10157491B2 (en) 2012-12-28 2018-12-18 Koninklijke Philips N.V. Real-time scene-modeling combining 3D ultrasound and 2D X-ray imagery
EP3021283A3 (en) * 2014-05-14 2016-09-14 Nuctech Company Limited Image display methods
CN104881568A (en) * 2015-04-27 2015-09-02 苏州敏宇医疗科技有限公司 Cloud computation based early oncotherapy efficacy evaluation system and method
EP3626176A1 (en) * 2018-09-19 2020-03-25 Siemens Healthcare GmbH Method for supporting a user, computer program product, data carrier and imaging system
US11576557B2 (en) 2018-09-19 2023-02-14 Siemens Healthcare Gmbh Method for supporting a user, computer program product, data medium and imaging system


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08719831; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 08719831; Country of ref document: EP; Kind code of ref document: A1)