WO2011128792A2 - Image data reformatting - Google Patents

Image data reformatting

Info

Publication number
WO2011128792A2
WO2011128792A2 (PCT/IB2011/051107)
Authority
WO
WIPO (PCT)
Prior art keywords
interest
sub
volume
image data
mip
Prior art date
Application number
PCT/IB2011/051107
Other languages
French (fr)
Other versions
WO2011128792A3 (en)
Inventor
Rafael Wiemker
Sven Kabus
Tobias Klinder
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V., Philips Intellectual Property & Standards Gmbh filed Critical Koninklijke Philips Electronics N.V.
Priority to US13/639,189 (US9424680B2)
Priority to EP11721104A (EP2559007A2)
Priority to CN201180019052.8A (CN102844794B)
Publication of WO2011128792A2
Publication of WO2011128792A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/08 Volume rendering

Definitions

  • CT: computed tomography
  • MRI: magnetic resonance imaging
  • PET: positron emission tomography
  • SPECT: single photon emission tomography
  • US: ultrasound
  • MIP: maximum intensity projection
  • mIP: minimum intensity projection
  • MPR: multiplanar reformation
  • DVR: direct volume rendering
  • OTF: opacity transfer function

Abstract

A method for reformatting image data includes obtaining volumetric image data indicative of an anatomical structure of interest, identifying a surface of interest of the anatomical structure of interest in the volumetric image data, identifying a thickness for a sub- volume of interest of the volumetric image data, shaping the sub-volume of interest such that at least one of its sides follows the surface of interest, and generating, via a processor, a maximum intensity projection (MIP) or direct volume rendering (DVR) based on the identified surface of interest and the shaped sub-volume of interest.

Description

IMAGE DATA REFORMATTING
DESCRIPTION
The following generally relates to reformatting image data and is described with particular application to computed tomography (CT); however, other imaging modalities such as magnetic resonance imaging (MRI), 3D x-ray, positron emission tomography (PET), single photon emission tomography (SPECT), ultrasound (US), and/or other imaging modalities are also contemplated herein.
Diagnostic imaging (e.g., CT, MRI, 3D x-ray, PET, SPECT, US, etc.) has been used for visual inspection of the lungs, liver, and/or other tissue of interest to assess function, disease, progression, therapy success, etc. The volumetric image data generated thereby has been variously rendered and reformatted for visually enhancing tissue of interest and/or suppressing other tissue.
One reformatting technique that has been used to visually enhance tissue of interest is maximum intensity projection (MIP). Generally, MIP is a visualization technique that projects, in the visualization plane, those voxels of the volumetric image data with maximum intensity that fall within rays traced from the viewing angle or viewpoint to the plane of projection through the image data.
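For orientation, conventional MIP along a fixed, orthographic viewing direction reduces to a per-ray maximum over the whole volume. Below is a minimal NumPy sketch; the (z, y, x) array layout, the axis choice, and the function name are illustrative assumptions, not part of this disclosure:

```python
import numpy as np

def conventional_mip(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Project the brightest voxel along each parallel ray.

    Assumes `volume` is a (z, y, x) intensity array and the rays run
    parallel to `axis` (an orthographic viewing direction).
    """
    return volume.max(axis=axis)

# Example: coronal-style MIP of a synthetic 64 x 128 x 128 volume
vol = np.random.rand(64, 128, 128).astype(np.float32)
print(conventional_mip(vol, axis=1).shape)  # (64, 128)
```

Because the maximum is taken over the full ray, bright central structures can dominate the projection, which is exactly the occlusion problem discussed next.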
Unfortunately, with state of the art or conventional MIP for the lungs (and other tissue), reformatting the volumetric image data may render data that mainly shows the larger, more central vessels of the structure of interest, because the smaller peripheral vessels of the structure of interest may be hidden or occluded by them. As a consequence, disease corresponding to the smaller peripheral vessels may not be readily apparent in the rendered reformatted volumetric image data.
In view of at least the foregoing, there is an unresolved need for new and non-obvious techniques for reformatting image data. Aspects of the present application address the above-referenced matters and others.
According to one aspect, a method for reformatting image data includes obtaining volumetric image data indicative of an anatomical structure of interest, identifying a surface of interest of the anatomical structure of interest in the volumetric image data, identifying a thickness for a sub-volume of interest of the volumetric image data, shaping the sub-volume of interest such that at least one of its sides follows the surface of interest, and generating, via a processor, a maximum intensity projection (MIP) or direct volume rendering (DVR) based on the identified surface of interest and the shaped sub-volume of interest.
According to another aspect, a reformatter includes a processor that generates at least one of maximum intensity projection (MIP) or direct volume rendering (DVR) for a sub- portion of an anatomical structure of interest based on an identified surface of interest of the anatomical structure of interest and an identified sub-volume of interest of the anatomical structure of interest, wherein the MIP or DVR is generated based on a side of the sub-portion that follows the surface of interest.
According to another aspect, a computer readable storage medium encoded with instructions which, when executed by a computer, cause a processor of the computer to perform the step of: identifying a sub- volume of interest in an anatomical structure in volumetric image data, wherein the sub-volume of interest follows a surface of the anatomical structure, and generating at least one of a maximum intensity projection (MIP) or direct volume rendering (DVR) based on the identified sub-volume of interest.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
FIGURE 1 illustrates an imaging system in connection with an image data reformatter.
FIGURE 2 illustrates an example image data reformatter. FIGURES 3A - 3C illustrate example image data reformatting viewing angle and sub-volume thickness.
FIGURES 4A - 4C illustrate example image data reformatting viewing angle and sub-volume thickness.
FIGURE 5 illustrates an example method for reformatting image data.
FIGURE 1 illustrates an imaging system 100 such as a computed tomography (CT) scanner. The imaging system 100 includes a stationary gantry 102 and a rotating gantry 104, which is rotatably supported by the stationary gantry 102. The rotating gantry 104 rotates around an examination region 106 about a longitudinal or z-axis. A radiation source 108, such as an x-ray tube, is supported by the rotating gantry 104 and rotates with the rotating gantry 104, and emits radiation that traverses the examination region 106. A radiation sensitive detector array 110 detects radiation emitted by the radiation source 108 that traverses the examination region 106 and generates projection data indicative of the detected radiation.
A reconstructor 112 reconstructs the projection data and generates volumetric image data indicative of the examination region 106. A support 114, such as a couch, supports the object or subject in the examination region 106. The support 114 is movable along the x, y, and z-axis directions. A general purpose computing system serves as an operator console 116, which includes human readable output devices such as a display and/or printer and input devices such as a keyboard and/or mouse. Software resident on the console 116 allows the operator to control the operation of the system 100, for example, by allowing the operator to select a motion compensation protocol, initiate scanning, etc.
A reformatter 118 reformats image data, for example, from the imaging system 100 and/or one or more other systems. The illustrated reformatter 118 is configured to reformat image data at least in connection with one or more anatomical surfaces of interest of one or more anatomical structures (e.g., lung, liver, etc.) of interest represented in the volumetric image data. In one instance, this includes reformatting image data so as to adapt the image data to a shape of a surface of interest of an anatomical structure of interest. The shape of the surface of interest may be planar (straight) or curved. The image data may be the entire image data or a sub-portion thereof, for example, segmented image data or other sub- portion of image data.
As described in greater detail below, the reformatter 118 can determine information about the voxels in the volumetric image data and variously reformat the image data based thereon. This includes determining information for one or more predetermined depths or thicknesses of image data, for example, relative to a reference region such as a surface of interest. By way of example, the reformatter 118 can determine intensities of voxels along projections through a predetermined region and generate a MIP (maximum intensity projection) data set in which the voxel with the maximum intensity along each projection is projected onto the visualization plane along a ray traced from a viewing angle or viewpoint to that plane.
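One way to read the surface-relative sub-volume is as a shell of voxels lying within the chosen thickness of the segmented boundary, with the maximum then taken only over that shell. The sketch below is an illustrative interpretation, not the patented implementation; the use of a Euclidean distance transform, the binary mask input, the fixed orthographic projection axis, and the function name are all assumptions:

```python
import numpy as np
from scipy import ndimage

def surface_following_mip(volume, mask, thickness_vox, axis=1):
    """MIP restricted to a shell that follows the surface of a segmented structure.

    volume        : 3D intensity array (e.g., CT values)
    mask          : 3D boolean array, True inside the structure of interest
    thickness_vox : sub-volume thickness, in voxels, measured inward from the surface
    axis          : orthographic viewing direction
    """
    # Distance (in voxels) from each interior voxel to the structure's surface.
    dist_to_surface = ndimage.distance_transform_edt(mask)
    # Sub-volume whose outer side follows the surface of interest.
    shell = mask & (dist_to_surface <= thickness_vox)
    # Suppress everything outside the shell so it cannot win the maximum.
    background = volume.min()
    return np.where(shell, volume, background).max(axis=axis)
```

In practice the thickness would be derived from a millimetre value and the voxel spacing, and the shell could equally be anchored to the inner central surface rather than the outer peripheral one.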
With respect to lung, liver, etc. studies, the foregoing allows for generating MIP projection data well-suited for visualizing the smaller peripheral or distal vessels nearer the surface of the lung, liver, etc., while mitigating occlusion of the smaller peripheral vessels by the larger more central or proximal vessels that are relatively farther away from the surface of the lung, liver, etc. This allows for visually enhancing the smaller peripheral vessels for improved inspection of the peripheral vessels and health thereof, relative to conventional MIP. This also allows for viewing of the lobar and segmental structure of lung, liver, etc. without using explicit lobar segmentation, which might be prone to errors.
The foregoing reformatting approach also provides a computationally inexpensive approach for visualizing the smaller distal vessels of the lung, liver, etc. Other suitable reformatting includes, but is not limited to, producing 2D, 3D, 4D, MPR, minimum intensity projection (mIP), etc. In addition, the reformatter 118 may include a reconstructor that reconstructs projection data and/or can otherwise process projection data.
It is to be appreciated that the reformatter 118 may be part of or integrated with a computing device (e.g., a computer) having one or more processors that execute one or more instructions encoded or stored on a computer readable storage medium to implement the functions thereof. For example, in one instance, the reformatter 118 is part of the console 116. In yet another instance, the reformatter 118 resides in a computing device remotely located from the imaging apparatus 100 such as a workstation, computer, etc. Although the above is described in connection with CT data, it is to be understood that other imaging data such as MRI, radiography, PET, SPECT, US, and/or other imaging data can be reformatted by the reformatter 118.
FIGURE 2 illustrates an example reformatter 118.
A segmenter 202 can be used to segment anatomical structure(s) of interest (e.g., an individual lung lobe, both lung lobes together, the liver, etc.) in the volumetric image data. The segmenter 202 can employ various segmentation techniques. For example, in one instance, an automated approach is used. The automated approach may be based on a grey level, an anatomical model, and/or other information.
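As one concrete illustration of a grey-level driven automated approach for CT lung data, a crude sketch follows; the -400 HU threshold, the border-component heuristic, the hole filling, and the function name are assumptions chosen for illustration, not values or steps taken from this disclosure:

```python
import numpy as np
from scipy import ndimage

def segment_lungs_grey_level(ct_hu: np.ndarray, air_threshold: float = -400.0) -> np.ndarray:
    """Very rough grey-level lung segmentation of a CT volume in Hounsfield units."""
    air = ct_hu < air_threshold                       # air-like voxels
    labels, _ = ndimage.label(air)                    # connected components
    # Components touching the volume border are air around the patient; drop them.
    border = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    lungs = air & ~np.isin(labels, border[border > 0])
    # Fill vessels and small holes so the mask covers the whole lung field.
    return ndimage.binary_fill_holes(lungs)
```

A model-based or semi-automatic approach, as described here, could replace such a grey-level heuristic entirely.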
In one embodiment, a user provides an input indicative of the structure of interest to the segmenter 202, such as by selecting a button or other indicia (corresponding to the structure of interest) of a graphical user interface, entering data via a keyboard/pad, or otherwise. The segmenter 202 then automatically segments the structure of interest. A user may adjust the automated segmentation, for example, by re-shaping or otherwise adjusting the segmentation.
In another embodiment, the user manually identifies the structure of interest in the image data. This may include the user using a mouse, a free hand draw tool, an adjustable predefined geometrical object to determine a perimeter or otherwise identify the structure of interest in the image data, etc. In another embodiment, the segmenter 202 is omitted. In this instance, the reformatter 118 may process already segmented image data or image data that has not been segmented.
A surface identifier 204 identifies a surface(s) of the structure of interest in the segmented data. Likewise, automated and/or manual techniques can be used. By way of example, an anatomical model, a gradient, and/or other information can be used to automatically identify surfaces and/or user input can identify surfaces. The surface identifier 204 may identify an outer or peripheral surface, an inner or central surface, or a surface therebetween.
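A simple automated way to obtain candidate surface voxels from a binary segmentation is morphological erosion; the sketch below is an assumption-level illustration (it does not by itself distinguish the outer peripheral surface from the inner central one, which would need additional anatomical context such as a model):

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Boolean map of voxels on the boundary of a binary structure mask."""
    return mask & ~ndimage.binary_erosion(mask)

def surface_coordinates(mask: np.ndarray) -> np.ndarray:
    """(N, 3) array of (z, y, x) indices of the surface voxels."""
    return np.argwhere(surface_voxels(mask))
```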
A voxel intensity determiner 206 identifies voxel intensities along projections through or into the segmented data. The illustrated intensity determiner 206 identifies voxel intensities based on various input. By way of example, the input may identify a viewing angle for the projections. The viewing angle can be located with respect to the projection plane such that the projections are substantially perpendicular or normal to the projection plane. In another instance, the viewing angle can be located such that the projections are oblique or parallel to the projection plane. The viewing angle can be a default, user defined, or other viewing angle.
Additionally or alternatively, the input may identify a sub-volume thickness or depth for the projections. For example, the input may indicate that one or more projections extend 0.5 millimeters (mm), 1.0 mm, 10.0 mm, 25 mm, or other depth from the identified surface (or other region of the segmented structure) into the segmented structure. In one embodiment, the sub-volume thickness or depth is uniform along the surface. In another embodiment, the sub-volume thickness or depth may vary along the surface.
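Since the thickness is specified in millimetres while the voxel grid is typically anisotropic, a small conversion using the scan's voxel spacing is implied; a sketch (the conservative ceiling rounding and the function name are assumptions):

```python
import math

def thickness_mm_to_voxels(thickness_mm: float, spacing_mm: tuple) -> int:
    """Convert a sub-volume thickness in mm to a conservative voxel count.

    spacing_mm is the voxel spacing as (z, y, x) in millimetres; the smallest
    spacing and ceiling rounding keep the shell at least as thick as requested.
    """
    return max(1, math.ceil(thickness_mm / min(spacing_mm)))

# Example: a 10.0 mm slab on a 1.0 x 0.7 x 0.7 mm grid -> 15 voxels
print(thickness_mm_to_voxels(10.0, (1.0, 0.7, 0.7)))
```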
Additionally or alternatively, the illustrated intensity determiner 206 may identify voxel intensities based on other information. In one embodiment, the viewing angle, sub-volume thickness, and/or other information is determined via a machine learning approach based on an implicitly and/or explicitly trained classifier, probabilities, neural networks, support vector machines, cost functions, statistics, heuristics, history, or the like.
A rendering engine 208 renders the segmented data based on the identified surface, viewing angle, thickness, etc. A presentation component 210 allows for a visual presentation of the rendered image data.
An interface 212 allows a user to interact with the reformatter 118. Such interaction may include entering various information such as at least one or more of a tissue of interest, a surface of interest, a viewing angle of interest, and a sub-volume thickness of interest. Such information includes pre- and post-reformatting information. When provided after reformatting, the image data can be reformatted again based on the latest information. This allows a user to tweak or fine-tune various parameters for visualization purposes.
FIGURE 3 provides an example based on a lung study. FIGURE 3A shows volumetric image data 300, including the lungs 304 and other anatomy 302, prior to segmentation. FIGURE 3B illustrates the image data 300 with the lungs 304 after they are segmented therefrom, including an outer peripheral surface 306 and an inner central surface 308. In FIGURE 3C, indicia 310, showing a first MIP viewing angle and sub-volume thickness with respect to the outer peripheral surface 306, are superimposed over the segmented data for illustrative purposes.
FIGURE 4 provides another example based on a lung study. FIGURE 4A shows the volumetric image data 300, including the lungs 304 and the other anatomy 302, prior to segmentation. FIGURE 4B illustrates the image data 300 with the lungs 304 after they are segmented therefrom, including the outer peripheral surface 306 and the inner central surface 308. In FIGURE 4C, indicia 400, showing a second MIP viewing angle and sub- volume thickness from the outer peripheral surface 306, are superimposed over the segmented data for illustrative purposes.
Note in FIGURE 4C that the viewing angles and the thickness are different from those in FIGURE 3C. In another embodiment, the viewing angles may be the same and the sub-volume thickness may be different. In another embodiment, the sub-volume thickness may be the same and the viewing angles may be different. In other embodiments, different viewing angles and/or sub-volume thicknesses are utilized.
FIGURE 5 illustrates a method for reformatting image data.
At 502, image data is obtained. As described herein, suitable image data includes, but is not limited to, data generated by one or more of a CT, MRI, radiography, PET, SPECT, US, etc. imaging modality.
At 504, the image data is segmented based on an anatomical structure of interest such as the lungs, liver, etc. The segmentation may include the entire anatomical structure (e.g., the whole lung) or a sub-portion thereof (e.g., the right lobe to the lung, the left lobe of the lung, or another sub-portion of the lung), and may be performed manually, automatically, or semi-automatically.
At 506, one or more surfaces of interest of the structure are identified. As described herein, the surface may be the surface of a lung, the liver, etc. with relatively smaller vessels such as the peripheral vessels of the lung, the liver, etc., and/or other vessels. Suitable surfaces include curved (curvilinear) surfaces and flat surfaces.
At 508, a viewing angle for the projection lines is identified. As described herein, the viewing angle may be generally perpendicular or oblique to a viewing plane. At 510, a sub-volume thickness of the segmented image data to be processed is identified.
At 512, an intensity of the voxels along each of the projection lines is determined. As described herein, the projections may begin at the identified surface and extend through the identified thickness. In other embodiments, other starting points and/or distances are contemplated.
At 514, the voxel with the maximum intensity along each projection line is identified. In one instance, this includes casting rays from the surface into the structure through the thickness and determining a maximum intensity projection of the voxels along each ray.
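Acts 508-514 can be read as casting a ray from each surface voxel along the chosen viewing direction, sampling through the identified thickness, and keeping the maximum sample. The sketch below is one possible reading; the nearest-neighbour sampling, the sign convention that rays run from the surface into the structure, and returning one value per surface voxel (rather than a resampled image grid) are simplifying assumptions:

```python
import numpy as np

def mip_rays_from_surface(volume, surface_idx, view_dir, thickness_vox, step=0.5):
    """Maximum intensity along rays cast from surface voxels into the volume.

    volume        : 3D intensity array
    surface_idx   : (N, 3) integer (z, y, x) coordinates of surface voxels
    view_dir      : length-3 viewing direction; rays travel opposite to it
    thickness_vox : ray length in voxels (the identified sub-volume thickness)
    step          : sampling step along each ray, in voxels
    """
    surface_idx = np.asarray(surface_idx)
    d = -np.asarray(view_dir, dtype=float)
    d /= np.linalg.norm(d)
    depths = np.arange(0.0, thickness_vox + 1e-6, step)
    # (N, T, 3) sample positions: surface point + depth * direction
    pts = surface_idx[:, None, :] + depths[None, :, None] * d[None, None, :]
    ijk = np.clip(np.rint(pts).astype(int), 0, np.array(volume.shape) - 1)
    samples = volume[ijk[..., 0], ijk[..., 1], ijk[..., 2]]   # (N, T)
    return samples.max(axis=1)                                # one maximum per ray
```

Each returned maximum corresponds to one projection line of act 514; mapping those values onto a 2D projection plane for display is a separate resampling step.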
At 516, a MIP image data set is rendered based on the identified voxels. The data can be presented in axial, sagittal, and/or coronal viewing direction.
At 518, optionally, a user adjusts one or more parameters such as the viewing angle and/or sub-volume thickness, and the acts 508 to 514 are repeated. In one instance, this includes dynamically updating the presented image data based on the viewing angle and/or sub-volume thickness. Furthermore, multiple renderings based on different viewing angle and/or sub-volume thickness data can be concurrently and/or individually presented.
As described herein, the foregoing allows for generating MIP projection data well-suited for visualizing the smaller distal vessels nearer the surface of an anatomical structure such as the lung, liver, etc., while mitigating occlusion of the smaller distal vessels by the larger proximal vessels, which generally are located relatively farther away from the distal surface. This provides for improved inspection of the distal vessels, relative to conventional MIP, and a computationally inexpensive approach for visualizing the smaller distal vessels.
In another embodiment, the segmenter 202 (or another component) additionally or alternatively generates a direct volume rendering (DVR). In one instance, this approach is applied to slabs or volumes of interest in the same way as MIPs. With this rendering approach, the volume rendering does not rely on any explicit surface segmentation, but directly converts the gray-values in the volume of interest into a projection image, for example, by an opacity transfer function (OTF) or otherwise, instead of using the maximum intensity principle. The above acts may be implemented by way of computer readable instructions which, when executed by a computer processor(s), cause the processor(s) to carry out the acts described herein. In such a case, the instructions are stored in a computer readable storage medium such as memory associated with and/or otherwise accessible to the relevant computer.
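For contrast with the maximum-intensity principle, a minimal direct-volume-rendering sketch using an opacity transfer function and front-to-back compositing along one axis is given below; the linear ramp OTF, its window values, the orthographic geometry, and the function name are illustrative assumptions rather than the rendering actually claimed:

```python
import numpy as np

def dvr_front_to_back(volume, axis=1, lo=-600.0, hi=200.0):
    """Direct volume rendering of a (sub-)volume via an opacity transfer function.

    Each gray value is mapped to an opacity by a linear ramp between `lo` and
    `hi`, and samples are composited front to back along `axis`.
    """
    vol = np.moveaxis(volume, axis, 0).astype(float)        # rays run along axis 0
    opacity = np.clip((vol - lo) / (hi - lo), 0.0, 1.0)     # simple linear-ramp OTF
    emission = opacity                                      # toy choice: emitted intensity ~ opacity
    image = np.zeros(vol.shape[1:])                         # composited projection image
    transmittance = np.ones(vol.shape[1:])                  # light not yet absorbed
    for s in range(vol.shape[0]):                           # front-to-back compositing
        image += transmittance * opacity[s] * emission[s]
        transmittance *= 1.0 - opacity[s]
        if transmittance.max() < 1e-3:                      # early ray termination
            break
    return image
```

Applied to the same surface-following sub-volume as the MIP, this yields the slab DVR alternative mentioned above.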
The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims

CLAIMS
What is claimed is:
1. A method for reformatting image data, comprising:
obtaining volumetric image data indicative of an anatomical structure of interest; identifying a surface of interest of the anatomical structure of interest in the volumetric image data;
identifying a thickness for a sub-volume of interest of the volumetric image data; shaping the sub-volume of interest such that at least one of its sides follows the surface of interest; and
generating, via a processor, a maximum intensity projection (MIP) or direct volume rendering (DVR) based on the identified surface of interest, the shaped sub-volume of interest, and the side that follows the surface of interest.
2. The method of claim 1, further comprising:
adjusting the thickness of interest; and
updating the MIP or DVR based on the adjusted thickness.
3. The method of any of claims 1 to 2, further comprising:
identifying a viewing angle of interest for the anatomical structure of interest; and generating the MIP or DVR based on the sub-volume of interest and the viewing angle of interest.
4. The method of claim 3, further comprising:
adjusting the viewing angle of interest; and
updating the MIP or DVR based on the adjusted viewing angle of interest.
5. The method of any of claims 1 to 4, wherein the MIP or DVR is determined for a region of the anatomical structure of interest beginning at about the identified surface of interest and extending into the anatomical structure of interest a distance equal to the thickness.
6. The method of any of claims 1 to 5, wherein the structure of interest includes at least one other surface, wherein a set of vessels of interest is located nearer the surface of interest relative to the at least one other surface.
7. The method of claim 6, wherein the set of vessels of interest includes smaller peripheral vessels, and larger central vessels are located nearer the at least one other surface.
8. The method of any of claims 1 to 7, wherein the surface of interest is a curved surface.
9. The method of any of claims 1 to 8, wherein a maximum intensity for a projection is determined for voxels along a ray extending from the surface of interest through the thickness of interest.
10. The method of any of claims 1 to 9, further comprising:
presenting the MIP or DVR.
11. A reformatter (118), comprising:
a processor that generates at least one of a maximum intensity projection (MIP) or direct volume rendering (DVR) for a sub-portion of an anatomical structure of interest based on an identified surface of interest of the anatomical structure of interest and an identified sub- volume of interest of the anatomical structure of interest, wherein the MIP or DVR is generated based on a side of the sub-portion that follows the surface of interest.
12. The reformatter of claim 11, wherein the processor generates the MIP or DVR based on a region of data defined by the surface of interest and the sub- volume of interest.
13. The reformatter of any of claims 11 to 12, wherein the processor generates the MIP or DVR based on a viewing angle of interest.
14. The reformatter of claim 13, wherein the viewing angle is generally perpendicular to a projection plane.
15. The reformatter of claim 13, wherein the viewing angle is oblique to a projection plane.
16. The reformatter of any of claims 11 to 12, further comprising:
a segmentor (202) that segments the anatomical structure of interest from volumetric image data.
17. The reformatter of any of claims 11 to 16, further comprising:
an interface (212) for receiving a signal indicative of a change of a thickness of the sub-volume of interest, wherein the processor updates the MIP or DVR based on the signal.
18. The reformatter of any of claims 11 to 17, wherein the anatomical structure of interest includes a first set of vessels of interest and a second set of vessels, and the first set of vessels of interest are located nearer to the surface of interest than the second set of vessels.
19. The reformatter of claim 18, wherein the sub-volume of interest includes a substantial portion of the first set of vessels of interest.
20. A computer readable storage medium encoded with instructions which, when executed by a processor of a computer, cause the computer to perform the step of:
identifying a sub-volume of interest in an anatomical structure in volumetric image data, wherein the sub-volume of interest follows a surface of the anatomical structure; and generating at least one of a maximum intensity projection (MIP) or direct volume rendering (DVR) based on the identified sub-volume of interest.
PCT/IB2011/051107 2010-04-16 2011-03-16 Image data reformatting WO2011128792A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/639,189 US9424680B2 (en) 2010-04-16 2011-03-16 Image data reformatting
EP11721104A EP2559007A2 (en) 2010-04-16 2011-03-16 Image data reformatting
CN201180019052.8A CN102844794B (en) 2010-04-16 2011-03-16 View data reformatting

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US32480910P 2010-04-16 2010-04-16
US61/324,809 2010-04-16

Publications (2)

Publication Number Publication Date
WO2011128792A2 true WO2011128792A2 (en) 2011-10-20
WO2011128792A3 WO2011128792A3 (en) 2012-02-23

Family

ID=44626549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2011/051107 WO2011128792A2 (en) 2010-04-16 2011-03-16 Image data reformatting

Country Status (4)

Country Link
US (1) US9424680B2 (en)
EP (1) EP2559007A2 (en)
CN (1) CN102844794B (en)
WO (1) WO2011128792A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103162627A (en) * 2013-03-28 2013-06-19 广西工学院鹿山学院 Method for estimating fruit size by citrus fruit peel mirror reflection
EP3499467A3 (en) * 2017-11-22 2019-12-04 Canon U.S.A. Inc. Devices, systems, and methods for ablation-zone simulation and visualization
EP3618005A1 (en) * 2018-08-30 2020-03-04 Koninklijke Philips N.V. Image processing system and method
US10918441B2 (en) 2017-11-22 2021-02-16 Canon U.S.A., Inc. Devices, systems, and methods for ablation-zone simulation and visualization

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010040096A1 (en) * 2010-09-01 2012-03-01 Sirona Dental Systems Gmbh Method of creating a shot from a 3D volume
DE102013218821A1 (en) 2013-09-19 2015-03-19 Siemens Aktiengesellschaft Method and device for displaying an object with the aid of X-rays
DE102015204957A1 (en) 2014-03-27 2015-10-01 Siemens Aktiengesellschaft Imaging tomosynthesis system, in particular mammography system
EP3499458B1 (en) * 2014-06-20 2020-09-30 Analogic Corporation Image generation via computed tomography system
CN108885797B (en) * 2016-04-04 2023-06-13 皇家飞利浦有限公司 Imaging system and method
CN113573640A (en) * 2019-04-04 2021-10-29 中心线生物医药股份有限公司 Modeling a region of interest of an anatomical structure

Family Cites Families (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6041097A (en) * 1998-04-06 2000-03-21 Picker International, Inc. Method and apparatus for acquiring volumetric image data using flat panel matrix image receptor
US6352509B1 (en) * 1998-11-16 2002-03-05 Kabushiki Kaisha Toshiba Three-dimensional ultrasonic diagnosis apparatus
JP2000201925A (en) * 1999-01-12 2000-07-25 Toshiba Corp Three-dimensional ultrasonograph
US6674894B1 (en) * 1999-04-20 2004-01-06 University Of Utah Research Foundation Method and apparatus for enhancing an image using data optimization and segmentation
US6211674B1 (en) * 1999-05-14 2001-04-03 General Electric Company Method and system for providing a maximum intensity projection of a non-planar image
US6697663B1 (en) * 2000-11-09 2004-02-24 Koninklijke Philips Electronics N.V. Method and apparatus for reducing noise artifacts in a diagnostic image
AU2002222702A1 (en) * 2000-11-25 2002-06-03 Infinitt Co., Ltd. 3-dimensional multiplanar reformatting system and method and computer-readable recording medium having 3-dimensional multiplanar reformatting program recorded thereon
US6634552B2 (en) * 2001-09-26 2003-10-21 Nec Laboratories America, Inc. Three dimensional vision device and method, and structured light bar-code patterns for use in the same
DE10254908B4 (en) * 2002-11-25 2006-11-30 Siemens Ag Method for producing an image
US7616801B2 (en) * 2002-11-27 2009-11-10 Hologic, Inc. Image handling and display in x-ray mammography and tomosynthesis
US7471814B2 (en) * 2002-11-27 2008-12-30 The Board Of Trustees Of The Leland Stanford Junior University Curved-slab maximum intensity projections
JP4421203B2 (en) * 2003-03-20 2010-02-24 株式会社東芝 Luminous structure analysis processing device
US7301538B2 (en) * 2003-08-18 2007-11-27 Fovia, Inc. Method and system for adaptive direct volume rendering
US7233329B2 (en) * 2003-11-03 2007-06-19 Siemens Corporate Research, Inc. Rendering for coronary visualization
US7233330B2 (en) * 2003-11-03 2007-06-19 Siemens Corporate Research, Inc. Organ wall analysis with ray-casting
US7574247B2 (en) * 2003-11-17 2009-08-11 Siemens Medical Solutions Usa, Inc. Automatic coronary isolation using a n-MIP ray casting technique
US6990169B2 (en) * 2003-12-23 2006-01-24 General Electric Company Method and system for viewing a rendered volume
US7609902B2 (en) * 2004-04-13 2009-10-27 Microsoft Corporation Implementation of discrete cosine transformation and its inverse on programmable graphics processor
US7339585B2 (en) * 2004-07-19 2008-03-04 Pie Medical Imaging B.V. Method and apparatus for visualization of biological structures with use of 3D position information from segmentation results
CN101031938A (en) * 2004-09-28 2007-09-05 皇家飞利浦电子股份有限公司 Image processing apparatus and method
US7885440B2 (en) * 2004-11-04 2011-02-08 Dr Systems, Inc. Systems and methods for interleaving series of medical images
JP4335817B2 (en) * 2005-01-07 2009-09-30 ザイオソフト株式会社 Region of interest designation method, region of interest designation program, region of interest designation device
WO2006099490A1 (en) * 2005-03-15 2006-09-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer program products for processing three-dimensional image data to render an image from a viewpoint within or beyond an occluding region of the image data
JP4170305B2 (en) * 2005-04-05 2008-10-22 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー Radiography equipment
US7599542B2 (en) * 2005-04-08 2009-10-06 John Philip Brockway System and method for detection and display of diseases and abnormalities using confidence imaging
US7532214B2 (en) * 2005-05-25 2009-05-12 Spectra Ab Automated medical image visualization using volume rendering with local histograms
US8884959B2 (en) * 2005-06-03 2014-11-11 Siemens Aktiengesellschaft Gradient free shading for volume rendering using shadow information
US7236558B2 (en) * 2005-07-07 2007-06-26 Terarecon, Inc. Three-dimensional image display device creating three-dimensional image directly from projection data
US7877128B2 (en) * 2005-08-02 2011-01-25 Biosense Webster, Inc. Simulation of invasive procedures
CN101243475B (en) * 2005-08-17 2013-04-17 皇家飞利浦电子股份有限公司 Method and apparatus featuring simple click style interactions according to a clinical task workflow
US7483939B2 (en) * 2005-08-25 2009-01-27 General Electric Company Medical processing system allocating resources for processing 3D to form 2D image data based on report of monitor data
JP4675753B2 (en) * 2005-11-11 2011-04-27 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー X-ray CT system
US20070127791A1 (en) * 2005-11-15 2007-06-07 Sectra Ab Automated synchronization of 3-D medical images, related methods and computer products
US7835500B2 (en) * 2005-11-16 2010-11-16 Accuray Incorporated Multi-phase registration of 2-D X-ray images to 3-D volume studies
US8300912B2 (en) * 2006-01-05 2012-10-30 National University Corporation Kanazawa University Continuous X-ray image screening examination device, program, and recording medium
US7768652B2 (en) * 2006-03-16 2010-08-03 Carl Zeiss Meditec, Inc. Methods for mapping tissue with optical coherence tomography data
US7889194B2 (en) * 2006-03-30 2011-02-15 Siemens Medical Solutions Usa, Inc. System and method for in-context MPR visualization using virtual incision volume visualization
US7660461B2 (en) * 2006-04-21 2010-02-09 Sectra Ab Automated histogram characterization of data sets for image visualization using alpha-histograms
DE102006032990A1 (en) * 2006-07-17 2008-01-31 Siemens Ag Vessel axis spatial distribution determining method, involves interactively marking vessel axis in sequence of planar tomograms or images by marking points, and obtaining three-dimensional distribution of vessel axis by joining points
WO2008018014A2 (en) * 2006-08-11 2008-02-14 Koninklijke Philips Electronics N.V. Anatomy-related image-context-dependent applications for efficient diagnosis
US8483462B2 (en) * 2006-11-03 2013-07-09 Siemens Medical Solutions Usa, Inc. Object centric data reformation with application to rib visualization
US8781193B2 (en) * 2007-03-08 2014-07-15 Sync-Rx, Ltd. Automatic quantitative vessel analysis
JP4588736B2 (en) * 2007-04-12 2010-12-01 富士フイルム株式会社 Image processing method, apparatus, and program
FR2919747B1 (en) * 2007-08-02 2009-11-06 Gen Electric METHOD AND SYSTEM FOR DISPLAYING TOMOSYNTHESIS IMAGES
CN101796544B (en) * 2007-09-03 2012-09-05 皇家飞利浦电子股份有限公司 Visualization method and system of voxel data
US7929743B2 (en) * 2007-10-02 2011-04-19 Hologic, Inc. Displaying breast tomosynthesis computer-aided detection results
US9070181B2 (en) * 2007-12-21 2015-06-30 General Electric Company System and method for extracting features of interest from an image
US9641822B2 (en) * 2008-02-25 2017-05-02 Samsung Electronics Co., Ltd. Method and apparatus for processing three-dimensional (3D) images
JP5380121B2 (en) * 2008-06-09 2014-01-08 株式会社東芝 Ultrasonic diagnostic equipment
JP5317580B2 (en) * 2008-08-20 2013-10-16 株式会社東芝 X-ray CT system
EP2597615A1 (en) * 2008-11-25 2013-05-29 Algotec Systems Ltd. Method and system for segmenting medical imaging data according to a skeletal atlas
JP2010131257A (en) * 2008-12-05 2010-06-17 Ziosoft Inc Medical image processor and medical image processing program
CN101814191B (en) * 2009-02-25 2011-08-24 中国科学院自动化研究所 Three-dimensional image visualization method based on two-dimensional transfer function
US20100246914A1 (en) * 2009-03-31 2010-09-30 Porikli Fatih M Enhanced Visualizations for Ultrasound Videos
US9280822B2 (en) * 2009-05-08 2016-03-08 Edda Technology, Inc. Method, system, apparatus, and computer program product for interactive hepatic vascular and biliary system assessment
US20110090222A1 (en) * 2009-10-15 2011-04-21 Siemens Corporation Visualization of scaring on cardiac surface
US8675940B2 (en) * 2009-10-27 2014-03-18 Siemens Aktiengesellschaft Generation of moving vascular models and blood flow analysis from moving vascular models and phase contrast MRI
JP5572470B2 (en) * 2010-07-28 2014-08-13 富士フイルム株式会社 Diagnosis support apparatus, method, and program
TWI469088B (en) * 2010-12-31 2015-01-11 Ind Tech Res Inst Depth map generation module for foreground object and the method thereof
US9536312B2 (en) * 2011-05-16 2017-01-03 Microsoft Corporation Depth reconstruction using plural depth capture units
US8761474B2 (en) * 2011-07-25 2014-06-24 Siemens Aktiengesellschaft Method for vascular flow pattern analysis
GB201117807D0 (en) * 2011-10-14 2011-11-30 Siemens Medical Solutions Identifying hotspots hidden on mip
CN103322937A (en) * 2012-03-19 2013-09-25 联想(北京)有限公司 Method and device for measuring depth of object using structured light method
WO2015105314A1 (en) * 2014-01-07 2015-07-16 Samsung Electronics Co., Ltd. Radiation detector, tomography imaging apparatus thereof, and radiation detecting apparatus thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103162627A (en) * 2013-03-28 2013-06-19 广西工学院鹿山学院 Method for estimating fruit size by citrus fruit peel mirror reflection
EP3499467A3 (en) * 2017-11-22 2019-12-04 Canon U.S.A. Inc. Devices, systems, and methods for ablation-zone simulation and visualization
US10751128B2 (en) 2017-11-22 2020-08-25 Canon U.S.A., Inc. Devices, systems, and methods for ablation-zone simulation and visualization
US10918441B2 (en) 2017-11-22 2021-02-16 Canon U.S.A., Inc. Devices, systems, and methods for ablation-zone simulation and visualization
EP3618005A1 (en) * 2018-08-30 2020-03-04 Koninklijke Philips N.V. Image processing system and method
WO2020043585A1 (en) * 2018-08-30 2020-03-05 Koninklijke Philips N.V. Image processing system and method
US11694386B2 (en) 2018-08-30 2023-07-04 Koninklijke Philips N.V. Image processing system and method

Also Published As

Publication number Publication date
EP2559007A2 (en) 2013-02-20
US20130064440A1 (en) 2013-03-14
US9424680B2 (en) 2016-08-23
CN102844794B (en) 2016-07-06
WO2011128792A3 (en) 2012-02-23
CN102844794A (en) 2012-12-26

Similar Documents

Publication Publication Date Title
US9424680B2 (en) Image data reformatting
JP5639739B2 (en) Method and system for volume rendering of multiple views
US9754390B2 (en) Reconstruction of time-varying data
US9471987B2 (en) Automatic planning for medical imaging
US20170135655A1 (en) Facial texture mapping to volume image
US9478048B2 (en) Prior image based three dimensional imaging
JP5114044B2 (en) Method and system for cutting out images having biological structures
JP5295562B2 (en) Flexible 3D rotational angiography-computed tomography fusion method
JP6251721B2 (en) Selective tissue visual suppression in image data
US20160287201A1 (en) One or more two dimensional (2d) planning projection images based on three dimensional (3d) pre-scan image data
EP2559003B1 (en) Image data segmentation
CN108320314B (en) Image processing method and device based on CT transverse image and display system
JP2002078706A (en) Computer-aided diagnosis method for supporting diagnosis of three-dimensional digital image data and program storage device
JP2011506032A (en) Image registration based on consistency index
WO2017202712A1 (en) Depth-enhanced tomosynthesis reconstruction
WO2012038863A1 (en) Quantification of a characteristic of a lumen of a tubular structure
EP2828826B1 (en) Extracting bullous emphysema and diffuse emphysema in e.g. ct volume images of the lungs
US8817014B2 (en) Image display of a tubular structure
JP5632920B2 (en) System and method for determining blur characteristics in a blurred image
US11950947B2 (en) Generation of composite images based on live images
US20230386128A1 (en) Image clipping method and image clipping system
EP4160546A1 (en) Methods relating to survey scanning in diagnostic medical imaging
EP4129182A1 (en) Technique for real-time volumetric imaging from multiple sources during interventional procedures

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180019052.8

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11721104

Country of ref document: EP

Kind code of ref document: A2

WWE Wipo information: entry into national phase

Ref document number: 2011721104

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 8271/CHENP/2012

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 13639189

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE