US20200265632A1 - System and method for real-time rendering of complex data - Google Patents

System and method for real-time rendering of complex data

Info

Publication number
US20200265632A1
Authority
US
United States
Prior art keywords
imaging data
material classification
voxel
determining
voxels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/867,322
Inventor
Oren KREDI
Yaron Vaxman
Roy PORAT
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HSBC Bank USA NA
Original Assignee
3D Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3D Systems Inc filed Critical 3D Systems Inc
Priority to US16/867,322
Publication of US20200265632A1
Assigned to HSBC BANK USA, N.A. reassignment HSBC BANK USA, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: 3D SYSTEMS, INC.
Assigned to HSBC BANK USA, N.A. reassignment HSBC BANK USA, N.A. CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NUMBER: 16873739 PREVIOUSLY RECORDED ON REEL 055206 FRAME 0487. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST. Assignors: 3D SYSTEMS, INC.
Assigned to 3D SYSTEMS, INC. reassignment 3D SYSTEMS, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: HSBC BANK USA, NATIONAL ASSOCIATION
Legal status: Abandoned

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 15/00 3D [Three Dimensional] image rendering
            • G06T 15/005 General purpose rendering architectures
            • G06T 15/06 Ray-tracing
            • G06T 15/08 Volume rendering
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0012 Biomedical image inspection
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10072 Tomographic images
                • G06T 2207/10081 Computed x-ray tomography [CT]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30008 Bone
                • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular
          • G06T 2210/00 Indexing scheme for image generation or computer graphics
            • G06T 2210/41 Medical
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Definitions

  • the invention relates generally to rendering data on a computer. More specifically, the invention relates to rendering three-dimensional (3D) objects on a two-dimensional (2D) display, virtual reality and/or augmented reality.
  • Currently, 3D objects can be rendered. Current systems can allow for visualizing 3D objects obtained with, for example, imaging devices, other systems and/or inputs. 3D objects can be processed for 3D printing, visualized on a two-dimensional (“2D”) screen, and/or visualized in augmented and/or virtual reality.
  • Current 3D objects can provide vital information for medical professionals. For example, for a doctor performing heart surgery on a neonate, visualizing the heart of the neonate, rather than a generic neonate heart model, can be the difference between a successful and unsuccessful surgery. It can be desirable to visualize via a computer screen (e.g., two-dimensional (“2D”) screen), via virtual reality, via augmented reality and/or by 3D printing a model of the neonate.
  • Although 3D objects can be presented to a user (e.g., a doctor), the presentation typically does not render the 3D object with sufficient speed to avoid delay when zooming or rotating the object. The delay can be so long that it can make usage of these current systems unrealizable.
  • a doctor may want to visualize the 3D object with the patient looking on.
  • a doctor may want to review the 3D object with another doctor.
  • the duration it takes current systems to render volumes may prevent the doctor from utilizing these current systems in scenarios where real-time processing is important.
  • current methods for rendering a CT image with 100 slices can have a rendering rate of 2 frames/second.
  • Virtual reality can require rendering at 90 frames/second.
  • current systems can suffer from an inability to render the 3D object with a high level of resolution such that small details (e.g., blood vessels of a heart) of an object can be understood when viewed.
  • One difficulty with accuracy can include differentiating between different parts of a 3D object. For example, when rendering a 3D object that includes tissue, current rendering techniques typically cannot distinguish between soft tissue and blood vessels. Thus, both parts are typically rendered having the same color. Therefore, it can be desirable to volume render 3D objects with sufficient speed and/or accuracy such that the rendering is usable.
  • One advantage of the invention is that it can provide an increased rendering speed and/or a reduction in an amount of data during visualization and/or 3D printing, by, for example, providing methods that can be faster and/or can require fewer computations than known methods.
  • one method can include determining a material classification for each voxel in the 3D imaging data; determining a corresponding transfer function for each material classification; and rendering each voxel in the 3D imaging data based on a transfer function that corresponds to the material classification corresponding to the voxel. Determining the corresponding transfer function may be further based on a HU value.
  • the determination of the material classification can further include: determining an initial material classification; segmenting the 3D imaging data; and determining the material classification based on the initial material classification and the segmented 3D imaging data. Determining an initial material classification value may be based on a HU value of the respective voxel, a probability map, or any combination thereof.
  • the segmenting is based on a magnitude of a gradient of each voxel. Segmenting the 3D imaging data may further include determining an intersection between the segmented 3D imaging data.
  • another method could include: performing a first raycasting on a 3D object to produce a first intermediary frame, performing a second raycasting on the 3D object to produce a second intermediary frame, and mixing the first intermediary frame and the second intermediary frame to render the 3D object.
  • the first raycasting can have a first start position and a first step size
  • the second raycasting can have a second start position and a second step size.
  • the first start position and the second start position can be different, and the first step size and the second step size can be different.
  • the first step size can be based on the sampling rate of the raycasting.
  • the second step size can be based on the first step size and an offset.
  • the second start position can be the first start position with an offset value.
  • the offset value can be randomly generated, input by a user, or any combination thereof.
  • mixing the first intermediary frame values and the second intermediary frame values can further include, for each pixel that is in the same pixel location in the first intermediary frame and the second intermediary frame, mixing the first intermediary frame values and the second intermediary frame values at the pixel location to determine final pixel values at the pixel location.
  • mixing the first intermediary frame values and the second intermediary frame values can further include averaging the first intermediary frame values and the second intermediary frame value, performing a weighted averaging of the first intermediary frame values and the second intermediary frame values, accumulated averaging, or any combination thereof.
  • another method could include generating a voxel grid based on the 3D object.
  • Each voxel in the voxel grid can have a three dimensional location, a size specification, and voxel values.
  • Each voxel in the voxel grid can represent a center point of a 3D cube volume having the respective size specification.
  • the size specification can be input by a user, based on a type of the 3D object, size of the 3D object, or any combination thereof.
  • For each voxel in the voxel grid, whether the 3D cube volume is empty can be determined. If the 3D cube volume is empty, an empty value can be assigned to the current voxel in the voxel grid. If the 3D cube volume is not empty, a present value can be assigned to the current voxel in the voxel grid. For each voxel in the voxel grid having a present value, the corresponding 3D cube volume based on the corresponding 3D object can be rendered to a frame for display on the 3D screen.
  • the method involves rendering the voxel grid based on the present and empty values to a frame for display on the 3D screen.
  • the rendering can be performed on a graphical processing unit.
  • Non-limiting examples of embodiments of the disclosure are described below with reference to figures attached hereto that are listed following this paragraph.
  • Identical features that appear in more than one figure are generally labeled with a same label in all the figures in which they appear.
  • a label labeling an icon representing a given feature of an embodiment of the disclosure in a figure can be used to reference the given feature.
  • Dimensions of features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale.
  • FIG. 1 shows a block diagram of a computing system for volume rendering, according to an illustrative embodiment of the invention
  • FIG. 2 shows a flowchart of a method for rendering 3D objects, according to an illustrative embodiment of the invention
  • FIG. 3 shows a table that can be used with the method of FIG. 2 , according to an illustrative embodiment of the invention
  • FIG. 4 shows a flowchart of a method for rendering 3D objects, according to an illustrative embodiment of the invention
  • FIG. 5 shows a flowchart of a method for rendering 3D objects, according to an illustrative embodiment of the invention.
  • FIG. 6 shows screenshots of a volume, an optimized tree and a frame, according to some embodiments of the invention.
  • the terms “plurality” and “a plurality” as used herein can include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” can be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • the term set when used herein can include one or more items.
  • the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
  • FIG. 1 shows a block diagram of a computing system for volume rendering, according to an illustrative embodiment of the invention.
  • Computing system 100 can include a controller 105 that can be, for example, a graphical processing unit (GPU) and/or a central processing unit processor (CPU), a chip or any suitable computing or computational device, an operating system 115 , a memory 120 , executable code 125 , a storage system 130 , input devices 135 and output devices 140 .
  • Controller 105 (or one or more controllers or processors, possibly across multiple units or devices) can carry out methods described herein, and/or execute the various modules and units described herein.
  • Operating system 115 can include any code segment (e.g., one similar to executable code 125 described herein) that can perform tasks involving coordination, scheduling, arbitration, supervising, controlling or other managing operation of computing system 100 , for example, scheduling execution of software programs or executable code segments, or enabling software programs or other modules or units to communicate.
  • Operating system 115 can be a commercial operating system.
  • Memory 120 can be a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units, or any combination thereof.
  • Memory 120 can be or can include a plurality of memory units.
  • Memory 120 can be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM.
  • Executable code 125 can be any executable code, e.g., an application, a program, a process, task or script. Executable code 125 can be executed by controller 105 possibly under control of operating system 115 .
  • executable code 125 can be an application that renders a 3D object onto a 2D screen or produces instructions for printing a 3D object by a 3D printer as further described herein.
  • a system according to some embodiments of the invention can include a plurality of executable code segments similar to executable code 125 that can be loaded into memory 120 and cause controller 105 to carry out methods described herein.
  • units or modules described herein can be, or can include, controller 105 , memory 120 and executable code 125 .
  • Storage system 130 can be or can include, for example, a hard disk drive, a Compact Disk (CD) drive, a CD-Recordable (CD-R) drive, a Blu-ray disk (BD), a universal serial bus (USB) device or other suitable removable and/or fixed storage unit.
  • Content can be stored in storage system 130 and can be loaded from storage system 130 into memory 120 where it can be processed by controller 105 .
  • storage system 130 can store 3D objects (e.g., imaging data 131 ), mask data 132 , rendering data 133 , a transfer function data 134 , voxel grid data 136 , and/or a material ID list 137 .
  • the 3D object can include any data that is representative of a 3D object.
  • the 3D objects can include 3D imaging data, mesh data, volumetric objects, polygon mesh objects, point clouds, functional representations of 3D objects, CAD files, 3D PDF files, STL files, and/or any inputs that can represent a 3D object.
  • the 3D imaging data can include medical imaging data, including Computed Tomography (“CT”) imaging data, a Cone Beam Computed Tomography (“CBCT”) imaging data, a Magnetic Resonance Imaging (“MRI”) imaging data and/or MRA imaging data (e.g., MRI with a contrast agent) or ultrasound imaging data.
  • the 3D objects can be of anatomy (e.g., complex anatomy), industrial data, or any 3D object.
  • Data stored in storage system 130 is further described herein. As is apparent to one of ordinary skill in the art, the storage system 130, and each dataset therein, can all be in one storage system or distributed into multiple storage systems, in various configurations.
  • memory 120 can be a non-volatile memory having the storage capacity of storage system 130 . Accordingly, although shown as a separate component, storage system 130 can be embedded or included in memory 120 .
  • Input devices 135 can be or can include a mouse, a keyboard, a touch screen or pad or any suitable input device. It will be recognized that any suitable number of input devices can be operatively connected to computing system 100 as shown by block 135 .
  • Output devices 140 can include one or more screens, displays or monitors, speakers and/or any other suitable output devices. It will be recognized that any suitable number of output devices can be operatively connected to computing system 100 as shown by block 140 .
  • I/O devices can be connected to computing system 100 as shown by blocks 135 and 140 .
  • a wired or wireless network interface card (NIC), a printer, a universal serial bus (USB) device or external hard drive can be included in input devices 135 and/or output devices 140 .
  • a system can include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable multi-purpose or specific processors or controllers (e.g., controllers similar to controller 105), a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units.
  • a system can additionally include other suitable hardware components and/or software components.
  • a system can include or can be, for example, a personal computer, a desktop computer, a laptop computer, a workstation, a server computer, a network device, or any other suitable computing device.
  • a system as described herein can include one or more devices such as computing system 100 .
  • Imaging data 131 can be any imaging data as known in the art, e.g., imaging data 131 can be medical data such as produced by a CT system or by an MRI system and/or by a CBCT system.
  • a mask included in masks 132 can correspond to object types.
  • the masks can be the right ventricle, the left ventricle, a right aorta, a left aorta.
  • the masks can be keys, buttons, wires, computer chips, screens, etc.
  • a mask may be, for example, a data construct including, for example, Boolean (e.g., 1/0, yes/no) values for each voxel or data point in a larger data set.
  • a mask may include visual markers (e.g., color and/or patterns) that specify appearance of the voxel. The mask when applied to the larger data set may indicate that a voxel is marked or is not marked.
  • For a 3D object (e.g., an organ such as a head or heart), the system can retrieve from memory a set of masks that correspond to a type of the object.
  • the system can populate each mask in the set of masks with the corresponding 3D object data. For example, an imaging data of a heart, a left chamber mask and a right chamber mask can be retrieved from memory. A portion of the imaging data that corresponds to the right chamber of the heart can be assigned to the right chamber mask and the portion of the imaging data that corresponds to the left chamber mask can be assigned to the left chamber mask.
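  • As an illustration of populating a mask with imaging data, the following sketch assumes the imaging data and masks are numpy arrays; the function and variable names (populate_mask, heart_ct, left_chamber_mask) are hypothetical and not part of the patent.

```python
import numpy as np

def populate_mask(imaging_data, mask):
    """Assign the portion of 3D imaging data covered by a Boolean mask.

    imaging_data: 3D array of voxel values (e.g., HU values).
    mask: Boolean array of the same shape; True marks voxels belonging to the mask.
    Voxels outside the mask are set to NaN as a "no data" sentinel.
    """
    populated = np.full(imaging_data.shape, np.nan, dtype=float)
    populated[mask] = imaging_data[mask]
    return populated

# Hypothetical usage: assign heart imaging data to left/right chamber masks.
# left_chamber = populate_mask(heart_ct, left_chamber_mask)
# right_chamber = populate_mask(heart_ct, right_chamber_mask)
```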
  • Operations performed on masks can include operations performed on voxels, e.g., eroding, dilating, expanding or shrinking voxel sizes and so on.
  • masks, meshes and voxels can be, or can be represented or described by, digital objects as known in the art, accordingly, any morphological or logical operations can be performed for or on masks, meshes and voxels and described herein.
  • dilating a voxel can include changing one or more values of the voxel, e.g., size, location and so on.
  • a group A of voxels can be dilated with a spherical element of radius R (denoted herein as “A ⊕ R”), a group A of voxels can be eroded with a spherical element of radius R (denoted herein as “A ⊖ R”), a group A of voxels can be opened with a spherical element of radius R (denoted herein as “A ∘ R”) and a group A of voxels can be closed with a spherical element of radius R (denoted herein as “A • R”).
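  • A minimal sketch of these morphological operations on Boolean voxel masks, using scipy.ndimage and a spherical structuring element; the helper names are illustrative and the patent does not prescribe a particular library.

```python
import numpy as np
from scipy import ndimage

def ball(radius):
    """Spherical structuring element with the given voxel radius."""
    grid = np.indices((2 * radius + 1,) * 3) - radius
    return (grid ** 2).sum(axis=0) <= radius ** 2

def morph(mask, radius, op):
    """Dilate, erode, open or close a Boolean voxel mask with a sphere of radius R."""
    ops = {
        "dilate": ndimage.binary_dilation,
        "erode": ndimage.binary_erosion,
        "open": ndimage.binary_opening,
        "close": ndimage.binary_closing,
    }
    return ops[op](mask, structure=ball(radius))

# a_dilated = morph(A, R, "dilate")   # A dilated with a spherical element of radius R
```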
  • the system can render the set of masks having assigned imaging data into a format that can be displayed on a 2D screen, displayed in virtual reality, displayed in augmented reality and/or 3D printed. For example, based on imaging data received from a CT or an MRI system and based on a set of masks, some embodiments can produce frames for displaying a 3D object on a 2D screen.
  • Rendering data 133 can be any data usable for rendering an image or for printing a 3D object.
  • rendering data 133 can be a set of pixel values (e.g., RGBA and HU values, hue, intensity and so on) that can be used to render or print an image or object as known in the art and/or rendering data 133 can be a set of instructions that can be used to cause a 3D printer to print an object as known in the art.
  • Transfer function 134 can be any function or logic.
  • transfer function 134 can be a function or logic that receives a set of input values (e.g., a HU value and a material identification) and produces as output a description of a pixel, e.g., RGBA values, location or offset of the pixel and the like.
  • transfer function 134 can include a set of transformation tables for a respective set of organs, e.g., a first transformation table for a heart, a second transformation table for a bone, and each transformation table can associate HU values with pixel descriptions as described.
  • Voxel grid 136 can be a set of voxels that represents a space or volume.
  • a voxel grid can be a set of voxels that represents a space containing a 3D organ or system, e.g., voxels in a voxel grid can cover, include, or occupy the space that includes a 3D representation of a blood vessel or system, a head and the like.
  • Any number of voxel grids can be used, for example, a first voxel grid can cover an entire object, organ or space, a second voxel grid can cover or relate to bones, a third voxel grid can describe, include, cover or be otherwise related to blood vessels and so on.
  • Material identification (ID) list 137 can be any list, table or construct that maps or links a set of values to a respective set of materials.
  • material ID list 137 can map HU values to materials, e.g., as shown by table 310 in FIG. 3 .
  • a mesh (or polygon mesh or triangle mesh as known in the art) can be a set of vertices, edges and faces that define or describe a 3D object. As known in the art, a mesh can be used for conversion into a format for 3D printing.
  • During volume rendering, it can be desirable to render a volume that distinguishes between parts of the 3D imaging data, for example, distinguishing between bone and a bed a patient lies upon during imaging.
  • the invention can involve determining a transfer function from a set of transfer functions for each voxel in the 3D imaging data based on a material classification index. Varying the transfer function among voxels within the 3D imaging data can allow for high quality images that have clarity, by for example, selecting a transfer function that corresponds to a particular part of an object being rendered.
  • FIG. 2 is a flowchart of a method for volume rendering, according to an illustrative embodiment of the invention.
  • the method involves determining a material classification for each voxel in the 3D imaging data (Step 210).
  • the material classification can be determined based on one or more attribute values of the voxel.
  • the material classification can be determined based on the HU attribute as follows: a voxel with a HU value in the range of -2048 to -600 can be assigned a material classification of air, and a voxel with a HU value in the range of 200 to 450 can be assigned a material classification of bone.
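  • A minimal sketch of such an HU-threshold classification, assuming the 3D imaging data is a numpy array of HU values; the air and bone ranges follow the text above, while the soft-tissue range and the label encoding are assumptions for illustration.

```python
import numpy as np

AIR, SOFT_TISSUE, BONE, UNKNOWN = 0, 1, 2, 255   # illustrative label encoding

def initial_classification(hu_volume):
    """Assign an initial material classification to every voxel by HU value."""
    labels = np.full(hu_volume.shape, UNKNOWN, dtype=np.uint8)
    labels[(hu_volume >= -2048) & (hu_volume < -600)] = AIR
    labels[(hu_volume >= -600) & (hu_volume < 200)] = SOFT_TISSUE   # assumed range
    labels[(hu_volume >= 200) & (hu_volume <= 450)] = BONE
    return labels
```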
  • the material classification is determined by i) performing an initial material classification of the voxels in the 3D imaging data, ii) segmenting the voxels in the 3D imaging data, and/or iii) performing a final material classification of the segmented voxels in the 3D imaging data.
  • the initial classification can vary based on the type of 3D imaging data (e.g., CT imaging data, CBCT imaging data, or MRA (MR plus contrast agent) imaging data).
  • the initial material classification can be based on the HU attribute.
  • FIG. 3 shows an example of table 310 that can be used to determine an initial material classification.
  • the HU ranges can correspond to material type.
  • table 310 is an example only, and HU values can be assigned to different material types.
  • the initial material classification can be based on results of a bias field estimation.
  • the expected means can be determined by determining a Gaussian Mixture Model of 6 Gaussians with means Ui.
  • the expected means of each material can be determined by a maximization upon voxels in the range of [-500, 3000].
  • the expected means can be set to [U1, . . . , U6, -1000].
  • An output of the bias field estimation can represent the bias field and a set of probability 3D Volumes P i for each material.
  • a Best Probability Map (BPM) can be determined based on the index i of the maximal probability for each voxel.
  • the initial material classification can be based on the BPM values.
  • voxels with a BPM value of 7 can be initially classified as “Air”; voxels with BPM values of 1, 2, 3 can be initially classified as “Soft Tissue”; voxels with BPM values of 4, 5, 6 can be initially classified as “Bone.”
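  • The bias field estimation itself is not reproduced in this text; the following is a simplified sketch of the Best Probability Map (BPM) idea for CT data, fitting a Gaussian Mixture Model to voxels in [-500, 3000], appending an "air" component, and taking the per-voxel argmax. The component ordering, the crude air probability, and the subsampling are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_probability_map(hu_volume, n_components=6, sample=200_000):
    """Sketch of a BPM: per-voxel index of the most probable material component."""
    values = hu_volume[(hu_volume >= -500) & (hu_volume <= 3000)].ravel()
    rng = np.random.default_rng(0)
    subset = rng.choice(values, size=min(sample, values.size), replace=False)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(subset.reshape(-1, 1))
    order = np.argsort(gmm.means_[:, 0])                       # components sorted by mean HU
    probs = gmm.predict_proba(hu_volume.reshape(-1, 1))[:, order]
    air = (hu_volume.reshape(-1, 1) < -600).astype(float)      # crude stand-in for the air component
    all_probs = np.concatenate([probs * (1 - air), air], axis=1)
    return all_probs.argmax(axis=1).reshape(hu_volume.shape) + 1   # 1-based, air last (index 7)

def classify_from_bpm(bpm):
    """Initial classification from BPM values, following the mapping in the text."""
    labels = np.empty(bpm.shape, dtype=object)
    labels[np.isin(bpm, [1, 2, 3])] = "Soft Tissue"
    labels[np.isin(bpm, [4, 5, 6])] = "Bone"
    labels[bpm == 7] = "Air"
    return labels
```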
  • the expected means can be determined by determining a Gaussian Mixture Model of 5 Gaussians with means Ui.
  • the expected means of each material can be calculated based on a smoothed and/or filtered MRA imaging data.
  • the MRA imaging data can be smoothed and/or a background threshold can be determined.
  • the background threshold can be determined as shown in EQNS. 1 and 2, as follows:
  • NoDataThreshold = 0.02 × maxHU(SmoothedMRAImagingData)  (EQN. 1)
  • ModifiedMRAImagingData = I[I > NoDataThreshold]  (EQN. 2), where I denotes the smoothed MRA imaging data.
  • the expected means can be set to [U1, . . . , U5].
  • the BPM can be determined based on the output of the bias field estimation.
  • the initial material classification for the MRA imaging data can be based on the BPM values. For example, voxels with a BPM value of 1 can be initially classified as “Air”; all voxels not classified as air can be classified as “body”; voxels at the perimeter of the “body” can be classified as “skin”; voxels with a BPM value of 2, that are classified as “body” and not classified as “skin” can be classified as “soft tissue”; voxels with a BPM value of 3 that are classified as “body” and not classified as “skin” can be classified as “muscle”; voxels with a BPM value above 4 that are classified as “body” and not classified as “skin” are classified as “vasculature”; remaining voxels are classified as “noise”.
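  • A short sketch of the MRA preprocessing in EQNS. 1 and 2, assuming numpy/scipy arrays; the Gaussian smoothing kernel is an assumption, since the text does not name the smoothing filter used.

```python
import numpy as np
from scipy import ndimage

def mra_foreground(mra_volume, sigma=1.0):
    """Smooth the MRA data and drop background voxels below the threshold."""
    smoothed = ndimage.gaussian_filter(mra_volume.astype(float), sigma=sigma)
    no_data_threshold = 0.02 * smoothed.max()        # EQN. 1
    foreground = smoothed > no_data_threshold        # EQN. 2: keep only voxels above the threshold
    return smoothed, foreground
```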
  • the 3D imaging data voxels can be segmented based on the initial material classification index. For example, assume that the material classification is bone, vasculature, and muscle. Each voxel in the 3D imaging data having a material classification of bone can be segmented into a first segment, each voxel in the 3D imaging data having a material classification of vasculature can be segmented into a second segment, and each voxel in the 3D imaging data having a material classification of muscle can be segmented into a third segment.
  • the segmentation can further involve dilating, eroding, opening and/or closing initially segmented voxels.
  • the segmentation can vary based on the type of 3D imaging data (e.g., CT imaging data, CBCT imaging data, or MRA imaging data).
  • the CT imaging data can be segmented into vessels, bones, muscle, and/or low contrast vasculature.
  • a vessel segment (“VesselSegment”), the segmentation can be determined as follows:
  • x′, y′, and z′ are the coordinates of the current voxel for which the material classification index is being determined.
  • a magnitude of the gradient can be determined as shown in EQN. 7, as follows:
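  • EQN. 7 is not reproduced in this text; a common way to compute a per-voxel gradient magnitude, shown here as an assumed stand-in, is the Euclidean norm of the finite-difference gradient.

```python
import numpy as np

def gradient_magnitude(volume, spacing=(1.0, 1.0, 1.0)):
    """Per-voxel gradient magnitude of a 3D volume (axes assumed ordered z, y, x)."""
    gz, gy, gx = np.gradient(volume.astype(float), *spacing)
    return np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)

# vessel_candidates = gradient_magnitude(ct_volume) > some_threshold   # illustrative use
```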
  • a bone segment (“BoneSegment”) can be determined as follows:
  • a muscle segment (“MuscleSegment”) can be determined as follows:
  • certain voxels segmented as “Vasculature” or “Bone” are improperly segmented, as they can be of another object (e.g., an object with an HU>100).
  • voxels can be segmented as “Bed” or “Skin,” as follows:
  • the CBCT imaging data can be segmented into skin and/or noise.
  • a CBCT skin segment (“SkinSegment”) and a noise segment (“NoiseSegment”) can be determined as follows:
  • a final material classification can be determined.
  • the 3D imaging data can have a final material classification that is equal to the initial material classification.
  • the 3D imaging data can be finally materially classified based on the segmentation.
  • voxels belonging to (GeneralSegment ⁇ VesselSegment ⁇ PossibleBoneSegment) can be finally materially classified as “Dense Bone”; voxels belonging to (GeneralSegment ⁇ (VesselSegment ⁇ ( ⁇ VesselSegment ⁇ PossibleBoneSegment))) can be finally materially classified as “Vasculature”; voxels belonging to (TrueBoneSegment ⁇ (MuscleMask ⁇ LowContrastSegment)) can be classified as “Bone/Vasculature”; voxels belonging to ( ⁇ TrueBoneSegment ⁇ MuscleSegment ⁇ (MuscleMask ⁇ LowContrastSegment)) can be finally materially classified as “Noise.”
  • voxels that have an initial material classification of “Fat” are removed using the open operation with 1 voxel radius of the “Fat” voxels.
  • the removed voxels can be finally materially classified as “Noise.”
  • voxels in SkinSegment can be finally materially classified as “Skin” and/or voxels in “BedSegment” can be finally materially classified as “Bed.”
  • voxels belonging to NoiseSegment are finally materially classified as “Noise” and voxels in SkinSegment can be finally materially classified as “Skin.”
  • voxels of MRA imaging data are finally materially classified according to their initial material classification.
  • Corrections to voxel material classifications can be made. For example, there might be voxels with HU values that are far from their initial threshold (e.g., due to the operations described above) which can be misclassified. To correct such misclassifications, an embodiment can perform some or all of the following steps:
  • Determining the material classification index for CT imaging data can also involve determining a maximal dimension of the input CT imaging data (e.g., the maximal width/length/depth) and, if the maximal dimension is above a threshold value, the CT imaging data can be scaled, e.g., to a predefined dimension.
  • the method can also involve determining a transfer function (e.g., transfer function 134 , as described above in FIG. 1 ) for each material classification (Step 215 ). For example, if the material classification (e.g., indicates a bone) of a first voxel of the 3D imaging data is different than the material classification (e.g., value indicates a tissue) of a second voxel, then the first voxel can be rendered using a first transfer function, and the second voxel can be rendered using a second transfer function.
  • the first transfer function and the second transfer function can be different.
  • the transfer function can be determined by selecting one transfer function from a set of transfer functions.
  • the set of transfer functions can be stored in memory and/or input by a user.
  • the set of transfer functions can be based on desired color for a particular object type. For example, vessels are typically red, and bone is typically white. Therefore, the transfer function can include a smooth transfer of colors between red and white. As is apparent to one of ordinary skill, this transfer function and colors discussed are for example purposes only.
  • the transfer functions can be constant for different HU/grayscale values (e.g., for CBCT and/or MRA). Table 1 shows an example of transfer functions based on classification and HU value, as follows:
  • multiple material classifications can have the same transfer function.
  • material classified as “Soft Tissue” can have the same transfer function as material classified as “Fat.”
  • material classified as “Bone” can have the same transfer function as material classified as “Dense Bone.”
  • the method can also involve rendering each voxel by applying a transfer function that corresponds to the material classification corresponding to the voxel (Step 220 ).
  • the transfer function can receive as input a HU value for the respective voxel. Based on the HU value, the transfer function can output a color (e.g., RGB color) to render the voxel.
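  • A minimal sketch of Step 220: look up a transfer function by material classification and apply it to each voxel's HU value to obtain RGBA values. The specific colors, HU breakpoints and material names here are illustrative assumptions, not the patent's values.

```python
import numpy as np

def bone_tf(hu):
    """Whitish, increasingly opaque with HU (illustrative)."""
    t = np.clip((hu - 200.0) / 1000.0, 0.0, 1.0)
    return np.stack([1.0 - 0.1 * t, 1.0 - 0.1 * t, 1.0 - 0.2 * t, 0.6 + 0.4 * t], axis=-1)

def vessel_tf(hu):
    """Reddish, increasingly opaque with HU (illustrative)."""
    t = np.clip((hu - 100.0) / 400.0, 0.0, 1.0)
    return np.stack([0.8 + 0.2 * t, 0.1 * np.ones_like(t), 0.1 * np.ones_like(t), 0.5 + 0.5 * t], axis=-1)

TRANSFER_FUNCTIONS = {"Bone": bone_tf, "Dense Bone": bone_tf, "Vasculature": vessel_tf}

def shade_voxels(hu_volume, material_labels):
    """Apply, per voxel, the transfer function matching its material classification."""
    rgba = np.zeros(hu_volume.shape + (4,), dtype=float)
    for material, tf in TRANSFER_FUNCTIONS.items():
        mask = material_labels == material
        rgba[mask] = tf(hu_volume[mask])
    return rgba
```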
  • FIG. 4 is a flowchart of a method for volume rendering, according to some embodiments of the invention.
  • Rendering 3D objects can involve identifying a region of interest and determining which points in that region of interest to render.
  • rendering 3D objects can involve raycasting. For each frame that is rendered on a 2D screen, into a virtual reality setting, and/or into an augmented reality setting, raycasting can be performed. Each time a visualized image is changed (e.g., zoomed, rotated, and/or modified), typically multiple new frames can be determined, and, for each new frame, raycasting can be performed.
  • the method can involve performing a first raycasting on the 3D object to produce a first intermediary frame, the first raycasting having a first start position and a first step size (Step 410 ).
  • the raycasting can have a sampling rate, and the step size can be based on the sampling rate. For example, if the sampling rate is 10 points per ray, then the step size can be 1/10.
  • the 3D object can be described by a volume.
  • the volume can be a cube that wholly encompasses (or substantially wholly encompasses) the 3D object.
  • the volume size can depend on a size of the 3D object, resolution of the 3D object, or any combination thereof.
  • a pediatric cardiac CT can be physically small, but the volume can be large due to, for example, a high resolution scan.
  • the volume can have a size of 255×255×300 voxels.
  • the volume is any volumetric shape as is known in the art.
  • the first start position can be the first voxel in the volume that has data, as seen from the viewpoint direction.
  • the method can involve performing a second raycasting on the 3D object to produce a second intermediary frame (Step 415 ).
  • the second raycasting can include a second start position and a second step size.
  • the first start position and the second start position can be different.
  • the second start position can be offset from the first start position.
  • the offset can be a randomly generated number, a user input, based on a noise function of a GPU, a constant value, or any combination thereof.
  • the offset is a random number between 0 and 1.
  • the offset is checked to ensure that the second start position is not beyond a predetermined distance from the first start position.
  • the first step size and the second step size can be different.
  • the second step size can be offset from the first step size. In some embodiments, the offset is greater than 1/4 of the first step size.
  • the method can include mixing the first intermediary frame and the second intermediary frame to render the 3D object.
  • Mixing the first intermediary frame and the second intermediary frame can involve mixing values (e.g., color values) of pixels of the first frame and the second frame that are at the same location in the frame.
  • mixing values involves taking an average of pixels of the first intermediary frame and the second intermediary frame that are at the same location in the frame. In some embodiments, mixing values involves taking a weighted average of pixels. For example, a higher weight can be given to a first frame and a lower weight can be given to a second, subsequent frame such that the second subsequent frame has less influence on the resulting pixel or frame. In some embodiments, mixing can be done with other functions that are selected based on the 3D object data type. For example, if the 3D object data type is mesh and ray intersection is performed, then a mixing function capable of mixing two or more ray intersections can be used.
  • the raycasting is performed more than n times, where n is an integer. In some embodiments, n is based on a desired level of detail in the rendered object. In some embodiments, n is an input.
  • the start position and/or the step size for each raycasting performed can be different or the same as the previous raycasting operation. In these embodiments, each time a raycasting is done, the mixing can involve determining an accumulated average of all raycastings.
  • each time a ray is cast, detail is added and/or the rendered object will appear richer. In some embodiments, the number of times the raycasting is performed depends on a desired level of fidelity for the image.
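  • A simplified sketch of the incremental raycasting described above: rays are cast orthographically along one axis of a precomputed RGBA volume (values in [0, 1]), each pass uses a jittered start offset, and the passes are combined by accumulated averaging. The orthographic projection and the offset range (a fraction of the step size) are simplifying assumptions, not the patent's exact procedure.

```python
import numpy as np

def raycast(rgba_volume, start_offset, step):
    """One front-to-back compositing pass; the offset and step are fractions of the volume depth."""
    depth = rgba_volume.shape[2]
    frame_rgb = np.zeros(rgba_volume.shape[:2] + (3,))
    remaining = np.ones(rgba_volume.shape[:2])           # transmittance left along each ray
    for t in np.arange(start_offset, 1.0, step):
        z = min(int(t * depth), depth - 1)
        sample = rgba_volume[:, :, z, :]
        alpha = sample[..., 3]
        frame_rgb += remaining[..., None] * alpha[..., None] * sample[..., :3]
        remaining *= (1.0 - alpha)
    return frame_rgb

def render_incremental(rgba_volume, n_passes=4, sampling_rate=10, rng=None):
    """Accumulated average of several raycasting passes with jittered start offsets."""
    rng = rng or np.random.default_rng()
    step = 1.0 / sampling_rate                            # step size from the sampling rate
    accumulated = None
    for i in range(n_passes):
        offset = 0.0 if i == 0 else rng.uniform(0.0, 1.0) * step   # jittered start offset
        frame = raycast(rgba_volume, offset, step)
        accumulated = frame if accumulated is None else (accumulated * i + frame) / (i + 1)
    return accumulated
```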
  • volume rendering can be prohibitively slow.
  • One difficulty is that typically during volume rendering all voxels within the volume, whether present or not, are rendered.
  • some parts of a volume can contain or include no corresponding data.
  • a 3D object of a head can occupy only part of a volume having a size of 64×64×256 voxels. Rendering all voxels in the 64×64×256 volume can be inefficient, since many of the voxels do not have any data that is of interest to be rendered (e.g., data outside of or unrelated to the object).
  • the volume is any volumetric shape as is known in the art.
  • FIG. 5 shows a flowchart of a method for volume rendering, according to an illustrative embodiment of the invention.
  • the method can involve creating cubes within a volume, and determining whether any data exists within each cube (e.g., determining a voxel grid). If a cube is empty the method can involve marking the cube as empty, e.g., by assigning a predefined value to the voxel. If the cube is not empty, e.g., includes data, then the method can involve using the value of the cube for rendering the 3D object to a frame for display on a 2D screen. In this manner, a number of cubes to render can be reduced.
  • the cube is any volumetric shape as is known in the art.
  • the method can involve generating a voxel grid based on the 3D object, each voxel in the voxel grid having a 3D location, size specification and voxel values (Step 510).
  • Each voxel in the voxel grid can represent a center point of a 3D cube volume. Multiple 3D cube volumes can be used such that they fill a volume space.
  • a volume can have a size of 255×255×300 voxels.
  • Multiple 3D cube volumes can be generated within the volume, each of the multiple 3D cubes having a unique 3D location within the volume.
  • the 3D cube volume can have a size specification.
  • the size specification can be a number of voxels to contain within the 3D cube volume or a 3D size specification (e.g., A×B×C voxels).
  • the 3D cube volume can have a size specification of 27 neighbor voxels.
  • the 3D cube volume has a size of 3×3×3 voxels.
  • the 3D cube volume can have a size specification of 4×4×4 voxels.
  • the 3D cube volume has 64 voxels disposed therein.
  • the size specification can be input by a user and/or depend on the object type of the 3D object.
  • the voxel grid can be generated by determining a center point of each 3D cube volume and assigning that center point to the voxel grid. For example, assume a volume that has four 3D cubes and that each 3D cube has 4 voxels disposed therein. In this example, the volume has 16 voxels, and the voxel grid has 4 voxels.
  • the method also involves, for each voxel in the voxel grid: i) determining if the 3D cube volume is empty, and ii) if the 3D cube volume is empty, then assigning an empty value to the current voxel in the voxel grid, otherwise assigning a present value to the current voxel in the voxel grid (Step 515).
  • Determining if the 3D cube volume is empty can involve evaluating the sampled voxels. If the sampled voxels contain values, then it is determined that the 3D cube volume is not empty. If the sampled voxels do not contain values, then it is determined that the 3D cube volume is empty. In some embodiments, if the values of the sampled voxels are above a threshold, then it is determined that the 3D cube volume is not empty, and if they are below a threshold, then it is determined that the 3D cube volume is empty.
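  • A minimal sketch of Steps 510-515: split the volume into cubes of a given size and mark each cube as empty or present. The cube size and threshold are illustrative inputs; on a GPU this per-cube test would typically run in parallel.

```python
import numpy as np

def build_occupancy_grid(volume, cube_size=4, threshold=0.0):
    """Voxel grid with one value per cube: 0 if the cube is empty, 1 if it has data."""
    grid_shape = tuple(int(np.ceil(s / cube_size)) for s in volume.shape)
    grid = np.zeros(grid_shape, dtype=np.uint8)
    for ix in range(grid_shape[0]):
        for iy in range(grid_shape[1]):
            for iz in range(grid_shape[2]):
                cube = volume[ix * cube_size:(ix + 1) * cube_size,
                              iy * cube_size:(iy + 1) * cube_size,
                              iz * cube_size:(iz + 1) * cube_size]
                grid[ix, iy, iz] = 1 if cube.max() > threshold else 0
    return grid

# Only cubes marked 1 need to be rendered; cubes marked 0 (e.g., air around a head) are skipped.
```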
  • Processing can decrease dramatically by identifying and ignoring parts of the volume that have little or no data. For example, cubes located in the space (e.g., air) around a 3D model of a head can be given a value of zero and can be ignored, thus drastically decreasing the required processing.
  • a tree structure is created by setting a size of cubes (e.g., relative to the size of the 3D object and/or 1/64 cube sizes, 64×64×64) and checking input data (e.g., CT data) for each cube; if the input data (e.g., the CT data) in the cube doesn't have values bigger than a threshold, some embodiments can mark the cube as not interesting (e.g., give it a value of zero or any other predefined value that represents no data). Cubes marked as not interesting (e.g., containing no data) can be ignored in further processing, e.g., left out of a tree structure.
  • some embodiments can mark the cube as interesting (e.g., give it a value of one or any other predefined value that represents data). Cubes marked as interesting (e.g., containing data) can be inserted or included in a tree structure. In some embodiments, a mesh can be created based on the tree structure, e.g., based on the cubes marked as interesting.
  • the method can also involve, for each cube in a voxel grid having a present value, rendering the corresponding 3D cube volume based on the corresponding 3D object (Step 520). For example, if a cube in a volume includes data (e.g., RGBA values that indicate or correspond to an object such as bone or blood vessel), then values of a pixel representing the cube in a 2D frame can be determined based on the data of the cube.
  • a tree that includes cubes marked as interesting can be created by a CPU, e.g., by controller 105 .
  • a tree can be created by a dedicated hardware unit, e.g., by a graphics processing unit (GPU).
  • a GPU can define a voxel grid, e.g., generate, based on 64×64×64 voxels in a memory of the GPU, a voxel grid having a cube at each of the points (each cube having a 3D position and a 3D size). It will be noted that implementing the method of FIG. 5 with a GPU can increase speed in comparison with a CPU.
  • a size of a 3D space over which an embodiment can search for cubes with (and/or without) data as described can be defined by a user. For example, rather than a volume of size 64×64×256 as described, any other size can be selected by a user. Any sampling rate, step or resolution can be used, e.g., instead of an 8×8×8 cube used for analyzing cubes in a GPU, a 16×16×16 cube can be used, e.g., in order to increase performance.
  • FIG. 6 shows screenshots of a volume, an optimized tree and a frame, according to some embodiments of the invention.
  • a volume can be represented by a single cube.
  • the volume can be broken into smaller cubes. For example, instead of using one cube for the whole, or entire, volume, some embodiments can break the volume into smaller cubes, possibly with the same size.
  • the color of cubes can be based on whether or not an area in the volume includes data.
  • Screenshot 630 shows an example of a volume rendering created using a tree as described.
  • the volume rendering can include any combination of performing the material classification, tree optimization, and incremental raycasting.
  • each of the verbs “comprise”, “include” and “have”, and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of components, elements or parts of the subject or subjects of the verb.
  • adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of an embodiment as described.
  • the word “or” is considered to be the inclusive “or” rather than the exclusive or, and indicates at least one of, or any combination of items it conjoins.
  • the method embodiments described herein are not constrained to a particular order in time or chronological sequence. Additionally, some of the described method elements can be skipped, or they can be repeated, during a sequence of operations of a method.

Abstract

Methods for volume rendering of 3D object data are provided. The methods can include classifying 3D object data and determining a transfer function to use to render the 3D object data based on the classification; incrementally determining voxels to render from the 3D object data; and/or determining a grid that represents the voxels and rendering based on the voxel grid.

Description

    PRIOR APPLICATION DATA
  • The present application is a continuation of prior U.S. application Ser. No. 15/360,326, entitled “SYSTEM AND METHOD FOR REAL-TIME RENDERING OF COMPLEX DATA”, filed on Nov. 23, 2016, incorporated herein by reference in its entirety.
  • FIELD OF THE INVENTION
  • The invention relates generally to rendering data on a computer. More specifically, the invention relates to rendering three-dimensional (3D) objects on a two-dimensional (2D) display, virtual reality and/or augmented reality.
  • BACKGROUND OF THE INVENTION
  • Currently, three-dimensional (“3D”) objects can be rendered. Current systems can allow for visualizing 3D objects obtained with, for example, imaging devices, other systems and/or inputs. Currently, 3D objects can be processed for 3D printing, visualized on a two-dimensional (“2D”) screen, and/or visualized in augmented and/or virtual reality.
  • Current 3D objects can provide vital information for medical professionals. For example, for a doctor performing heart surgery on a neonate, visualizing the heart of the neonate, rather than a generic neonate heart model, can be the difference between a successful and unsuccessful surgery. It can be desirable to visualize via a computer screen (e.g., two-dimensional (“2D”) screen), via virtual reality, via augmented reality and/or by 3D printing a model of the neonate.
  • Current systems for visualizing 3D objects can be limited. For example, although 3D objects can be presented to a user (e.g., a doctor), the presentation typically does not render the 3D object with sufficient speed to avoid delay when zooming or rotating the object. The delay can be so long that it can make usage of these current systems unrealizable. For example, a doctor may want to visualize the 3D object with the patient looking on. In another example, a doctor may want to review the 3D object with another doctor. In these scenarios, the duration it takes current systems to render volumes may prevent the doctor from utilizing these current systems in scenarios where real-time processing is important. For example, current methods for rendering a CT image with 100 slices can have a rendering rate of 2 frames/second. Virtual reality can require rendering at 90 frames/second.
  • Separately, current systems can suffer from an inability to render the 3D object with a high level of resolution such that small details (e.g., blood vessels of a heart) of an object can be understood when viewed. One difficulty with accuracy can include differentiating between different parts of a 3D object. For example, when rendering a 3D object that includes tissue, current rendering techniques typically cannot distinguish between soft tissue and blood vessels. Thus, both parts are typically rendered having the same color. Therefore, it can be desirable to volume render 3D objects with sufficient speed and/or accuracy such that the rendering is usable.
  • SUMMARY OF THE INVENTION
  • One advantage of the invention is that it can provide an increased rendering speed and/or a reduction in an amount of data during visualization and/or 3D printing, by, for example, providing methods that can be faster and/or can require fewer computations than known methods.
  • Another advantage of the invention is that it can provide a more accurate representation of 3D objects, showing for example, small details of an object with a high level of fidelity. Another advantage of the invention is that it can provide faster zooming, rotating, mesh creation, mask creation, data modification, and addition and/or removal of segments of the 3D object when visualized. Another advantage of the invention is that it can allow for improved distinguishing between parts of an object.
  • According to embodiments of the present invention, there are provided methods for volume rendering three-dimensional (3D) image data, and non-transient computer readable mediums containing program instructions for causing the methods.
  • According to embodiments of the present invention, one method can include determining a material classification for each voxel in the 3D imaging data; determining a corresponding transfer function for each material classification; and rendering each voxel in the 3D imaging data based on a transfer function that corresponds to the material classification corresponding to the voxel. Determining the corresponding transfer function may be further based on a HU value.
  • In some embodiments of the present invention, the determination of the material classification can further include: determining an initial material classification; segmenting the 3D imaging data; and determining the material classification based on the initial material classification and the segmented 3D imaging data. Determining an initial material classification value may be based on a HU value of the respective voxel, a probability map, or any combination thereof.
  • In some embodiments of the invention, the segmenting is based on a magnitude of a gradient of each voxel. Segmenting the 3D imaging data may further include determining an intersection between the segmented 3D imaging data.
  • According to embodiments of the present invention, another method could include: performing a first raycasting on a 3D object to produce a first intermediary frame, performing a second raycasting on the 3D object to produce a second intermediary frame, and mixing the first intermediary frame and the second intermediary frame to render the 3D object.
  • The first raycasting can have a first start position and a first step size, and the second raycasting can have a second start position and a second step size. The first start position and the second start position can be different, and the first step size and the second step size can be different. The first step size can be based on the sampling rate of the raycasting. The second step size can be based on the first step size and an offset.
  • In some embodiments of the invention, the second start position can be the first start position with an offset value. The offset value can be randomly generated, input by a user, or any combination thereof.
  • In some embodiments of the invention, mixing the first intermediary frame values and the second intermediary frame values can further include, for each pixel that is in the same pixel location in the first intermediary frame and the second intermediary frame, mixing the first intermediary frame values and the second intermediary frame values at the pixel location to determine final pixel values at the pixel location.
  • In some embodiments of the invention, mixing the first intermediary frame values and the second intermediary frame values can further include averaging the first intermediary frame values and the second intermediary frame value, performing a weighted averaging of the first intermediary frame values and the second intermediary frame values, accumulated averaging, or any combination thereof.
  • According to embodiments of the present invention, another method could include generating a voxel grid based on the 3D object. Each voxel in the voxel grid can have a three dimensional location, a size specification, and voxel values. Each voxel in the voxel grid can represent a center point of a 3D cube volume having the respective size specification. The size specification can be input by a user, based on a type of the 3D object, size of the 3D object, or any combination thereof.
  • For each voxel in the voxel grid, whether the 3D cube volume is empty can be determined. If the 3D cube volume is empty, an empty value can be assigned to the current voxel in the voxel grid. If the 3D cube volume is not empty, a present value can be assigned to the current voxel in the voxel grid. For each voxel in the voxel grid having a present value, the corresponding 3D cube volume based on the corresponding 3D object can be rendered to a frame for display on the 3D screen.
  • In some embodiments, the method involves rendering the voxel grid based on the present and empty values to a frame for display on the 3D screen. The rendering can be performed on a graphical processing unit.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Non-limiting examples of embodiments of the disclosure are described below with reference to figures attached hereto that are listed following this paragraph. Identical features that appear in more than one figure are generally labeled with a same label in all the figures in which they appear. A label labeling an icon representing a given feature of an embodiment of the disclosure in a figure can be used to reference the given feature. Dimensions of features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale.
  • The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to the organization and the method of operation, together with objects, features and advantages thereof, can best be understood by reference to the following detailed description when read with the accompanied drawings. Embodiments of the invention are illustrated by way of example and are not limited to the figures of the accompanying drawings, in which like reference numerals indicate corresponding, analogous or similar elements, and in which:
  • FIG. 1 shows a block diagram of a computing system for volume rendering, according to an illustrative embodiment of the invention;
  • FIG. 2 shows a flowchart of a method for rendering 3D objects, according to an illustrative embodiment of the invention;
  • FIG. 3 shows a table that can be used with the method of FIG. 2, according to an illustrative embodiment of the invention;
  • FIG. 4 shows a flowchart of a method for rendering 3D objects, according to an illustrative embodiment of the invention;
  • FIG. 5 shows a flowchart of a method for rendering 3D objects, according to an illustrative embodiment of the invention; and
  • FIG. 6 shows screenshots of a volume, an optimized tree and a frame, according to some embodiments of the invention.
  • It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn accurately or to scale. For example, the dimensions of some of the elements can be exaggerated relative to other elements for clarity, or several physical components can be included in one functional block or element. Further, where considered appropriate, reference numerals can be repeated among the figures to indicate corresponding or analogous elements.
  • DETAILED DESCRIPTION
  • In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the invention can be practiced without these specific details. In other instances, well-known methods, procedures, and components, modules, units and/or circuits have not been described in detail so as not to obscure the invention. Some features or elements described with respect to one embodiment can be combined with features or elements described with respect to other embodiments. For the sake of clarity, discussion of same or similar features or elements may not be repeated.
  • Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, "processing," "computing," "calculating," "determining," "establishing", "analyzing", "checking", or the like, can refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium that can store instructions to perform operations and/or processes. Although embodiments of the invention are not limited in this regard, the terms "plurality" and "a plurality" as used herein can include, for example, "multiple" or "two or more". The terms "plurality" or "a plurality" can be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. The term "set" when used herein can include one or more items. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
  • FIG. 1 shows a block diagram of a computing system for volume rendering, according to an illustrative embodiment of the invention. Computing system 100 can include a controller 105 that can be, for example, a graphical processing unit (GPU) and/or a central processing unit processor (CPU), a chip or any suitable computing or computational device, an operating system 115, a memory 120, executable code 125, a storage system 130, input devices 135 and output devices 140.
  • Controller 105 (or one or more controllers or processors, possibly across multiple units or devices) can carry out methods described herein and/or execute the various modules and units described herein.
  • Operating system 115 can include any code segment (e.g., one similar to executable code 125 described herein) that can perform tasks involving coordination, scheduling, arbitration, supervising, controlling or other managing operation of computing system 100, for example, scheduling execution of software programs or executable code segments, or enabling software programs or other modules or units to communicate. Operating system 115 can be a commercial operating system.
  • Memory 120 can be a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units, or any combination thereof. Memory 120 can be or can include a plurality of memory units. Memory 120 can be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM.
  • Executable code 125 can be any executable code, e.g., an application, a program, a process, task or script. Executable code 125 can be executed by controller 105 possibly under control of operating system 115. For example, executable code 125 can be an application that renders a 3D object onto a 2D screen or produces instructions for printing a 3D object by a 3D printer as further described herein. Although, for the sake of clarity, a single item of executable code 125 is shown in FIG. 1, a system according to some embodiments of the invention can include a plurality of executable code segments similar to executable code 125 that can be loaded into memory 120 and cause controller 105 to carry out methods described herein. For example, units or modules described herein can be, or can include, controller 105, memory 120 and executable code 125.
  • Storage system 130 can be or can include, for example, a hard disk drive, a Compact Disk (CD) drive, a CD-Recordable (CD-R) drive, a Blu-ray disk (BD), a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. Content can be stored in storage system 130 and can be loaded from storage system 130 into memory 120 where it can be processed by controller 105. As shown, storage system 130 can store 3D objects (e.g., imaging data 131), mask data 132, rendering data 133, transfer function data 134, voxel grid data 136, and/or a material ID list 137. The 3D object can include any data that is representative of a 3D object. The 3D objects can include 3D imaging data, mesh data, volumetric objects, polygon mesh objects, point clouds, functional representations of 3D objects, CAD files, 3D PDF files, STL files, and/or any inputs that can represent a 3D object. The 3D imaging data can include medical imaging data, including Computed Tomography ("CT") imaging data, Cone Beam Computed Tomography ("CBCT") imaging data, Magnetic Resonance Imaging ("MRI") imaging data and/or MRA imaging data (e.g., MRI with a contrast agent) or ultrasound imaging data. The 3D objects can be of anatomy (e.g., complex anatomy), industrial data, or any 3D object.
  • Data stored in storage system 130 is further described herein. As is apparent to one of ordinary skill in the art, the storage system 130, and each dataset therein, can be in one storage system or distributed across multiple storage systems in various configurations.
  • In some embodiments, some of the components shown in FIG. 1 can be omitted. For example, memory 120 can be a non-volatile memory having the storage capacity of storage system 130. Accordingly, although shown as a separate component, storage system 130 can be embedded or included in memory 120.
  • Input devices 135 can be or can include a mouse, a keyboard, a touch screen or pad or any suitable input device. It will be recognized that any suitable number of input devices can be operatively connected to computing system 100 as shown by block 135. Output devices 140 can include one or more screens, displays or monitors, speakers and/or any other suitable output devices. It will be recognized that any suitable number of output devices can be operatively connected to computing system 100 as shown by block 140.
  • Any applicable input/output (I/O) devices can be connected to computing system 100 as shown by blocks 135 and 140. For example, a wired or wireless network interface card (NIC), a printer, a universal serial bus (USB) device or external hard drive can be included in input devices 135 and/or output devices 140.
  • A system according to some embodiments of the invention can include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable multi-purpose or specific processors or controllers (e.g., controllers similar to controller 105), a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units. A system can additionally include other suitable hardware components and/or software components. In some embodiments, a system can include or can be, for example, a personal computer, a desktop computer, a laptop computer, a workstation, a server computer, a network device, or any other suitable computing device. For example, a system as described herein can include one or more devices such as computing system 100.
  • Imaging data 131 can be any imaging data as known in the art, e.g., imaging data 131 can be medical data such as produced by a CT system or by an MRI system and/or by a CBCT system.
  • A mask included in masks 132 can correspond to object types. For example, for an object of a heart, the masks can be the right ventricle, the left ventricle, a right aorta, a left aorta. For a cellphone, the masks can be keys, buttons, wires, computer chips, screens, etc. A mask may be, for example, a data construct including, for example, Boolean (e.g., 1/0, yes/no) values for each voxel or data point in a larger data set. A mask may include visual markers (e.g., color and/or patterns) that specify appearance of the voxel. The mask when applied to the larger data set may indicate that a voxel is marked or is not marked.
  • In some embodiments, input of a 3D object (e.g., an organ such as a head or heart) is received. The system can retrieve from memory a set of masks that correspond to a type of the object. The system can populate each mask in the set of masks with the corresponding 3D object data. For example, for imaging data of a heart, a left chamber mask and a right chamber mask can be retrieved from memory. A portion of the imaging data that corresponds to the right chamber of the heart can be assigned to the right chamber mask and the portion of the imaging data that corresponds to the left chamber mask can be assigned to the left chamber mask.
  • Operations performed on masks can include operations performed on voxels, e.g., eroding, dilating, expanding or shrinking voxel sizes and so on. It will be understood that masks, meshes and voxels can be, or can be represented or described by, digital objects as known in the art; accordingly, any morphological or logical operations can be performed for or on masks, meshes and voxels as described herein. For example, dilating a voxel can include changing one or more values of the voxel, e.g., size, location and so on.
  • For example, using morphological or logical operations, a group A of voxels can be dilated with a spherical element of radius R (denoted herein as "A⊕R"), a group A of voxels can be eroded with a spherical element of radius R (denoted herein as "A⊖R"), a group A of voxels can be opened with a spherical element of radius R (denoted herein as "A⊚R") and a group A of voxels can be closed with a spherical element of radius R (denoted herein as "A⊙R").
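  • As an illustration only, the following minimal sketch shows one way such morphological operations can be applied to a voxel group stored as a boolean volume; it assumes NumPy/SciPy, and the helper names are hypothetical rather than part of the claimed method.

```python
# Minimal sketch of dilating, eroding, opening and closing a voxel group A with a
# spherical structuring element of radius R, assuming A is a boolean NumPy volume.
import numpy as np
from scipy import ndimage

def spherical_element(radius):
    """Boolean ball of the given voxel radius, used as the structuring element."""
    r = int(radius)
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (xx ** 2 + yy ** 2 + zz ** 2) <= r ** 2

def dilate(a, radius):       # A (+) R
    return ndimage.binary_dilation(a, structure=spherical_element(radius))

def erode(a, radius):        # A (-) R
    return ndimage.binary_erosion(a, structure=spherical_element(radius))

def open_group(a, radius):   # erosion followed by dilation
    return ndimage.binary_opening(a, structure=spherical_element(radius))

def close_group(a, radius):  # dilation followed by erosion
    return ndimage.binary_closing(a, structure=spherical_element(radius))
```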
  • The system can render the set of masks having assigned imaging data into a format that can be displayed on a 2D screen, displayed in virtual reality, displayed in augmented reality and/or 3D printed. For example, based on imaging data received from a CT or an MRI system and based on a set of masks, some embodiments can produce frames for displaying a 3D object on a 2D screen.
  • Rendering data 133 can be any data usable for rendering an image or for printing a 3D object. For example, rendering data 133 can be a set of pixel values (e.g., RGBA and HU values, hue, intensity and so on) that can be used to render or print an image or object as known in the art and/or rendering data 133 can be a set of instructions that can be used to cause a 3D printer to print an object as known in the art.
  • Transfer function 134 can be any function or logic. For example, transfer function 134 can be a function or logic that receives a set of input values (e.g., a HU value and a material identification) and produces as output a description of a pixel, e.g., RGBA values, location or offset of the pixel and the like. For example, transfer function 134 can include a set of transformation tables for a respective set of organs, e.g., a first transformation table for a heart, a second transformation table for a bone, and each transformation table can associate HU values with pixel descriptions as described.
  • Voxel grid 136 can be a set of voxels that represents a space or volume. For example, a voxel grid can be a set of voxels that represents a space containing a 3D organ or system, e.g., voxels in a voxel grid can cover, include, or occupy the space that includes a 3D representation of a blood vessel or system, a head and the like. Any number of voxel grids can be used, for example, a first voxel grid can cover an entire object, organ or space, a second voxel grid can cover or relate to bones, a third voxel grid can describe, include, cover or be otherwise related to blood vessels and so on.
  • Material identification (ID) list 137 can be any list, table or construct that maps or links a set of values to a respective set of materials. For example, material ID list 137 can map HU values to materials, e.g., as shown by table 310 in FIG. 3.
  • A mesh (or polygon mesh or triangle mesh as known in the art) can be a set of vertices, edges and faces that define or describe a 3D object. As known in the art, a mesh can be used for conversion into a format for 3D printing.
  • During volume rendering, it can be desirable to render a volume that distinguishes between parts of the 3D imaging data, for example, between bone and a bed a patient lies upon during imaging.
  • In one aspect, the invention can involve determining a transfer function from a set of transfer functions for each voxel in the 3D imaging data based on a material classification index. Varying the transfer function among voxels within the 3D imaging data can allow for high quality images that have clarity, by for example, selecting a transfer function that corresponds to a particular part of an object being rendered.
  • FIG. 2 is a flowchart of a method for volume rendering, according to an illustrative embodiment of the invention. The method involves determining a material classification for each voxel in the 3D imaging data (Step 210). The material classification can be determined based on one or more attribute values of the voxel. For example, the material classification can be determined based on the HU attribute as follows: a voxel with a HU value in the range of −2048 to −600 can be assigned a material classification of air, and a voxel with a HU value in the range of 200 to 450 can be assigned a material classification of bone.
  • In some embodiments, the material classification is determined by i) performing an initial material classification of the voxels in the 3D imaging data, ii) segmenting the voxels in the 3D imaging data, and/or iii) performing a final material classification of the segmented voxels in the 3D imaging data.
  • i. Performing an Initial Material Classification
  • The initial classification can vary based on the type of 3D imaging data (e.g., CT imaging data, CBCT imaging data, or MRA (MR plus contrast agent) imaging data).
  • For CT imaging data, the initial material classification can be based on the HU attribute. For example, FIG. 3 shows an example of table 310 that can be used to determine an initial material classification. As shown by table 310, the HU ranges can correspond to material type. As is apparent to one of ordinary skill in the art, table 310 is an example only, and HU values can be assigned to different material types.
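  • For illustration, a minimal sketch of such an HU-range classification is shown below; the air and bone ranges are taken from the text above, the remaining ranges are example values only, and the function names are hypothetical.

```python
# Sketch of an initial material classification from HU ranges, in the spirit of
# table 310. Ranges marked "example" are illustrative assumptions.
import numpy as np

HU_RANGES = [
    (-2048, -600, "Air"),        # from the text above
    (-599, -100, "Fat"),         # example
    (-99, 199, "Soft Tissue"),   # example
    (200, 450, "Bone"),          # from the text above
    (451, 3000, "Dense Bone"),   # example
]

def initial_classification(hu_volume):
    """Return an integer label volume; 0 means 'unclassified'."""
    labels = np.zeros(hu_volume.shape, dtype=np.uint8)
    for index, (low, high, _name) in enumerate(HU_RANGES, start=1):
        labels[(hu_volume >= low) & (hu_volume <= high)] = index
    return labels
```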
  • For CBCT imaging data, the initial material classification can be based on results of a bias field estimation. The bias field estimation can be determined as shown in M. N. Ahmed et al., "A Modified Fuzzy C-Means Algorithm for Bias Field Estimation and Segmentation of MRI Data," 2002, IEEE Transactions on Medical Imaging, incorporated herein by reference in its entirety, with the following as example inputs: expected means, σ=0.5, ε=10⁻⁵, α=1, p=2. The expected means can be determined by determining a Gaussian Mixture Model of 6 Gaussians with means Ui. The expected means of each material can be determined by a maximization upon voxels in the range of [−500, 3000]. The expected means can be set to [U1, . . . , U6, −1000]. An output of the bias field estimation can represent the bias field and a set of 3D probability volumes Pi, one for each material. A Best Probability Map (BPM) can be determined based on the index i of the maximal probability for each voxel. The initial material classification can be based on the BPM values. For example, voxels with a BPM value of 7 can be initially classified as "Air"; voxels with BPM values of 1, 2, 3 can be initially classified as "Soft Tissue"; voxels with BPM values of 4, 5, 6 can be initially classified as "Bone."
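  • A minimal sketch of the Best Probability Map step is shown below; it assumes the probability volumes Pi have already been produced by the bias field estimation, and the mapping function is illustrative.

```python
# Sketch: derive the Best Probability Map (BPM) as the index of the maximal
# probability per voxel, then map BPM values to the initial CBCT classes
# described above. The bias field estimation itself is assumed to exist elsewhere.
import numpy as np

def best_probability_map(probability_volumes):
    """probability_volumes: sequence of arrays P_1..P_n, one per material.
    Returns a volume of 1-based indices of the most probable material."""
    stacked = np.stack(probability_volumes, axis=0)   # shape (n, Z, Y, X)
    return np.argmax(stacked, axis=0) + 1

def initial_cbct_classes(bpm):
    """Map BPM values to initial classes: 1-3 soft tissue, 4-6 bone, 7 air."""
    classes = np.full(bpm.shape, "Soft Tissue", dtype=object)
    classes[np.isin(bpm, (4, 5, 6))] = "Bone"
    classes[bpm == 7] = "Air"
    return classes
```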
  • For MRA imaging data, the initial material classification can be based on the bias field estimation, e.g., as described above with respect to CBCT imaging data, with the following as example inputs: expected means, σ=1, ε=10⁻⁵, α=1, p=2. The expected means can be determined by determining a Gaussian Mixture Model of 5 Gaussians with means Ui. The expected means of each material can be calculated based on smoothed and/or filtered MRA imaging data.
  • For example, the MRA imaging data can be smoothed and/or a background threshold can be determined. The MRA imaging data can be smoothed via a Gaussian smoothing method with σ=1. The background threshold can be determined as shown in EQN. 1 and 2, as follows:

  • NoDataThreshold = 0.02 * maxHU(SmoothedMRAimagingdata)  EQN. 1

  • modifiedMRAimagingdata = [I > NoDataThreshold]  EQN. 2
  • The expected means can be set to [U1, . . . U5]. The BPM can be determined based on the output of the bias field estimation. The initial material classification for the MRA imaging data can be based on the BPM values. For example, voxels with a BPM value of 1 can be initially classified as “Air”; all voxels not classified as air can be classified as “body”; voxels at the perimeter of the “body” can be classified as “skin”; voxels with a BPM value of 2, that are classified as “body” and not classified as “skin” can be classified as “soft tissue”; voxels with a BPM value of 3 that are classified as “body” and not classified as “skin” can be classified as “muscle”; voxels with a BPM value above 4 that are classified as “body” and not classified as “skin” are classified as “vasculature”; remaining voxels are classified as “noise”.
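  • The smoothing and background-threshold step of EQN. 1 and EQN. 2 can be sketched as follows; variable names are illustrative.

```python
# Sketch of the MRA pre-processing above: Gaussian smoothing with sigma = 1 and a
# "no data" threshold at 2% of the maximum smoothed intensity (EQN. 1), used to
# keep only voxels above that threshold (EQN. 2).
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_mra(mra_volume):
    smoothed = gaussian_filter(mra_volume.astype(np.float32), sigma=1.0)
    no_data_threshold = 0.02 * smoothed.max()        # EQN. 1
    foreground_mask = smoothed > no_data_threshold   # EQN. 2: [I > NoDataThreshold]
    return smoothed, foreground_mask
```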
  • ii. Segmenting the Voxels
  • The 3D imaging data voxels can be segmented based on the initial material classification index. For example, assume that the material classification is bone, vasculature, and muscle. Each voxel in the 3D imaging data having a material classification of bone can be segmented into a first segment, each voxel in the 3D imaging data having a material classification of vasculature can be segmented into a second segment, and each voxel in the 3D imaging data having a material classification of muscle can be segmented into a third segment.
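  • As a simple illustration of this step, a label volume can be split into per-material boolean segments; the label values used here are hypothetical.

```python
# Sketch: split an integer classification volume into boolean segments, one per
# material, matching the bone / vasculature / muscle example above.
import numpy as np

def split_into_segments(classification, label_names):
    """classification: integer label volume; label_names: {label_value: name}.
    Returns {name: boolean volume marking the voxels of that material}."""
    return {name: classification == value for value, name in label_names.items()}

# Example usage with hypothetical label values:
# segments = split_into_segments(labels, {4: "Bone", 6: "Vasculature", 3: "Muscle"})
```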
  • In some embodiments, the segmentation can further involve dilating, eroding, opening and/or closing initially segmented voxels. In some embodiments, the segmentation can vary based on the type of 3D imaging data (e.g., CT imaging data, CBCT imaging data, or MRA imaging data).
  • In some embodiments, the CT imaging data can be segmented into vessels, bones, muscle, and/or low contrast vasculature. For a vessel segment (“VesselSegment”), the segmentation can be determined as follows:
    • a. Voxels that are initially classified as “Bone” or “Vasculature” can be segmented into a general segment (“GeneralSegment”);
    • b. Voxels that are initially classified as “Dense Bone” or “Teeth” can be segmented into possibly metal segment (“PossibleMetalSegment”);
    • c. Voxels that are initially classified as "Vasculature" can be segmented into a low vessel segment ("LowVesselSegment");
    • d. VesselSegment=GeneralSegment∪(|∇g|<150):
      • The magnitude of the gradient |∇g| can be determined as shown in EQN. 3 through 6, as follows (a code sketch follows this list):
  • Gx(x, y, z) = ∂I/∂x = (I(x+1, y, z) − I(x, y, z)) / 2  EQN. 3
  • Gy(x, y, z) = ∂I/∂y = (I(x, y+1, z) − I(x, y, z)) / 2  EQN. 4
  • Gz(x, y, z) = ∂I/∂z = (I(x, y, z+1) − I(x, y, z)) / 2  EQN. 5
  • ∇g = [Gx, Gy, Gz]  EQN. 6
  • where x, y and z are coordinates of the current voxel for which the material classification index is being determined. The magnitude of the gradient can be determined as shown in EQN. 7, as follows:
  • |∇g| = √(Gx² + Gy² + Gz²)  EQN. 7
    • e. PossibleMetalSegment=PossibleMetalSegment⊕3;
    • f. VesselSegment=VesselSegment ∪ (PossibleMetalSegment∩GeneralSegment)—e.g., step "f" can remove images of a stent or other metal elements around vessels;
    • g. LowVesselSegment=(LowVesselSegment⊙1)⊚3—e.g., step “g” can connect small vessels to each other, the opening operation can remove noise caused by muscle tissue;
    • h. VesselSegment=((VesselSegment∪LowVesselSegment)⊚1)⊙2—e.g., step "h" can connect small vessels to the large vessels in VesselSegment, and can remove noise from bone tissues around the vessels;
    • i. Remove all connected components that are under 1000 voxels from VesselSegment —e.g., to eliminate noise that is, for example, characterized, or caused, by small connected components; and
    • j. VesselSegment=VesselSegment⊙4—VesselSegment can include vessel tissues or boundaries of soft tissue.
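  • The gradient magnitude test used in step "d" above can be sketched as follows, assuming the volume axes are ordered x, y, z; the function name is illustrative.

```python
# Sketch of EQN. 3-7: forward-difference gradient components and their magnitude,
# used to restrict VesselSegment to voxels with |grad g| < 150.
import numpy as np

def gradient_magnitude(volume):
    gx = np.zeros_like(volume, dtype=np.float32)
    gy = np.zeros_like(volume, dtype=np.float32)
    gz = np.zeros_like(volume, dtype=np.float32)
    gx[:-1, :, :] = (volume[1:, :, :] - volume[:-1, :, :]) / 2.0   # G_x, EQN. 3
    gy[:, :-1, :] = (volume[:, 1:, :] - volume[:, :-1, :]) / 2.0   # G_y, EQN. 4
    gz[:, :, :-1] = (volume[:, :, 1:] - volume[:, :, :-1]) / 2.0   # G_z, EQN. 5
    return np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)                    # EQN. 7

# low_gradient_mask = gradient_magnitude(hu_volume) < 150
```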
  • A bone segment (“BoneSegment”) can be determined as follows:
    • a. Voxels that are initially classified as “Bone” or “Dense Bone” or “Teeth” can be segmented into true bone segment (“TrueBoneSegment”);
    • b. Remove connected components below 50 voxels from the TrueBoneSegment—e.g., step “b” can reduce calcifications that were classified as bone;
    • c. TrueBoneSegment=TrueBoneSegment⊕4—e.g., can ensure that TrueBoneSegment covers an area surrounding dense bone tissue;
    • d. PossibleBoneSegment=GeneralSegment∪¬VesselSegment;
    • e. Remove from PossibleBoneSegment every connected component that does not touch TrueBoneSegment by 100 voxels to obtain BoneSegment—e.g., step “e” can connect the soft bone tissue voxels to the dense bone tissue voxels.
  • A muscle segment (“MuscleSegment”) can be determined as follows:
    • a. Voxels that are initially classified as “Muscle” can be segmented into MuscleSegment;
    • b. Voxels that are initially classified as "Vasculature" can be segmented into a low contrast segment ("LowContrastSegment").
  • In some embodiments, certain voxels segmented as “Vasculature” or “Bone” are improperly segmented, as they can be of another object (e.g., an object with an HU>100). In some embodiments, voxels can be segmented as “Bed” or “Skin,” as follows:
    • a. Voxels with HU above −150 can be segmented into a relevant segment (“RelevantSegment”);
    • b. A largest connected component in RelevantSegment can be segmented into an object segment (“ObjectSegment”);
    • c. ObjectSegment=ObjectSegment⊕2—e.g., step "c" can fill holes in ObjectSegment; ObjectSegment can describe the object's body in the scene, with its close surroundings;
    • d. BedSegment=[HU>−500]∩ObjectSegment;
    • e. SkinSegment=((ObjectSegment⊖1)∩¬(ObjectSegment⊕3))⊕2
  • In some embodiments, the CBCT imaging data can be segmented into skin and/or noise. A CBCT skin segment ("SkinSegment") and a noise segment ("NoiseSegment") can be determined as follows:
    • a. A minimal HU of the CBCT 3D imaging data can be marked as "MinHUVal".
    • b. A probable object segment ("ProbableObjectSegment") can be determined: ProbableObjectSegment=([HU>−500]⊙1)⊙1;
    • c. An object segment ("ObjectSegment") can be determined by region growing ProbableObjectSegment with a standard deviation (e.g., 10), bounding the HU values to be between [−600, −500] (a region-growing sketch follows this list);
    • d. A relaxed object segment (“ObjectSegmentRelaxed”) can be the result of region growing ProbableObjectSegment with HU values of [−700, −500] and standard deviation of 10;
    • e. ObjectSegment=(ObjectSegment⊚5)∪(ObjectSegmentRelaxed⊚5)—e.g., step "e" can fill holes in ObjectSegment;
    • f. NoiseSegment=[HU>−500]∩¬ObjectSegment;
    • g. SkinSegment=(ObjectSegment⊕1)∩¬(ObjectSegment⊖1);
    • h. A DICOM scene segment ("DicomSceneSegment") can be determined: DicomSceneSegment=[HU>MinHUVal];
    • i. DicomSceneSegment=DicomSceneSegment⊚3—e.g., step "i" can fill holes in the DicomSceneSegment;
    • j. Remove the last 50 rows in DicomSceneSegment—e.g., step "j" can be required due to the fact that most scans are in the shape of a tube, thus the back parts of the skull can be neglected;
    • k. SkinSegment=(SkinSegment∩DicomSceneSegment)⊕2;
    • l. NoiseSegment=NoiseSegment∩¬SkinSegment.
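  • The region-growing step referenced above can be approximated by the following sketch, which grows the seed mask by iterative dilation while only admitting voxels whose HU values lie inside the given bounds; it is a simplified stand-in for the standard-deviation-driven growing described in the text, and the names are illustrative.

```python
# Sketch: region growing bounded by an HU range, e.g. growing
# ProbableObjectSegment with HU values restricted to [-600, -500].
import numpy as np
from scipy import ndimage

def region_grow(seed_mask, hu_volume, hu_bounds, max_iterations=200):
    low, high = hu_bounds
    allowed = (hu_volume >= low) & (hu_volume <= high)
    grown = seed_mask.copy()
    for _ in range(max_iterations):
        expanded = ndimage.binary_dilation(grown) & (allowed | grown)
        if np.array_equal(expanded, grown):   # converged, nothing more to add
            break
        grown = expanded
    return grown

# object_segment = region_grow(probable_object_segment, hu_volume, (-600, -500))
```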
      iii. Performing a Final Material Classification
  • A final material classification can be determined. In some embodiments, the 3D imaging data can have a final material classification that is equal to the initial material classification.
  • The 3D imaging data can be finally materially classified based on the segmentation.
  • In various embodiments, for CT imaging data, voxels belonging to (GeneralSegment∩¬VesselSegment∩PossibleBoneSegment) can be finally materially classified as "Dense Bone"; voxels belonging to (GeneralSegment∩(VesselSegment∩(¬VesselSegment∪¬PossibleBoneSegment))) can be finally materially classified as "Vasculature"; voxels belonging to (TrueBoneSegment∩(MuscleSegment∪LowContrastSegment)) can be classified as "Bone/Vasculature"; voxels belonging to (¬TrueBoneSegment∩¬MuscleSegment∩(MuscleSegment∪LowContrastSegment)) can be finally materially classified as "Noise."
  • In some embodiments, voxels that have an initial material classification of “Fat” are removed using the open operation with 1 voxel radius of the “Fat” voxels. The removed voxels can be finally materially classified as “Noise.”
  • In various embodiments, voxels in SkinSegment can be finally materially classified as “Skin” and/or voxels in “BedSegment” can be finally materially classified as “Bed.”
  • In various embodiments, for CBCT imaging data, voxels belonging to NoiseSegment are finally materially classified as "Noise" and voxels in SkinSegment can be finally materially classified as "Skin."
  • In some embodiments, voxels of MRA imaging data are finally materially classified according to their initial material classification.
  • Various corrections or modifications of voxel material classifications can be made. For example, there might be voxels with HU values that are far from their initial threshold (e.g., due to the operations as described above) which can be misclassified. To correct such misclassifications, an embodiment can perform some or all of the following steps (a code sketch follows this list):
    • a. "Fat" voxels with HU above −20 are classified as "Vasculature";
    • b. "Fat" voxels that are neighbors to a bone tissue voxel are classified as "Bone";
    • c. "Muscle" voxels that have HU above 390 are classified as "Bone/Vasculature";
    • d. "Skin" voxels with HU above 10 are classified as "Muscle".
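  • A minimal sketch of the correction rules a-d is shown below; it assumes the classifications are stored as a NumPy array of strings (dtype=object) alongside an HU volume of the same shape, and the one-voxel neighborhood test for rule "b" is an illustrative choice.

```python
# Sketch of the misclassification corrections above. Rule order follows a-d.
import numpy as np
from scipy import ndimage

def correct_labels(labels, hu):
    bone_like = np.isin(labels, ["Bone", "Dense Bone", "Teeth"])
    near_bone = ndimage.binary_dilation(bone_like)   # one-voxel neighborhood

    labels = labels.copy()
    labels[(labels == "Fat") & (hu > -20)] = "Vasculature"          # rule a
    labels[(labels == "Fat") & near_bone] = "Bone"                  # rule b
    labels[(labels == "Muscle") & (hu > 390)] = "Bone/Vasculature"  # rule c
    labels[(labels == "Skin") & (hu > 10)] = "Muscle"               # rule d
    return labels
```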
  • Determining the material classification index for CT imaging data can also involve determining a maximal dimension of the input CT imaging data (e.g., the maximal width/length/depth) and, if the maximal dimension is above a threshold value, the CT imaging data can be scaled, e.g., to a predefined dimension.
  • The method can also involve determining a transfer function (e.g., transfer function 134, as described above in FIG. 1) for each material classification (Step 215). For example, if the material classification (e.g., indicates a bone) of a first voxel of the 3D imaging data is different than the material classification (e.g., value indicates a tissue) of a second voxel, then the first voxel can be rendered using a first transfer function, and the second voxel can be rendered using a second transfer function. The first transfer function and the second transfer function can be different.
  • The transfer function can be determined by selecting one transfer function from a set of transfer functions. The set of transfer functions can be stored in memory and/or input by a user. The set of transfer functions can be based on desired color for a particular object type. For example, vessels are typically red, and bone is typically white. Therefore, the transfer function can include a smooth transfer of colors between red and white. As is apparent to one of ordinary skill, this transfer function and the colors discussed are for example purposes only. In some embodiments, the transfer functions can be constant for different HU/grayscale values (e.g., for CBCT and/or MRA). Table 1 shows an example of transfer functions based on classification and HU value, as follows:
  • TABLE 1

    Classification    HU Value          RGB Value
    Air               −2048 and −600    (0, 150, 150)
    Skin Fat          −2048 and −600    (255, 70, 30)
    Skin Fat          400 and −61       (255, 70, 30)
    Skin Fat          −60 and 0         Smoothly transferring between (255, 70, 30) and (134, 6, 6)
    Skin Fat          1 and 100         Smoothly transferring between (134, 6, 6) and (251, 135, 115)
    Skin Fat          101 and 500       Smoothly transferring between (251, 135, 115) and (247, 170, 119)
    Skin Fat          above 501         (247, 170, 119)
    Skin              −2048 and −61     (255, 70, 30)
    Skin              −60 and 20        Smoothly transferring between (255, 70, 30) and (65, 0, 0)
    Skin              above 20          (65, 0, 0)
    Fat               −2048 and −61     (255, 120, 17)
    Fat               −60 and 200       Smoothly transferring between (255, 120, 17) and (65, 0, 0)
    Fat               200 and 800       Smoothly transferring between (65, 0, 0) and (251, 135, 115)
    Fat               above 800         (251, 135, 115)
    Muscle            −2048 and 100     (65, 0, 0)
    Muscle            101 and 800       Smoothly transferring between (65, 0, 0) and (251, 135, 115)
    Muscle            above 800         (251, 135, 115)
    Muscle/Vessel     −2048 and 199     (134, 6, 6)
    Muscle/Vessel     200 and 800       Smoothly transferring between (134, 6, 6) and (251, 135, 115)
    Muscle/Vessel     above 800         (251, 135, 115)
    Vessel            −2048 and 300     (134, 6, 6)
    Vessel            200 and 800       Smoothly transferring between (134, 6, 6) and (251, 135, 115)
    Vessel            above 800         (251, 135, 115)
    Bone/Vessel       −2048 and 0       (134, 6, 6)
    Bone/Vessel       0 and 150         Smoothly transferring between (134, 6, 6) and (251, 135, 115)
    Bone/Vessel       151 and 400       Smoothly transferring between (251, 135, 115) and (247, 170, 119)
    Bone/Vessel       above 400         (247, 170, 119)
    Bone Boundary     −2048 and 0       (247, 170, 119)
    Bone Boundary     0 and 150         Smoothly transferring between (247, 170, 119) and (205, 205, 205)
    Bone Boundary     above 150         (205, 205, 205)
    Strong Bone       All               (255, 255, 255)
    Teeth             All               (255, 255, 255)
    Vessel Noise      All               (0, 0, 0)
    Bed               All               (0, 0, 0)
  • In some embodiments, multiple material classifications can have the same transfer function. For example, for CBCT imaging data, material classified as "Soft Tissue" can have the same transfer function as material classified as "Fat." In another example, material classified as "Bone" can have the same transfer function as material classified as "Dense Bone."
  • The method can also involve rendering each voxel by applying a transfer function that corresponds to the material classification corresponding to the voxel (Step 220). The transfer function can receive as input a HU value for the respective voxel. Based on the HU value, the transfer function can output a color (e.g., RGB color) to render the voxel.
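  • A minimal sketch of Step 220 is shown below: the material classification selects a breakpoint list (in the spirit of Table 1), and the voxel's HU value is mapped to an RGB color by interpolating between breakpoints. The "Muscle" breakpoints shown follow Table 1; the helper names are illustrative, not part of the claimed method.

```python
# Sketch: apply a per-classification transfer function that smoothly interpolates
# RGB colors over HU ranges, as in Table 1.
import numpy as np

# (HU, (R, G, B)) breakpoints for one classification ("Muscle" row of Table 1).
MUSCLE_BREAKPOINTS = [(-2048, (65, 0, 0)), (100, (65, 0, 0)), (800, (251, 135, 115))]

def apply_transfer_function(hu_value, breakpoints):
    hu_points = [hu for hu, _ in breakpoints]
    rgb_points = np.array([rgb for _, rgb in breakpoints], dtype=np.float32)
    r = np.interp(hu_value, hu_points, rgb_points[:, 0])
    g = np.interp(hu_value, hu_points, rgb_points[:, 1])
    b = np.interp(hu_value, hu_points, rgb_points[:, 2])
    return int(r), int(g), int(b)   # values above the last breakpoint clamp to it

# apply_transfer_function(450, MUSCLE_BREAKPOINTS) -> color between the endpoints
```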
  • FIG. 4 is a flowchart of a method for volume rendering, according to some embodiments of the invention. Rendering 3D objects can involve identifying a region of interest and determining which points in that region of interest to render. For example, rendering 3D objects can involve raycasting. For each frame that is rendered on a 2D screen, into a virtual reality setting, and/or into an augmented reality setting, raycasting can be performed. Each time a visualized image is changed (e.g., zoomed, rotated, and/or modified), typically multiple new frames can be determined, and, for each new frame, raycasting can be performed. Current methods for raycasting can be computationally intensive and/or result in large amounts of data, which can cause rendering of 3D objects via raycasting to be prohibitively slow. For example, rendering a 3D object of a heart onto a 2D screen, such that a user can rotate the heart, can cause the image of the heart when rotating to appear pixelated and/or cause a user's computer to crash. The method can involve raycasting incrementally, which can cause computational intensity and/or data amount to be reduced.
  • The method can involve performing a first raycasting on the 3D object to produce a first intermediary frame, the first raycasting having a first start position and a first step size (Step 410). The raycasting can have a sampling rate, and the step size can be based on the sampling rate. For example, if the sampling rate is 10 points per ray, then the step size can be 1/10. The 3D object can be described by a volume. The volume can be a cube that wholly encompasses (or substantially wholly encompasses) the 3D object.
  • The volume size can depend on a size of the 3D object, resolution of the 3D object, or any combination thereof. For example, a pediatric cardiac CT can be physically small, but the volume can be large due to, for example, a high resolution scan. In some embodiments, the volume can have a size of 255×255×300 voxels. In various embodiments, the volume is any volumetric shape as is known in the art.
  • The first start position can be the first voxel in the volume that has data, as seen from the viewpoint direction.
  • The method can involve performing a second raycasting on the 3D object to produce a second intermediary frame (Step 415). The second raycasting can include a second start position and a second step size. The first start position and the second start position can be different. The second start position can be offset from the first start position. The offset can be a randomly generated number, a user input, based on a noise function of a GPU, a constant value, or any combination thereof. In some embodiments, the offset is a random number between 0 and 1. In some embodiments, the offset is checked to ensure that the second start position is not beyond a predetermined distance from the first start position.
  • The first step size and the second step size can be different. The second step size can be offset from the first step size. In some embodiments, the offset is greater than ¼ the first step size.
  • As shown by step 420, the method can include mixing the first intermediary frame and the second intermediary frame to render the 3D object. Mixing the first intermediary frame and the second intermediary frame can involve mixing values (e.g., color values) of pixels of the first frame and the second frame that are at the same location in the frame.
  • In some embodiments, mixing values involves taking an average of pixels of the first intermediary frame and the second intermediary frame that are at the same location in the frame. In some embodiments, mixing values involves taking a weighted average of pixels. For example, a higher weight can be given to a first frame and a lower weight can be given to a second, subsequent frame such that the second subsequent frame has less influence on the resulting pixel or frame. In some embodiments, mixing can be done with other functions that are selected based on the 3D object data type. For example, if the 3D object data type is mesh and ray intersection is performed, then a mixing function capable of mixing two or more ray intersections can be used.
  • In some embodiments, the raycasting is performed more than n times, where n is an integer. In some embodiments, n is based on a desired level of detail in the rendered object. In some embodiments, n is an input. Each time a raycasting is done, the start position and/or the step size for each raycasting performed can be different or the same as the previous raycasting operation. In these embodiments, each time a raycasting is done, the mixing can involve determining an accumulated average of all raycastings. In some embodiments, each time a ray is cast, detail is added and/or the rendered object will appear richer. In some embodiments, the number of times the raycasting is performed depends on a desired level of fidelity for the image.
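  • A minimal sketch of this incremental scheme is shown below; `raycast_pass` stands in for the actual renderer (it is assumed to return a frame as a float array), and the jitter and weighting choices are illustrative.

```python
# Sketch: n raycasting passes with jittered start offsets, blended by an
# accumulated average so each additional pass adds detail.
import numpy as np

def render_incrementally(volume, n_passes, base_step, raycast_pass, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    accumulated = None
    for i in range(n_passes):
        offset = rng.random()   # random offset in [0, 1), scaled by the step size
        frame = raycast_pass(volume, start_offset=offset * base_step,
                             step_size=base_step)
        if accumulated is None:
            accumulated = frame.astype(np.float64)
        else:
            # accumulated average: previous passes carry weight i, the new pass 1
            accumulated = (accumulated * i + frame) / (i + 1)
    return accumulated
```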
  • As described above, current methods for volume rendering can be prohibitively slow. One difficulty is that typically during volume rendering all voxels within the volume, whether present or not, are rendered. For example, some parts of a volume can contain or include no corresponding data. For example, a 3D object of a head can occupy only part of a volume having a size of 64×64×256 voxels. Rendering all voxels in the 64×64×256 volume can be inefficient, since many of the voxels do not have any data that is of interest to be rendered (e.g., data outside of or unrelated to the object). In various embodiments, the volume is any volumetric shape as is known in the art.
  • FIG. 5 is a flowchart of a method for volume rendering, according to an illustrative embodiment of the invention. Generally, the method can involve creating cubes within a volume, and determining whether any data exists within each cube (e.g., determining a voxel grid). If a cube is empty, the method can involve marking the cube as empty, e.g., by assigning a predefined value to the voxel. If the cube is not empty, e.g., includes data, then the method can involve using the value of the cube for rendering the 3D object to a frame for display on a 2D screen. In this manner, a number of cubes to render can be reduced. In various embodiments, the cube is any volumetric shape as is known in the art.
  • The method can involve generating a voxel grid based on the 3D object, each voxel in the voxel grid having a 3D location, size specification and voxel values (Step 510). Each voxel in the voxel grid can represent a center point of a 3D cube volume. Multiple 3D cube volumes can be used such that they fill a volume space.
  • For example, a volume can have a size of 255×255×300 voxels. Multiple 3D cube volumes can be generated within the volume, each of the multiple 3D cubes having a unique 3D location within the volume. The 3D cube volume can have a size specification. The size specification can be a number of voxels to contain within the 3D cube volume or a 3D size specification (e.g., A×B×C voxels). For example, the 3D cube volume can have a size specification of 27 neighbor voxels. In this example, the 3D cube volume has a size of 3×3×3 voxels. In another example, the 3D cube volume can have a size specification of 4×4×4 voxels. In this example, the 3D cube volume has 64 voxels disposed therein. The size specification can be input by a user and/or depend on the object type of the 3D object.
  • The voxel grid can be generated by determining a center point of each 3D cube volume and assigning that center point to the voxel grid. For example, assume a volume that has four 3D cubes and that each 3D cube has 4 voxels disposed therein. In this example, the volume has 16 voxels, and the voxel grid has 4 voxels.
  • The method also involves, for each voxel in the voxel grid: i) determining if the 3D cube volume is empty, and ii) if the 3D cube volume is empty, then assigning an empty value to the current voxel in the voxel grid, otherwise assigning a present value to the current voxel in the voxel grid (Step 515).
  • Determining if the 3D cube volume is empty can involve evaluating the sampled voxels. If the sampled voxels contain values, then it is determined that the 3D cube volume is not empty. If the sampled voxels do not contain values, then it is determined that the 3D cube volume is empty. In some embodiments, if the values of the sampled voxels are above a threshold, then it is determined that the 3D cube volume is not empty, and if they are below a threshold, then it is determined that the 3D cube volume is empty.
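  • The emptiness test can be sketched as follows; the cube size and threshold shown are illustrative choices, and the grid simply records a present/empty flag per cube.

```python
# Sketch of Steps 510-515: divide the volume into cubes, mark each cube present
# or empty based on whether any contained value exceeds a threshold.
import numpy as np

def build_occupancy_grid(volume, cube_size=8, threshold=0.0):
    zs, ys, xs = volume.shape
    grid_shape = (zs // cube_size, ys // cube_size, xs // cube_size)
    grid = np.zeros(grid_shape, dtype=bool)   # False = empty, True = present
    for gz in range(grid_shape[0]):
        for gy in range(grid_shape[1]):
            for gx in range(grid_shape[2]):
                cube = volume[gz * cube_size:(gz + 1) * cube_size,
                              gy * cube_size:(gy + 1) * cube_size,
                              gx * cube_size:(gx + 1) * cube_size]
                grid[gz, gy, gx] = cube.max() > threshold
    return grid

# Only cubes flagged as present are rendered in the later pass (Step 520).
```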
  • Processing can decrease dramatically by identifying and ignoring parts of the volume that have little or no data. For example, cubes located in the space (e.g., air) around a 3D model of a head can be given a value of zero and can be ignored, thus drastically decreasing the required processing.
  • In some embodiments, a tree structure is created by setting a size of cubes (e.g., relative to the size of the 3D object and/or 1/64 cube sizes, 64×64×64) and checking input data (e.g., CT data) for each cube; if the input data (e.g., the CT data) in the cube does not have values bigger than a threshold, some embodiments can mark the cube as not interesting (e.g., give it a value of zero or any other predefined value that represents no data). Cubes marked as not interesting (e.g., containing no data) can be ignored in further processing, e.g., left out of a tree structure.
  • If a value greater than a threshold is found in a cube, some embodiments can mark the cube as interesting (e.g., give it a value of one or any other predefined value that represents data). Cubes marked as interesting (e.g., containing data) can be inserted or included in a tree structure. In some embodiments, a mesh can be created based on the tree structure, e.g., based on the cubes marked as interesting.
  • The method can also involve, for each cube in a voxel grid having a present value, rendering the corresponding 3D cube volume based on the corresponding 3D object (Step 520). For example, if a cube in a volume includes data (e.g., RGBA values that indicate or correspond to an object such as bone or blood vessel), then values of a pixel representing the cube in a 2D frame can be determined based on the data of the cube. In some embodiments, a tree that includes cubes marked as interesting can be created by a CPU, e.g., by controller 105. In some embodiments, a tree can be created by a dedicated hardware unit, e.g., by a graphics processing unit (GPU).
  • For example, in some embodiments, a GPU can define a voxel grid, e.g., generate, based on 64×64×64 voxels in a memory of the GPU, a voxel grid having a cube at each of the points (each having a 3D position and 3D size). It will be noted that implementing the method of FIG. 5 with a GPU can increase speed in comparison with a CPU.
  • A size of a 3D space over which an embodiment can search for cubes with (and/or without) data as described can be defined by a user. For example, rather than a volume of size 64×64×256 as described, any other size can be selected by a user. Any sampling rate, step or resolution can be used, e.g., instead of an 8×8×8 cube used for analyzing cubes in a GPU, a 16×16×16 cube can be used, e.g., in order to increase performance.
  • FIG. 6 shows screenshots of a volume, an optimized tree and a frame, according to some embodiments of the invention. As shown by screenshot 610, a volume can be represented by a single cube. As shown by screenshot 620, the volume can be broken into smaller cubes. For example, instead of using one cube for the whole, or entire, volume, some embodiments can break the volume into smaller cubes, possibly with the same size. The color of cubes can be based on whether or not an area in the volume includes data. Screenshot 630 shows an example of a volume rendering created using a tree as described.
  • As is apparent to one of ordinary skill in the art, the volume rendering can include any combination of performing the material classification, tree optimization, and incremental raycasting.
  • In the description and claims of the present application, each of the verbs, “comprise” “include” and “have”, and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of components, elements or parts of the subject or subjects of the verb. Unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of an embodiment as described. In addition, the word “or” is considered to be the inclusive “or” rather than the exclusive or, and indicates at least one of, or any combination of items it conjoins.
  • Descriptions of embodiments of the invention in the present application are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments. Some embodiments utilize only some of the features or possible combinations of the features. Variations of embodiments of the invention that are described, and embodiments comprising different combinations of features noted in the described embodiments, will occur to a person having ordinary skill in the art. The scope of the invention is limited only by the claims.
  • Unless explicitly stated, the method embodiments described herein are not constrained to a particular order in time or chronological sequence. Additionally, some of the described method elements can be skipped, or they can be repeated, during a sequence of operations of a method.
  • While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents can occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
  • Various embodiments have been presented. Each of these embodiments can of course include features from other embodiments presented, and embodiments not specifically described can include various features described herein.

Claims (15)

1. A method for volume rendering three-dimensional (3D) imaging data via a computer processor, the method comprising:
for each voxel in the 3D imaging data, the computer processor determining a material classification index;
the computer processor selecting, based on each material classification index and an HU value, a corresponding transfer function from a stored plurality of transfer functions; and
the computer processor rendering each voxel in the 3D imaging data based on its respective selected transfer function.
2. The method of claim 1 wherein determining the material classification further comprises:
determining an initial material classification;
segmenting the 3D imaging data; and
determining the material classification based on the initial material classification and the segmented 3D imaging data.
3. The method of claim 2 wherein determining an initial material classification value is based on a HU value of the respective voxel.
4. The method of claim 2 wherein determining an initial material classification value is based on a probability map.
5. The method of claim 2 wherein the segmenting is based on a magnitude of a gradient of each voxel.
6. The method of claim 2 wherein segmenting the 3D imaging data further comprises eroding, opening or closing all or a portion of the 3D imaging data.
7. The method of claim 2 wherein determining the material classification further comprises determining an intersection between the segmented 3D imaging data.
8. A non-transient computer readable medium containing program instructions for causing a computer processor to perform the method of:
for each voxel in 3D imaging data, the computer processor determining a material classification index;
for each combination of HU value and material classification index, the computer processor selecting a corresponding transfer function from a stored plurality of transfer functions; and
the computer processor rendering each voxel in the 3D imaging data based on its respective selected transfer function.
9. A system for volume rendering three-dimensional (3D) imaging data via a computer processor, the system comprising:
a memory; and
a processor to:
for each voxel in the 3D imaging data, determine a material classification index;
select, based on each material classification index and an HU value, a corresponding transfer function from a stored plurality of transfer functions; and
render each voxel in the 3D imaging data based on its respective selected transfer function.
10. The system of claim 9 wherein determining the material classification further comprises:
determining an initial material classification;
segmenting the 3D imaging data; and
determining the material classification based on the initial material classification and the segmented 3D imaging data.
11. The system of claim 10 wherein determining an initial material classification value is based on a HU value of the respective voxel.
12. The system of claim 10 wherein determining an initial material classification value is based on a probability map.
13. The system of claim 10 wherein the segmenting is based on a magnitude of a gradient of each voxel.
14. The system of claim 10 wherein segmenting the 3D imaging data further comprises dilating, eroding, opening or closing all or a portion of the 3D imaging data.
15. The system of claim 10 wherein determining the material classification further comprises determining an intersection between the segmented 3D imaging data.
US16/867,322 2016-11-23 2020-05-05 System and method for real-time rendering of complex data Abandoned US20200265632A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/867,322 US20200265632A1 (en) 2016-11-23 2020-05-05 System and method for real-time rendering of complex data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/360,326 US10726608B2 (en) 2016-11-23 2016-11-23 System and method for real-time rendering of complex data
US16/867,322 US20200265632A1 (en) 2016-11-23 2020-05-05 System and method for real-time rendering of complex data

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/360,326 Continuation US10726608B2 (en) 2016-11-23 2016-11-23 System and method for real-time rendering of complex data

Publications (1)

Publication Number Publication Date
US20200265632A1 true US20200265632A1 (en) 2020-08-20

Family

ID=62147169

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/360,326 Active 2037-01-13 US10726608B2 (en) 2016-11-23 2016-11-23 System and method for real-time rendering of complex data
US16/867,322 Abandoned US20200265632A1 (en) 2016-11-23 2020-05-05 System and method for real-time rendering of complex data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/360,326 Active 2037-01-13 US10726608B2 (en) 2016-11-23 2016-11-23 System and method for real-time rendering of complex data

Country Status (5)

Country Link
US (2) US10726608B2 (en)
EP (1) EP3545499A4 (en)
JP (2) JP7039607B2 (en)
CN (2) CN117422812A (en)
WO (1) WO2018097881A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10964095B1 (en) * 2020-04-07 2021-03-30 Robert Edwin Douglas Method and apparatus for tandem volume rendering

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10586400B2 (en) * 2018-02-23 2020-03-10 Robert E Douglas Processing 3D medical images to enhance visualization
US11318677B2 (en) * 2017-12-20 2022-05-03 Hewlett-Packard Development Company, L.P. Feature protection for three-dimensional printing
JP2020040219A (en) * 2018-09-06 2020-03-19 富士ゼロックス株式会社 Generating device of three-dimensional shape data, three-dimensional molding device, and generating program of three-dimensional shape data
US11010931B2 (en) * 2018-10-02 2021-05-18 Tencent America LLC Method and apparatus for video coding
CN111068313B (en) * 2019-12-05 2021-02-19 腾讯科技(深圳)有限公司 Scene update control method and device in application and storage medium
CN111144449B (en) * 2019-12-10 2024-01-19 东软集团股份有限公司 Image processing method, device, storage medium and electronic equipment
CN111070664B (en) * 2019-12-30 2022-03-04 上海联影医疗科技股份有限公司 3D printing slice generation method, device, equipment and storage medium
US11151771B1 (en) 2020-05-07 2021-10-19 Canon Medical Systems Corporation Image data processing method and apparatus
EP4002288A1 (en) * 2020-11-12 2022-05-25 Koninklijke Philips N.V. Methods and systems for rendering representations of subject vasculature
CN112597662A (en) * 2020-12-30 2021-04-02 博锐尚格科技股份有限公司 Method and system for checking correctness and mistakes of building model
CN115953372B (en) * 2022-12-23 2024-03-19 北京纳通医用机器人科技有限公司 Bone grinding image display method, device, equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030223627A1 (en) * 2001-10-16 2003-12-04 University Of Chicago Method for computer-aided detection of three-dimensional lesions
US20050074155A1 (en) * 2001-07-27 2005-04-07 Alyassin Abdalmajeid Musa Method and system for unsupervised transfer function generation for images of rendered volumes
US20060023924A1 (en) * 2004-07-29 2006-02-02 Christian Asbeck Method and apparatus for visualizing deposits in blood vessels, particularly in coronary vessels
US20070008317A1 (en) * 2005-05-25 2007-01-11 Sectra Ab Automated medical image visualization using volume rendering with local histograms
US20070165917A1 (en) * 2005-11-26 2007-07-19 Zhujiang Cao Fully automatic vessel tree segmentation
US20070297560A1 (en) * 2006-03-03 2007-12-27 Telesecurity Sciences, Inc. Method and system for electronic unpacking of baggage and cargo
US20130039558A1 (en) * 2011-08-11 2013-02-14 The Regents Of The University Of Michigan Patient modeling from multispectral input image volumes
US20160180525A1 (en) * 2014-12-19 2016-06-23 Kabushiki Kaisha Toshiba Medical image data processing system and method
US20160361043A1 (en) * 2015-06-12 2016-12-15 Samsung Medison Co., Ltd. Method and apparatus for displaying ultrasound images
US20170061672A1 (en) * 2015-09-01 2017-03-02 Siemens Healthcare Gmbh Semantic cinematic volume rendering

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU758086B2 (en) 1998-02-23 2003-03-13 Algotec Systems Ltd. Raycasting system and method
KR100450823B1 (en) * 2001-11-27 2004-10-01 삼성전자주식회사 Node structure for representing 3-dimensional objects using depth image
US20050043835A1 (en) 2002-09-30 2005-02-24 Medical Modeling Llc Method for design and production of custom-fit prosthesis
US7755625B2 (en) 2005-05-04 2010-07-13 Medison Co., Ltd. Apparatus and method for rendering volume data
US7660461B2 (en) 2006-04-21 2010-02-09 Sectra Ab Automated histogram characterization of data sets for image visualization using alpha-histograms
JP2008018016A (en) 2006-07-12 2008-01-31 Toshiba Corp Medical image processing equipment and method
US8294706B2 (en) 2006-08-03 2012-10-23 Siemens Medical Solutions Usa, Inc. Volume rendering using N-pass sampling
JP2009160306A (en) 2008-01-09 2009-07-23 Ziosoft Inc Image display device, control method of image display device and control program of image display device
EP2526509A1 (en) * 2010-01-22 2012-11-28 The Research Foundation Of The State University Of New York System and method for prostate visualization and cancer detection
US20110227934A1 (en) * 2010-03-19 2011-09-22 Microsoft Corporation Architecture for Volume Rendering
US9401047B2 (en) * 2010-04-15 2016-07-26 Siemens Medical Solutions, Usa, Inc. Enhanced visualization of medical image data
US8379955B2 (en) * 2010-11-27 2013-02-19 Intrinsic Medical Imaging, LLC Visualizing a 3D volume dataset of an image at any position or orientation from within or outside
US8754888B2 (en) 2011-05-16 2014-06-17 General Electric Company Systems and methods for segmenting three dimensional image volumes
IN2014DN06675A (en) 2012-02-17 2015-05-22 Advanced Mr Analytics Ab
US9218524B2 (en) 2012-12-06 2015-12-22 Siemens Product Lifecycle Management Software Inc. Automatic spatial context based multi-object segmentation in 3D images
EP3022525B1 (en) * 2013-07-18 2020-04-29 a.tron3d GmbH Method of capturing three-dimensional (3d) information on a structure
JP6257949B2 (en) 2013-08-06 2018-01-10 東芝メディカルシステムズ株式会社 Image processing apparatus and medical image diagnostic apparatus
EP3072111A4 (en) 2013-11-20 2017-05-31 Fovia, Inc. Volume rendering polygons for 3-d printing
EP2886043A1 (en) * 2013-12-23 2015-06-24 a.tron3d GmbH Method for continuing recordings to detect three-dimensional geometries of objects
US20150320507A1 (en) * 2014-05-09 2015-11-12 Siemens Aktiengesellschaft Path creation using medical imaging for planning device insertion
CN105785462B (en) * 2014-06-25 2019-02-22 同方威视技术股份有限公司 Method for positioning a target in a three-dimensional CT image, and security check CT system
US10002457B2 (en) 2014-07-01 2018-06-19 Toshiba Medical Systems Corporation Image rendering apparatus and method
US10004403B2 (en) 2014-08-28 2018-06-26 Mela Sciences, Inc. Three dimensional tissue imaging system and method
US9704290B2 (en) * 2014-09-30 2017-07-11 Lucasfilm Entertainment Company Ltd. Deep image identifiers
US9964499B2 (en) 2014-11-04 2018-05-08 Toshiba Medical Systems Corporation Method of, and apparatus for, material classification in multi-energy image data
US9552663B2 (en) 2014-11-19 2017-01-24 Contextvision Ab Method and system for volume rendering of medical images
EP3035290B1 (en) 2014-12-17 2019-01-30 Siemens Healthcare GmbH Method for generating a display data set with volume renders, computing device and computer program
JP5957109B1 (en) * 2015-02-20 2016-07-27 株式会社日立製作所 Ultrasonic diagnostic equipment
US10433796B2 (en) * 2015-06-19 2019-10-08 Koninklijke Philips N.V. Selecting transfer functions for displaying medical images
US10282888B2 (en) * 2016-01-28 2019-05-07 Biosense Webster (Israel) Ltd. High definition coloring of heart chambers
US10121277B2 (en) * 2016-06-20 2018-11-06 Intel Corporation Progressively refined volume ray tracing

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10964095B1 (en) * 2020-04-07 2021-03-30 Robert Edwin Douglas Method and apparatus for tandem volume rendering

Also Published As

Publication number Publication date
US10726608B2 (en) 2020-07-28
US20180144539A1 (en) 2018-05-24
CN110249367A (en) 2019-09-17
JP2022043168A (en) 2022-03-15
JP2020503628A (en) 2020-01-30
CN117422812A (en) 2024-01-19
WO2018097881A1 (en) 2018-05-31
EP3545499A4 (en) 2020-06-17
JP7324268B2 (en) 2023-08-09
EP3545499A1 (en) 2019-10-02
CN110249367B (en) 2023-11-07
JP7039607B2 (en) 2022-03-22

Similar Documents

Publication Publication Date Title
US20200265632A1 (en) System and method for real-time rendering of complex data
CN109754361B (en) 3D anisotropic hybrid network: transferring convolved features from 2D images to 3D anisotropic volumes
CN108022238B (en) Method, computer storage medium, and system for detecting object in 3D image
US10582907B2 (en) Deep learning based bone removal in computed tomography angiography
EP3545500B1 (en) System and method for rendering complex data in a virtual reality or augmented reality environment
RU2571523C2 (en) Probabilistic refinement of model-based segmentation
EP3035287A1 (en) Image processing apparatus, and image processing method
EP3493161B1 (en) Transfer function determination in medical imaging
US9129391B2 (en) Semi-automated preoperative resection planning
Cuadros Linares et al. Mandible and skull segmentation in cone beam computed tomography using super-voxels and graph clustering
US9224236B2 (en) Interactive changing of the depiction of an object displayed using volume rendering
Patel et al. Moment curves
Banerjee et al. A semi-automated approach to improve the efficiency of medical imaging segmentation for haptic rendering
EP3975037A1 (en) Providing a classification explanation and a generative function
US20100265252A1 (en) Rendering using multiple intensity redistribution functions
EP3989172A1 (en) Method for use in generating a computer-based visualization of 3d medical image data
AU2019430258B2 (en) VRDS 4D medical image-based tumor and blood vessel AI processing method and product
JP2023161110A (en) Image identification program, image identification method, image identification device, and information processing system
Jung Feature-Driven Volume Visualization of Medical Imaging Data
CN114882163A (en) Volume rendering method, system, apparatus and storage medium

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: HSBC BANK USA, N.A., NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:3D SYSTEMS, INC.;REEL/FRAME:055206/0487

Effective date: 20210201

AS Assignment

Owner name: HSBC BANK USA, N.A., NEW YORK

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NUMBER: 16873739 PREVIOUSLY RECORDED ON REEL 055206 FRAME 0487. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST;ASSIGNOR:3D SYSTEMS, INC.;REEL/FRAME:055358/0891

Effective date: 20210201

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

AS Assignment

Owner name: 3D SYSTEMS, INC., SOUTH CAROLINA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HSBC BANK USA, NATIONAL ASSOCIATION;REEL/FRAME:057651/0374

Effective date: 20210824

STCV Information on status: appeal procedure

Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: TC RETURN OF APPEAL

STCV Information on status: appeal procedure

Free format text: EXAMINER'S ANSWER TO APPEAL BRIEF MAILED

STCV Information on status: appeal procedure

Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS

STCV Information on status: appeal procedure

Free format text: BOARD OF APPEALS DECISION RENDERED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION