US20180144516A1 - Systems and methods for an integrated system for visualizing, simulating, modifying and 3d printing 3d objects - Google Patents

Systems and methods for an integrated system for visualizing, simulating, modifying and 3d printing 3d objects

Info

Publication number
US20180144516A1
Authority
US
United States
Prior art keywords
mask
mask data
data
volume
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/360,313
Other versions
US10275909B2
Inventor
Dan PRI-TAL
Roy PORAT
Oren Kalisman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3D Systems Inc
Original Assignee
3D Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3D Systems Inc filed Critical 3D Systems Inc
Priority to US15/360,313
Priority to PCT/US2017/052690 (published as WO2018097880A1)
Assigned to 3D SYSTEMS, INC. Assignment of assignors' interest (see document for details). Assignors: KALISMAN, Oren; PORAT, Roy; PRI-TAL, Dan
Publication of US20180144516A1
Application granted
Publication of US10275909B2
Assigned to 3D SYSTEMS, INC. Release by secured party (see document for details). Assignor: HSBC BANK USA, NATIONAL ASSOCIATION
Legal status: Active
Adjusted expiration

Classifications

    • G06T 11/008: 2D image generation; reconstruction from projections, e.g. tomography; specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • B33Y 50/00: Additive manufacturing; data acquisition or data processing for additive manufacturing
    • G06T 5/004: Image enhancement or restoration; deblurring, sharpening; unsharp masking
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/187: Image analysis; segmentation or edge detection involving region growing, region merging or connected component labelling
    • G06T 2207/10081: Image acquisition modality; tomographic images; computed x-ray tomography [CT]
    • G06T 2207/10084: Image acquisition modality; tomographic images; hybrid tomography, concurrent acquisition with multiple different tomographic modalities
    • G06T 2207/10088: Image acquisition modality; tomographic images; magnetic resonance imaging [MRI]
    • G06T 2211/416: Image generation; computed tomography; exact reconstruction

Definitions

  • FIG. 2 is a flow diagram 200 illustrating a method for correcting three-dimensional mask data, according to an illustrative embodiment of the invention.
  • the method can involve receiving a single voxel location, mask data of a 3D volume, and a region of interest of the 3D volume (Step 210 ) (e.g., receiving by the post imaging processor 103 , as shown above in FIG. 1 ).
  • the single voxel location can be seed point coordinates (e.g., denoted by $\vec{s}$).
  • the seed point coordinates can be coordinates on a 2D screen or a set of coordinates in a 3D virtual reality space.
  • the seed point can indicate a location on one or more masks to be corrected.
  • the correction can be addition of data into the one or more masks or removal of data from the one or more masks.
  • a determination of whether the correction is to remove data or add data can be based on the seed point $\vec{s}$.
  • Masks can be expressed generally as $B(\vec{x})$.
  • the mask to be modified can have data added to it or removed from it.
  • the mask can be expressed as shown in EQNs. 1 and 2, as follows:
  • $$B(\vec{x}) = B_0(\vec{x}) \quad \text{(adding to the mask, EQN. 1)}, \qquad B(\vec{x}) = \neg B_0(\vec{x}) \quad \text{(removing from the mask, EQN. 2)}$$
  • where $B_0(\vec{x})$ is the mask for the specific region of interest (e.g., an input mask), and $B(\vec{x})$ is either the mask $B_0(\vec{x})$ or the logical negation of the mask $\neg B_0(\vec{x})$, if adding to the mask or removing from the mask, respectively.
  • a user selects the seed point via a user input device (e.g., the user input device 111 , as shown above in FIG. 1 ).
  • the seed point is determined based on a type of the mask, a type of the object, or any combination thereof. For example, assume the object is a wrist and the mask to be corrected represents a vein of the wrist; if portions of the interior of the vein are missing from the mask, the seed point can be set to a location within the missing portion of the interior of the vein.
  • the method can involve determining similarity between any voxel location within the region of interest and one or more voxels in the mask data (Step 220 ) (e.g., determining by the mask corrector 107 as described above in FIG. 1 ).
  • the similarity may be based on one or more feature values of the one or more voxels in the mask data, the single voxel location, the region of interest of the 3D volume, or any combination thereof.
  • the similarity is determined as shown below in EQNs 3-13, as follows:
  • Determining the similarity can involve determining a Euclidean distance (e.g., denoted by $D(\vec{x})$) of each voxel from the seed point $\vec{s}$.
  • the Euclidean distance can be determined as shown in EQN. 3, as follows:
  • $$D(\vec{x}) = \sqrt{(x_1 - s_1)^2 + (x_2 - s_2)^2 + (x_3 - s_3)^2} \qquad \text{(EQN. 3)}$$
  • where $x_1$, $x_2$, and $x_3$ represent the x, y, and z coordinates of a voxel in the region of interest that is compared to the seed point $\vec{s}$, and $s_1$, $s_2$, and $s_3$ represent the x, y, and z coordinates of the seed point.
  • Determining the similarity can involve determining a weight for each voxel based on the Euclidean distance between each voxel and the seed point.
  • Each voxel's weight can be determined as shown below in EQN. 4, as follows:
  • $$W(\vec{x}) = \omega\!\left(D(\vec{x})\right) \qquad \text{(EQN. 4)}$$
  • where $W(\vec{x})$ is the weight of a voxel that is compared to the seed point $\vec{s}$ and $\omega$ is a scalar function that decays as the distance from its origin increases.
  • $\omega$ can be determined as shown in EQN. 5.
  • the similarity measure of a feature $i$ is denoted by $C_i(\vec{x})$.
  • the similarity measure $C_i(\vec{x})$ can be determined as shown in EQNs. 6 through 10, as follows:
  • $$T_i^{\mathrm{out}} = \frac{\sum_{\vec{x}:\,B(\vec{x})=\mathrm{FALSE}} W(\vec{x})\,F_i(\vec{x})}{\sum_{\vec{x}:\,B(\vec{x})=\mathrm{FALSE}} W(\vec{x})}, \qquad T_i^{\mathrm{full}} = \frac{\sum_{\vec{x}} W(\vec{x})\,F_i(\vec{x})}{\sum_{\vec{x}} W(\vec{x})}$$
  • ⁇ i is a feature contribution power with a value of 1
  • F i ( ⁇ right arrow over (x) ⁇ )
  • feature values are feature values as a function of a voxel's coordinates ⁇ right arrow over (x) ⁇ in the region of interest.
  • feature values can include volume intensity values, a response of a 3D volume to spatial filters, a gradient magnitude of the volume intensity values, and a Frangi filter response, all of which are known in the art.
  • Determining the similarity can be based on $C_i(\vec{x})$.
  • the similarity can be determined as shown in EQNs. 11-13, as follows:
  • $$G_2(\vec{x}) = \left[\left(\mathrm{Geo}\!\left(G_1(\vec{x}),\, B(\vec{x})\right)\right)^{\gamma_2} \cdot \left(1 + D(\vec{x})\right)^{\gamma_3}\right]^{\gamma_4} \qquad \text{(EQN. 12)}$$
  • $$G(\vec{x}) = G_1(\vec{x}) \cdot G_2(\vec{x}) \qquad \text{(EQN. 13)}$$
  • $G(\vec{x})$ is the similarity
  • $\gamma_1$ is a first feature metric power
  • $\gamma_2$ is a second feature metric power
  • $\gamma_3$ is a third feature metric power
  • $\gamma_4$ is a fourth feature metric power
  • all of the metric powers can be real numbers
  • $NZ(\cdot)$ is a normalization function, which may be a linear transformation that transforms the range of any scalar field to a normalized range [0, 1]
  • $\mathrm{Geo}$(volume input, logical mask input) is a geodesic distance function as implemented in MATLAB.
  • $\gamma_1$ has a value of 2.
  • $\gamma_2$ has a value of 0.5.
  • $\gamma_3$ has a value of 0.5 if voxels are being added to the mask and a value of 2 if voxels are being removed from the mask.
  • $\gamma_4$ has a value of 1 if voxels are being added to the mask and a value of 2 if voxels are being removed from the mask.
  • the method can involve modifying the mask data based on the similarity (Step 230 ) (e.g., modifying by the mask corrector 107 as described above in FIG. 1 ).
  • Modifying the mask based on the similarity value can involve determining a threshold based on the seed position.
  • the threshold can be determined as shown in EQN. 14, based on a scalar value and the similarity evaluated at the seed position, where:
  • $g_0$ is the threshold
  • the scalar value is, for example, equal to 10 when adding data to the mask and equal to 100,000 when removing data from the mask
  • $G(\vec{s})$ is the similarity evaluated at the seed position $\vec{s}$.
  • the modified mask can be determined as shown in EQN. 15, as follows:
  • $$M_0 = \left(G(\vec{x}) < g_0\right) \wedge \neg B(\vec{x}) \qquad \text{(EQN. 15)}$$
  • where $M_0$ is the modified mask containing all voxels with a similarity measure less than the threshold $g_0$ and outside of $B(\vec{x})$
  • $G(\vec{x})$ is the similarity as described above in EQN. 13
  • $\neg B(\vec{x})$ is the logical negation of the mask $B(\vec{x})$.
  • determining the modified mask also involves dilating the modified mask by one voxel in all directions.
  • the modified mask can be closed as shown below in EQN. 16, as follows:
  • $$M_1 = \text{morphological closing with radius 1 of } M_0 \qquad \text{(EQN. 16)}$$
  • determining the modified mask can also involve reducing the modified mask M 1 to include only data that was not included in the original mask.
  • the modified mask $M_1$ can be reduced as shown below in EQN. 17, as follows:
  • $$M_2 = M_1 \wedge \neg B(\vec{x}) \qquad \text{(EQN. 17)}$$
  • where $M_2$ is the portion of $M_1$ having data that is not in $B(\vec{x})$.
  • Modifying the mask based on the similarity can also involve determining an output mask, as shown below in EQN. 18:
  • $$\text{Output Mask} = \begin{cases} M_f \vee B(\vec{x}) & \text{adding to the mask} \\ \neg\left(M_f \vee B(\vec{x})\right) & \text{removing from the mask} \end{cases} \qquad \text{(EQN. 18)}$$
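Pulling EQNs. 3-18 together, the following is a deliberately simplified sketch of the correction step in Python/NumPy. It assumes a Gaussian decay for the weight, uses volume intensity as the only feature, replaces the unavailable EQN. 11 with an ad hoc dissimilarity term, uses a single threshold scale instead of the patent's separate adding/removing scalars, and stands in $M_f$ with the connected component of $M_2$ containing the seed. It illustrates the general shape of the computation, not the patent's exact formulation.

```python
import numpy as np
from scipy import ndimage

def correct_mask(volume, mask, seed, removing=False, scale=10.0, sigma=15.0):
    """Simplified seed-based mask correction. `seed` is a tuple of integer voxel indices.
    Add voxels similar to the seed to the mask, or (removing=True) remove them by
    working on the negated mask."""
    B = ~mask if removing else mask                        # EQNs. 1-2: removal uses the negated mask

    # EQN. 3: Euclidean distance of every voxel from the seed.
    idx = np.indices(volume.shape, dtype=np.float32)
    D = np.sqrt(sum((idx[k] - seed[k]) ** 2 for k in range(3)))

    # EQNs. 4-5: distance-based weight; a Gaussian decay is assumed here.
    W = np.exp(-(D ** 2) / (2.0 * sigma ** 2))

    # Single feature (intensity): weighted mean outside the mask, in the spirit of T_i^out.
    t_out = (W * volume)[~B].sum() / max(W[~B].sum(), 1e-6)

    # Ad hoc stand-in for the combined similarity G(x) of EQNs. 6-13 (smaller = more seed-like).
    G = np.abs(volume - t_out) * (1.0 + D)

    g0 = scale * (np.abs(volume[seed] - t_out) + 1e-6)     # threshold from the seed, cf. EQN. 14
    M0 = (G < g0) & ~B                                     # EQN. 15: candidate voxels outside B
    M1 = ndimage.binary_closing(M0)                        # EQN. 16: morphological closing, radius 1
    M2 = M1 & ~B                                           # EQN. 17: keep voxels not already in B

    # M_f: connected component of M2 that contains the seed voxel.
    labels, _ = ndimage.label(M2)
    seed_label = labels[seed]
    Mf = (labels == seed_label) if seed_label != 0 else np.zeros_like(M2)

    out = Mf | B                                           # EQN. 18, adding branch
    return ~out if removing else out                       # EQN. 18, removing branch
```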
  • FIGS. 3A-5B are examples of masks that are corrected via adding data to or removing data from the mask, according to illustrative embodiments of the invention.
  • FIG. 3A is an example of a mask that has a seed point 310 where there is no data.
  • FIG. 3B is an example of the correction that includes adding data in a region around the seed point 310 .
  • FIG. 4A is an example of a mask that includes teeth with a seed point 410 where there is data present.
  • FIG. 4B is an example of the correction that includes removing data in a region around the seed point 410 .
  • FIG. 5A is an example of a mask that has a seed point 510 where data is missing.
  • FIG. 5B is an example of the correction that includes adding data in a region around the seed point 510 .
  • Comprise, include, and/or plural forms of each are open ended and include the listed parts and can include additional parts that are not listed. And/or is open ended and includes one or more of the listed parts and combinations of the listed parts.

Abstract

Methods and systems for visualizing, simulating, modifying and/or 3D printing objects are provided. The system is an end-to-end system that can take as input 3D objects and allow for various levels of user intervention to produce the desired results.

Description

    FIELD OF THE INVENTION
  • The invention relates generally to systems and methods for visualizing, simulating and 3D printing 3D objects. In particular, the invention relates to correcting three-dimensional (“3D”) mask data, for example by adding missing portions of masks or by removing unwanted portions of masks.
  • BACKGROUND
  • Current systems can allow for visualizing three-dimensional (“3D”) objects obtained with, for example, imaging devices, other systems and/or inputs. Currently, 3D objects can be processed for 3D printing, visualized on a two-dimensional (“2D”) screen, and/or visualized in augmented and/or virtual reality. Typically the 3D object is manually processed through various systems such that it can be 3D printed, visualized on a 2D screen, and/or visualized in augmented and/or virtual reality. For example, in order to 3D print a 3D object, the 3D object can be segmented, masks created, and/or a mesh created.
  • Typically, manual transformation can require that a user engage multiple systems, provide inputs/outputs for each system, and/or understand how to run each system. This can be time consuming and unrealistic for a medical professional to perform. Manual processing of 3D objects (e.g., transforming 3D objects and moving the data between systems) can contribute to introduction of errors.
  • Therefore, it can be desirable to provide an end-to-end system that can allow 3D objects to be rendered for a 2D screen, rendered for virtual reality and/or 3D printed. It can also be desirable to provide a system for volume rendering that has sufficient speed such that a user can zoom into and out of the visualized volume, modify the visualized volume, create one or more masks from the 3D object and/or create one or more meshes from the 3D object.
  • The one or more masks can be based on a type of the object of the 3D object. For example, for a CT scan of a heart, the corresponding masks can include a right ventricle mask, a left ventricle mask, a right atrium mask, a left atrium mask, an aorta mask, a pulmonary artery mask, a blood volume mask and/or a soft tissue mask. A portion of the CT scan data can correspond to each mask and can be assigned to its respective mask accordingly.
  • The masks can be used to visualize the 3D object on a two-dimensional (“2D”) screen. In some scenarios, the 3D object can be missing data or can include extraneous data. In both cases, a mask created from that imaging data, when rendered into a format that is suitable for viewing on a 2D screen, can appear erroneous to the viewer. In some systems, the masks can be used as a basis for 3D printing. In these scenarios, a 3D printed model of mask data that is missing portions or includes extraneous portions can result in a 3D printed model that does not fully represent the object. For medical applications, a doctor can use the 3D printed model or the visualized mask data to learn about/practice operations on a body part of a particular patient. If the mask data is missing portions or includes extraneous portions, the doctor may not know this is an error, and may base treatment of a patient on this erroneous data.
  • In industrial applications, missing mask data can result in many errors, for example, erroneous 3D printed objects. Therefore, it can be desirable to correct masks. Further it is desirable to correct masks to improve precision of the mask.
  • Current methods for correcting mask data typically can involve a user manually correcting the mask. For example, for 3D imaging data, a user can modify the 3D imaging data slice by slice. Current methods can require an in-depth understanding of the 3D object and/or computer image rendering by the user. Typically the person viewing the data (e.g., a doctor or an industrial process engineer) may not have a sufficient level of understanding to modify the data. Thus, manual correction can typically require two people.
  • These methods can also contribute to imprecision in the mask data due to, for example, human error. For example, a user may accidentally correct a portion of the 3D object that is not actually erroneous, resulting in further errors in the masks. In addition, the manual process of correcting the data can increase an amount of data used by the computer overall. For example, each slice that is modified can increase the amount of data. Thus, manual correction is impracticable.
  • SUMMARY OF THE INVENTION
  • One advantage of the invention can include providing a more accurate representation of an object being imaged. Another advantage of the invention can include a reduction in an amount of data needed for correcting masks. Another advantage of the invention can include increasing a speed at which corrections can be made. Another advantage of the invention can include a reduction in cost, due to, for example, requiring less time, fewer people, and more accuracy in mask correction. Another advantage of the invention can include providing an end-to-end system that can allow 3D objects to be processed such that they can be 3D printed, visualized on a 2D screen, and/or visualized in augmented and/or virtual reality without user intervention.
  • According to embodiments of the present invention, there is provided a method for correcting mask data, a non-transient computer readable medium containing program instructions for causing a computer to perform the method, and a system for visualizing and/or 3D printing 3D objects.
  • According to embodiments of the present invention, the method can include receiving a single voxel location, mask data of a 3D volume, and a region of interest of the 3D Volume. The single voxel location can be input by a user via clicking on a displayed image of the mask data.
  • The similarity between any voxel in the region of interest and one or more voxels in the mask data may be determined, and the mask may be modified based on the similarity. The similarity may be based on one or more feature values of the one or more voxels in the mask data, the single voxel location, and the region of interest of the 3D volume. The 3D volume data can be from a CT scan, a CBCT scan, an MRI image, or any combination thereof.
  • In some embodiments of the present invention, modifying the mask data includes removing one or more voxels from the mask data that are a connected component of the single voxel location and above a similarity threshold. Modifying the mask data may include adding one or more voxels into the mask data that are a connected component of the single voxel location and below a similarity threshold. The similarity threshold can be based on the single voxel location. In some embodiments of the invention, the similarity threshold can be further based on a distance between each voxel in the mask data of the 3D volume and the single voxel location.
  • In some embodiments of the invention, one or more feature values can be based on a volume intensity value, a response of a 3D volume to one or more spatial filters, a gradient magnitude of the volume intensity value, a Frangi filter response, or any combination thereof.
  • In some embodiments of the invention, the system includes at least one processor and at least one 3D printer. The processor can receive the 3D object and a process flow for the 3D object. The process flow can be any combination of segmentation, mask modification, meshing, planning, designing, simulation, virtual reality, augmented reality, preparing for 3D printing, previewing for 3D printing, and/or 3D printing the 3D object.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other objects, features, and advantages of the present invention, as well as the invention itself, are more fully understood from the following description of various embodiments, when read together with the accompanying drawings.
  • FIG. 1 is a diagram of a system for correcting three-dimensional mask data, according to an illustrative embodiment of the invention.
  • FIG. 1A is a diagram for an end-to-end system for 3D object handling, according to an illustrative embodiment of the invention.
  • FIG. 1B is an example of inputs/outputs of the system of FIG. 1A.
  • FIG. 2 is a flow diagram illustrating a method for correcting three-dimensional mask data, according to an illustrative embodiment of the invention.
  • FIGS. 3A-5B are examples of masks that are corrected via adding or removing from the mask, according to illustrative embodiments of the invention.
  • It is appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
  • DETAILED DESCRIPTION
  • In the following description, various aspects of the present invention are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it is apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well known features may be omitted or simplified in order not to obscure the present invention.
  • Generally, 3D objects can be assigned to masks. The masks can be used to visualize (e.g., on a 2D screen or in virtual reality) and/or 3D print all or a portion of the 3D object. Each mask can include extraneous data or be missing data. Each mask can be corrected by adding data to or removing data from the mask.
  • Correcting the mask can involve identifying a region in the mask to correct (e.g., identifying a region of interest). The region of interest can be a subgroup of the entire 3D data. For example, one or more masks can be displayed to a user on a screen or in a virtual reality system, and the user can select the region of interest (e.g., by clicking on the display with a mouse, or pointing to an area in virtual reality). The region of interest can include one mask or multiple masks. If the region of interest includes multiple masks, each mask in the region of interest can be corrected.
  • Correcting the mask can involve modifying at least one mask included in the region of interest. In some embodiments, removing one or more voxels from the mask data can involve performing a logical negation of the mask and then performing substantially the same steps (or the same steps) as are performed for adding to the mask.
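As a minimal illustration of that reuse, the sketch below (Python, NumPy Boolean arrays) routes both operations through a single growth step; `grow_region` is a hypothetical stand-in for the addition steps detailed in EQNs. 3-18 above, not a function named in the patent.

```python
import numpy as np

def modify_mask(mask, grow_region, removing=False):
    """Removal reuses the addition path: negate the mask, 'add' to the negated mask,
    then negate the result back. `grow_region(mask)` is a placeholder that returns
    the Boolean array of voxels to be added."""
    work = ~mask if removing else mask
    result = work | grow_region(work)
    return ~result if removing else result
```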
  • FIG. 1 is a diagram of a system 100 for correcting three-dimensional mask data, according to an illustrative embodiment of the invention. The system 100 can include an imaging device 101, a post imaging processor 103, a user input device 111, and a display 109. In some embodiments of the invention, the display 109 is a part of user input device 111. The post imaging processor 103 may include a mask generator 105, a mask corrector 107, and a memory 113 to store imaging data, masks, and mask data. A mask may be, for example, a data construct including, for example, Boolean (e.g., 1/0, yes/no) values for each voxel or data point in a larger data set. A mask may include visual markers (e.g., color and/or patterns) that specify appearance of the voxel. The mask, when applied to the larger data set, may indicate that a voxel is marked or is not marked.
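A minimal sketch of one way such a mask could be represented, assuming NumPy Boolean arrays (the array shapes, values, and color tuple are illustrative, not taken from the patent):

```python
import numpy as np

# Illustrative 3D volume (e.g., CT intensities) and a Boolean mask of the same shape.
rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 64, 64)).astype(np.float32)

mask = np.zeros(volume.shape, dtype=bool)   # False = voxel not marked, True = voxel marked
mask[20:40, 20:40, 20:40] = True            # mark a sub-region of the volume

marked_values = volume[mask]                # applying the mask to the larger data set
mask_color = (255, 0, 0)                    # a visual marker (e.g., red) used when displaying the mask
print(marked_values.size, mask_color)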
  • The components of system 100 can communicate either by wire or wirelessly. In some embodiments the components of system 100 can be implemented on a single computer, or across different computers.
  • The imaging device 101 may transmit image data to the post imaging processor 103. The transmitted image data can be of a 3D volume of an object. In some embodiments of the invention, the imaging device 101 is a 3D Computed Tomography (“CT”) device, a Cone Beam Computed Tomography (“CBCT”) device, a Magnetic Resonance Imaging (“MRI”) device, or any combination thereof.
  • The user input device 111 can be used to input data, and the user input device 111 can communicate with the post imaging processor 103. The user input device 111 can be a personal computer, desktop computer, mobile computer, laptop computer, or notebook computer, or any other suitable device such as a cellular telephone, personal digital assistant (PDA), video game console, etc.
  • The user input device 111 and display 109 can be used to select a region of interest in the 3D volume. The region of interest in the 3D volume may be transmitted from the user input device 111 to the post imaging processor 103. In some embodiments of the invention, the region of interest is about [50×50×50] voxels. In some embodiments of the invention, the user selects the size of the region of interest.
  • The user input device 111 and display 109 can be used to select a single voxel location within a mask. In some embodiments of the invention, the single voxel location is input, via the user input device 111, e.g., by a user via clicking on an image of the mask data displayed on the display 109.
  • In some embodiments of the invention, after inputting the single voxel location via the user input device 111, the post imaging processor 103 may determine a location of the region of interest based on the single voxel location, an object type of the data and/or screen resolution. In various embodiments of the invention, the post imaging processor 103 determines the single voxel location based on regions that are expected to be present or absent in the masks, based on mask type and/or object type.
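One plausible way to derive the region of interest from the single voxel location is to clamp a fixed-size window (about 50×50×50 voxels, per the example above) to the volume bounds; the sketch below is an assumption about how this could be done, not the patent's implementation.

```python
import numpy as np

def region_of_interest(volume, seed, size=50):
    """Return index slices for a roughly size^3 window centered on the seed voxel,
    clamped to the volume bounds, together with the cropped sub-volume."""
    half = size // 2
    slices = tuple(
        slice(max(0, s - half), min(dim, s + half))
        for s, dim in zip(seed, volume.shape)
    )
    return slices, volume[slices]

volume = np.zeros((512, 512, 300), dtype=np.float32)
roi_slices, roi = region_of_interest(volume, seed=(100, 250, 20))
print(roi.shape)  # about (50, 50, 50); smaller near the volume border
```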
  • In some embodiments of the invention, the display 109 is a touch screen. In some embodiments of the invention, the user visualizes the 3D data via virtual reality glasses. In these embodiments, the user may input/select the region of interest in the 3D volume and/or the single voxel location in virtual reality.
  • The mask generator 105 of the post imaging processor 103 can create mask data from the image data received from the imaging device 101. For a particular object type, a mask set can be retrieved from memory and/or input by a user. The mask set can be one or more masks that can be used to represent the object type. The mask data can be created by segmenting the 3D object into the mask. The segmenting can be performed as is known in the art. The mask generator 105 of the post imaging processor 103 can transmit the mask data to the screen. The mask generator 105 can transmit all or a portion of the mask data to the mask corrector 107 of the post imaging processor 103. For example, if a region of interest is specified, the mask generator 105 can transmit the mask data to the mask corrector 107.
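The patent leaves the segmentation step to known methods; purely as an illustration, a simple intensity threshold followed by keeping the largest connected component could populate one mask (the function name and threshold values below are assumptions).

```python
import numpy as np
from scipy import ndimage

def generate_mask(volume, low, high):
    """Toy stand-in for the mask generator: threshold the volume, then keep only
    the largest connected component as the mask for one structure."""
    candidate = (volume >= low) & (volume <= high)
    labels, count = ndimage.label(candidate)
    if count == 0:
        return np.zeros(volume.shape, dtype=bool)
    sizes = ndimage.sum(candidate, labels, index=range(1, count + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# e.g., a rough blood-volume mask from contrast-enhanced CT (the HU range is illustrative):
# blood_mask = generate_mask(ct_volume, low=200, high=600)
```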
  • The mask corrector 107 can modify the mask data created by the mask generator 105 based on the inputs of the imaging device 101, the mask generator 105, and/or the user input device 111. In some embodiments of the invention, the mask corrector 107 modifies mask data. In some embodiments of the invention, the mask corrector 107 receives the mask data from the mask generator 105. In some embodiments, the mask corrector 107 receives the 3D volume directly from the imaging device 101.
  • The mask corrector 107 can modify the mask data, based on a similarity between a single voxel location and one or more voxels in the mask data, to create a modified mask. In various embodiments of the invention, the similarity is based on one or more feature values of the one or more voxels in the mask data, the single voxel location, and/or the region of interest of the 3D volume.
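The feature values mentioned here (volume intensity, spatial-filter responses, gradient magnitude, Frangi filter response) can be computed per voxel; the sketch below uses SciPy and scikit-image with illustrative parameters, and assumes a scikit-image version whose `frangi` filter accepts 3D input.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import frangi

def feature_maps(roi):
    """Per-voxel feature values F_i(x) over a region of interest."""
    return {
        "intensity": roi,                                                # volume intensity values
        "smoothed": ndimage.gaussian_filter(roi, sigma=1.0),             # response to a spatial filter
        "gradient_magnitude": ndimage.gaussian_gradient_magnitude(roi, sigma=1.0),
        "vesselness": frangi(roi),                                       # Frangi filter response
    }
```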
  • The post imaging processor 103 can store the corrected mask in the memory 113. The post imaging processor 103 can transmit the mask corrected by mask corrector 107 to the display 109 and/or virtual reality glasses. In some embodiments, the post imaging processor 103 transmits the corrected mask to a system that can convert the masks into data that can be used for 3D printing.
  • FIG. 1A shows a block diagram of an end-to-end system for 3D object handling (e.g., visualization, simulation, modification, creation, and/or 3D printing) according to some embodiments of the invention. The system for volume rendering shown in FIG. 1A can be an end-to-end system that can take as input 3D objects and produce a visualization on a screen or in virtual reality, and/or produce a 3D printed object of the 3D object. The 3D object can include any data that is representative of a 3D object. The 3D objects can include 3D imaging data, mesh data, volumetric objects, polygon mesh objects, point clouds, functional representations of 3D objects, CAD files, 3D PDF files, STL files, and/or any inputs that can represent a 3D object. The 3D imaging data can include medical imaging data, including Computed Tomography (“CT”) imaging data, Cone Beam Computed Tomography (“CBCT”) imaging data, Magnetic Resonance Imaging (“MRI”) imaging data and/or MRA imaging data (e.g., MRI with a contrast agent), or ultrasound imaging data. The 3D objects can be of anatomy (e.g., complex anatomy), industrial data, or any 3D object.
  • The system can include a project/patient management controller 121, a Digital Imaging and Communications in Medicine (DICOM) management module 123, a segmentation module 125, a meshing module 127, a planning module 129, a design module 131, a simulation/VR/AR module 133, a preprinting module 135, a previewing module 137 and/or a 3D print module 139. The project/patient management controller 121 can receive inputs 141. The inputs 141 can be user inputs or data from a database.
  • Turning to FIG. 1B, FIG. 1B shows an example set of inputs 170 and outputs 173, according to illustrative embodiments of the invention. As shown in FIG. 1B, inputs 170 can include scan data, 3D objects, 2D objects, metadata, flow description, and/or printer parameters. The scan data can be 3D imaging data, and as described above, can be CT data, CBCT data, MRI data, and/or MRA data. In some embodiments, the scan data is any 3D imaging data as is known in the art. In some embodiments, the metadata includes object identification data (e.g., a patient indicator), time of the scan, parameters used to take the scan and/or any other metadata as is known in the art. The flow description can be a specification of which modules in the system to use and/or the order in which to use the modules. The flow description can include any number of the modules in any combination. As is apparent to one of ordinary skill, some flows may not be possible (e.g., a flow that skips the module that creates masks before making a mesh). In some embodiments, the system can transmit, to the user, an error message based on the user's inputs. The printer parameters can include parameters that are specific to the particular 3D printer the current scan data is to be printed with. The 2D objects can be any documents input to the system, such as JPEGs and/or PDFs.
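A small sketch of how a flow description might be represented and sanity-checked against impossible orderings such as the mask-skipping example above; the module names and prerequisite table are assumptions for illustration, not the patent's data model.

```python
# Hypothetical prerequisite table: a module may only appear after the modules it depends on.
PREREQUISITES = {
    "meshing": {"segmentation"},      # a mesh is built from masks, so segmentation must come first
    "preprinting": {"meshing"},
    "previewing": {"preprinting"},
    "print": {"previewing"},
}

def validate_flow(flow):
    """Return an error message if a module appears before one of its prerequisites, else None."""
    seen = set()
    for module in flow:
        missing = PREREQUISITES.get(module, set()) - seen
        if missing:
            return f"flow error: '{module}' requires {sorted(missing)} earlier in the flow"
        seen.add(module)
    return None

print(validate_flow(["segmentation", "meshing", "preprinting", "previewing", "print"]))  # None
print(validate_flow(["meshing", "segmentation"]))  # reports the impossible ordering
```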
  • The outputs 173 can include scan data, VR/AR, metadata, case report, a 3D printed object, a design (e.g., modified input data), a plan, a simulation, 3D objects and/or 2D objects.
  • Turning back to FIG. 1A, the DICOM management module 123 can keep track of and/or manage DICOM data within the system. For example, the DICOM management module 123 can keep track of the DICOM data as it is imported, exported and/or explored within the system. The DICOM management module 123 can modify/update the DICOM metadata, export output data into a DICOM format and/or attach case data and/or other DICOM data to other metadata within the system. As is apparent to one of ordinary skill in the art, the DICOM management module 123 can be replaced with an imaging data module for 3D objects generally (e.g., industrial applications).
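For concreteness, reading a DICOM series into a 3D volume might look like the following sketch using the third-party pydicom library; the directory layout, file extension, and sorting by the ImagePositionPatient tag are assumptions that vary by modality.

```python
import pathlib
import numpy as np
import pydicom

def load_dicom_series(directory):
    """Read all DICOM slices in a directory and stack them into a 3D volume,
    ordered by the slice position along the patient z axis."""
    slices = [pydicom.dcmread(p) for p in sorted(pathlib.Path(directory).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    return volume, slices[0]   # the first dataset keeps metadata (patient ID, scan time, ...)
```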
  • The segmentation module 125 can receive a 3D object and DICOM data from the DICOM module 123. The DICOM data can include CT data (e.g., medical and/or industrial), MR data, CBCT data, any series of images, a JPEG or TIFF series, or any combination thereof. As is apparent to one of ordinary skill in the art, the DICOM data can include any data that represents a 3D object. The segmentation module 125 can segment the 3D object based on a type of the object of the 3D object and/or the DICOM data.
  • For example, if the type of object is a heart, then the segments can include a right ventricle, a left ventricle, a right atrium, a left atrium, blood vessels, and/or an aorta. The segmenting can involve segmenting the 3D object into one or more masks. The one or more masks can be stored in a mask database. The one or more masks can be input by a user. Continuing with the heart example, the one or more masks can be a right ventricle mask, a left ventricle mask, a right atrium mask, a left atrium mask, a blood vessels mask, and/or an aorta mask. The one or more masks can be populated with the 3D object data that corresponds to the particular mask type.
  • The meshing module 127 can receive the 3D object, the DICOM data and/or the one or more masks that correspond to the 3D object. The meshing module 127 can create a mesh for the 3D object. The mesh can be based on the type of object. For example, if the object type is a heart, a heart mesh can be retrieved from memory. The heart mesh that is retrieved can be modified based on the 3D object and the DICOM data.
  • The planning module 129 can receive the 3D object, the DICOM data, the one or more masks and/or the mesh. The planning module 129 can receive as input one or more scenarios within which the 3D object can be visualized. For example, patient-specific 3D object data can be input into the system, and a user (e.g., a doctor) can simulate the surgery on the patient's specific data. In another example, a user (e.g., a doctor) can move parts around to simulate and/or optimize a face reconstruction surgery.
  • The design module 131 can receive the 3D object, the DICOM data, metadata, the one or more masks, and/or the mesh. The design module 131 can allow creation of models for general use, or of tools such as surgical guides that are based on the patient-specific anatomy shown in the 3D object, for a surgeon to use during surgery. The design module 131 can include tools as are known in the art (e.g., CAD tools) for object creation.
  • The simulation/virtual reality module 133 can receive the 3D object, the DICOM data, the one or more masks and/or the mesh. The simulation/virtual reality module can transform its input into a format that is accepted by virtual reality glasses.
  • The 3D object can be 3D printed. The preprinting module 135, previewing module 137 and print module 139 can receive as input a mesh corresponding to the 3D object. The preprinting module 135 can determine whether or not the input can be 3D printed (e.g., based on wall thickness analysis and/or strength analysis). In some embodiments, the preprinting module 135 can smooth, simplify, close, fix and/or add support to the data to be printed. The previewing module 137 can provide a visualized version of the expected outcome of 3D printing the data to be printed. This can allow, for example, identification and/or correction of any errors that are likely to occur based on the 3D printing of the data. The print module 139 can be the actual software of the 3D printer. In this manner, the system can seamlessly integrate with current 3D printers.
  • The system can include a project/patient management module 121. The project/patient management module 121 can receive as input a desired task of a user (e.g., view the 3D object in virtual reality, 3D print a mesh of the 3D object, and/or visualize the imaging data). Based on the desired task, the project/patient management module 121 can instantiate a subset of the modules shown in FIG. 1A to accomplish the desired task. The project/patient management module 121 can evaluate a flow specified by the input and execute the flow, and/or determine modifications to the flow, based on, for example, the 3D image data.
  • In some embodiments, the system is end-to-end, such that a user can input the 3D object, specify whether the data is to be visualized and/or 3D printed, and the system performs the visualization/3D print. In some embodiments, during visualization a user can modify the visualized object. The user can modify (e.g., recolor, zoom, stretch, etc.) an image of the segmented data, one or more masks, and/or the mesh. The user can select to view/modify all masks or a subset of masks.
  • FIG. 2 is a flow diagram 200 illustrating a method for correcting three-dimensional mask data, according to an illustrative embodiment of the invention.
  • The method can involve receiving a single voxel location, mask data of a 3D volume, and a region of interest of the 3D volume (Step 210) (e.g., receiving by the post imaging processor 103, as shown above in FIG. 1).
  • The single voxel location can be seed point coordinates (e.g., denoted by $\vec{s}$). The seed point coordinates can be coordinates on a 2D screen or a set of coordinates in a 3D virtual reality space. The seed point can indicate a location on one or more masks to be corrected. The correction can be addition of data into the one or more masks or removal of data from the one or more masks. A determination of whether the correction is to remove data or add data can be based on the seed point $\vec{s}$.
  • Masks can be expressed generally as $B(\vec{x})$. The mask to be modified can have data added to it or removed from it. In some embodiments, the mask can be expressed as shown in EQNs. 1 and 2, as follows:

  • Adding to the mask: $B(\vec{x}) = B_0(\vec{x})$  EQN. 1

  • Removing from the mask: $B(\vec{x}) = \lnot B_0(\vec{x})$  EQN. 2
  • where $B_0(\vec{x})$ is the mask for the specific region of interest (e.g., an input mask), and $B(\vec{x})$ is either the mask $B_0(\vec{x})$ or the logical negation of the mask, $\lnot B_0(\vec{x})$, when adding to the mask or removing from the mask, respectively.
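  • A minimal sketch of EQNs. 1 and 2, under the assumption (made only for this illustration) that masks are boolean numpy volumes:

    import numpy as np

    def working_mask(b0: np.ndarray, adding: bool) -> np.ndarray:
        # EQN. 1: when adding, work with the input mask B0 itself.
        # EQN. 2: when removing, work with its logical negation ~B0.
        return b0 if adding else ~b0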
  • In some embodiments, a user selects the seed point via a user input device (e.g., the user input device 111, as shown above in FIG. 1). In some embodiments, the seed point is determined based on a type of the mask, a type of the object, or any combination thereof. For example, assume the object is a wrist and the mask to be corrected represents a vein of the wrist; if portions of the interior of the vein are missing from the mask, the seed point can be set to a location within the missing portion of the interior of the vein.
  • The method can involve determining similarity between any voxel location within the region of interest and one or more voxels in the mask data (Step 220) (e.g., determining by the mask corrector 107 as described above in FIG. 1). The similarity may be based on one or more feature values of the one or more voxels in the mask data, the single voxel location, the region of interest of the 3D volume, or any combination thereof. In some embodiments, the similarity is determined as shown below in EQNs. 3-13, as follows:
  • Determining the similarity can involve determining a Euclidean distance (e.g., denoted by $D(\vec{x})$) of each voxel from the seed point $\vec{s}$. The Euclidean distance can be determined as shown in EQN. 3, as follows:

  • $D(\vec{x}) = \sqrt{(x_1 - s_1)^2 + (x_2 - s_2)^2 + (x_3 - s_3)^2}$  EQN. 3
  • where $x_1$, $x_2$, and $x_3$ represent the x, y, and z coordinates of a voxel in the region of interest that is compared to the seed point $\vec{s}$, and $s_1$, $s_2$, and $s_3$ represent the x, y, and z coordinates of the seed point.
  • Determining the similarity can involve determining a weight for each voxel based on the Euclidean distance between each voxel and the seed point. Each voxel's weight can be determined as shown below in EQN. 4, as follows:

  • $W(\vec{x}) = \omega(D(\vec{x}))$  EQN. 4
  • where $W(\vec{x})$ is the weight of a voxel that is compared to the seed point $\vec{s}$, and $\omega$ is a scalar function that decays as the distance from its origin increases. For example, in some embodiments, $\omega$ can be determined as shown in EQN. 5, as follows:
  • $\omega(y) = \dfrac{1}{y}$  EQN. 5
  • where y is any real number larger than zero.
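  • A minimal sketch of EQNs. 3-5, assuming the region of interest is a 3D numpy volume and the seed is a (z, y, x) voxel index; the eps guard at the seed voxel (where $D = 0$) is an addition of this illustration, not part of the equations above:

    import numpy as np

    def distance_and_weights(shape, seed, eps=1e-6):
        # D(x): Euclidean distance of every voxel in the region of interest from the seed (EQN. 3).
        grid = np.indices(shape, dtype=float)
        seed = np.asarray(seed, dtype=float).reshape(3, 1, 1, 1)
        d = np.sqrt(((grid - seed) ** 2).sum(axis=0))
        # W(x) = omega(D(x)) with omega(y) = 1/y (EQNs. 4-5); eps avoids division by zero at the seed.
        w = 1.0 / np.maximum(d, eps)
        return d, w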
  • In some embodiments, the similarity measure of a feature $i$ is denoted by $C_i(\vec{x})$. The similarity measure $C_i(\vec{x})$ can be determined as shown in EQNs. 6 through 10, as follows:
  • $T_i^{\mathrm{in}} = \dfrac{1}{2}\left(\dfrac{\sum_{\{\vec{x}\,|\,B(\vec{x})=\mathrm{TRUE}\}} W(\vec{x})\,F_i(\vec{x})}{\sum_{\{\vec{x}\,|\,B(\vec{x})=\mathrm{TRUE}\}} W(\vec{x})} + F_i(\vec{s})\right)$  EQN. 6
  • $T_i^{\mathrm{out}} = \dfrac{\sum_{\{\vec{x}\,|\,B(\vec{x})=\mathrm{FALSE}\}} W(\vec{x})\,F_i(\vec{x})}{\sum_{\{\vec{x}\,|\,B(\vec{x})=\mathrm{FALSE}\}} W(\vec{x})}$  EQN. 7
  • $T_i^{\mathrm{full}} = \dfrac{\sum_{\{\vec{x}\}} W(\vec{x})\,F_i(\vec{x})}{\sum_{\{\vec{x}\}} W(\vec{x})}$  EQN. 8
  • $\lambda_i = \left(\dfrac{T_i^{\mathrm{in}} - T_i^{\mathrm{out}}}{T_i^{\mathrm{full}}}\right)^{\gamma_i}$  EQN. 9
  • $C_i(\vec{x}) = \begin{cases} \max\left(0,\ \lambda_i \times (T_i^{\mathrm{in}} - F_i(\vec{x}))\right) & T_i^{\mathrm{in}} > T_i^{\mathrm{out}} \wedge B(\vec{x}) = \mathrm{FALSE} \\ \max\left(0,\ \lambda_i \times (F_i(\vec{x}) - T_i^{\mathrm{in}})\right) & T_i^{\mathrm{in}} \le T_i^{\mathrm{out}} \wedge B(\vec{x}) = \mathrm{FALSE} \\ 0 & B(\vec{x}) = \mathrm{TRUE} \end{cases}$  EQN. 10
  • where $\gamma_i$ is a feature contribution power with a value of 1, and $F_i(\vec{x})$ are feature values as a function of a voxel's coordinates $\vec{x}$ in the region of interest. Examples of feature values can include volume intensity values, a response of a 3D volume to spatial filters, a gradient magnitude of the volume intensity values, and a Frangi filter response, all of which are known in the art.
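  • A minimal sketch of EQNs. 6-10 for a single feature volume $F_i$ (e.g., volume intensity), assuming numpy arrays of a common shape, the weights $W$ from the previous sketch, a seed given as a voxel index tuple, and $\gamma_i = 1$ as stated above:

    import numpy as np

    def feature_similarity(f_i, b, w, seed, gamma_i=1.0):
        inside, outside = b, ~b
        # EQN. 6: weighted in-mask mean of the feature, averaged with the feature value at the seed.
        t_in = 0.5 * ((w[inside] * f_i[inside]).sum() / w[inside].sum() + f_i[seed])
        # EQN. 7: weighted mean of the feature outside the working mask.
        t_out = (w[outside] * f_i[outside]).sum() / w[outside].sum()
        # EQN. 8: weighted mean of the feature over the full region of interest.
        t_full = (w * f_i).sum() / w.sum()
        # EQN. 9: feature contribution factor (gamma_i = 1 in the embodiment described above).
        lam = ((t_in - t_out) / t_full) ** gamma_i
        # EQN. 10: thresholded feature distance outside the mask, zero inside it.
        if t_in > t_out:
            c = np.maximum(0.0, lam * (t_in - f_i))
        else:
            c = np.maximum(0.0, lam * (f_i - t_in))
        return np.where(b, 0.0, c)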
  • Determining the similarity can be based on $C_i(\vec{x})$. The similarity can be determined as shown in EQNs. 11-13, as follows:

  • $G_1(\vec{x}) = \left(NZ\left(\sum_i C_i(\vec{x})\right)\right)^{\psi_1}$  EQN. 11

  • $G_2(\vec{x}) = \left[\left(Geo\left(G_1(\vec{x}), B(\vec{x})\right)\right)^{\psi_2} \times \left(1 + D(\vec{x})\right)^{\psi_3}\right]^{\psi_4}$  EQN. 12

  • $G(\vec{x}) = G_1(\vec{x}) \times G_2(\vec{x})$  EQN. 13
  • where $G(\vec{x})$ is the similarity; $\psi_1$ is a first feature metric power, $\psi_2$ is a second feature metric power, $\psi_3$ is a third feature metric power, and $\psi_4$ is a fourth feature metric power, all of which can be real numbers; $NZ(\cdot)$ is a normalization function, which may be a linear transformation that maps the range of any scalar field to the normalized range [0, 1]; and $Geo(\text{volume input}, \text{logical mask input})$ is a geodesic distance function as implemented in MATLAB.
  • In some embodiments, ψ1 has a value of 2. In some embodiments, ψ2 has a value of 0.5. In some embodiments, ψ3 has a value of 0.5 if voxels are being added to the mask and a value of 2 if voxels are being removed from the mask. In some embodiments, ψ4 has a value of 1 if voxels are being added to the mask and a value of 2 if voxels are being removed from the mask.
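  • A minimal sketch of EQNs. 11-13. The geodesic distance function is passed in by the caller as a stand-in for the MATLAB function referenced above (no particular Python equivalent is assumed here), and the default $\psi$ values follow the adding-to-the-mask case described in the preceding paragraph:

    import numpy as np

    def nz(field):
        # NZ(.): linear rescaling of a scalar field to the normalized range [0, 1].
        lo, hi = float(field.min()), float(field.max())
        return np.zeros_like(field, dtype=float) if hi == lo else (field - lo) / (hi - lo)

    def combined_similarity(c_list, b, d, geodesic, psi=(2.0, 0.5, 0.5, 1.0)):
        # c_list: per-feature similarity volumes C_i; b: working mask; d: distance field D(x).
        p1, p2, p3, p4 = psi
        g1 = nz(sum(c_list)) ** p1                               # EQN. 11
        g2 = (geodesic(g1, b) ** p2 * (1.0 + d) ** p3) ** p4     # EQN. 12
        return g1 * g2                                           # EQN. 13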
  • The method can involve modifying the mask data based on the similarity (Step 230) (e.g., modifying by the mask corrector 107 as described above in FIG. 1).
  • Modifying the mask based on the similarity value can involve determining a threshold based on the seed position. The threshold can be determined as shown in EQN. 14, as follows:

  • $g_0 = \theta \times G(\vec{s})$  EQN. 14
  • where $g_0$ is a threshold, $\theta$ is a scalar value (e.g., equal to 10 when adding data to the mask and equal to 100,000 when removing data from the mask), and $G(\vec{s})$ is the similarity evaluated at the seed position $\vec{s}$.
  • The modified mask can be determined as shown in EQN. 15, as follows:

  • $M_0 = \left(G(\vec{x}) < g_0\right) \wedge \lnot B(\vec{x})$  EQN. 15
  • where $M_0$ is the modified mask containing all voxels with a similarity measure less than the threshold $g_0$ and outside of $B(\vec{x})$, $G(\vec{x})$ is the similarity as described above in EQN. 13, and $\lnot B(\vec{x})$ is the inverse of the mask $B(\vec{x})$.
  • In some embodiments, determining the modified mask also involves dilating the modified mask by one voxel in all directions. The modified mask can be closed as shown below in EQN. 16, as follows:

  • $M_1 = \text{morphological closing of } M_0 \text{ with radius } 1$  EQN. 16
  • where $M_1$ is the modified mask $M_0$ after closing.
  • In some embodiments, determining the modified mask can also involve reducing the modified mask M1 to include only data that was not included in the original mask. The modified mask M1 can be reduced as shown below in EQN. 17 as follows:

  • $M_2 = M_1 \wedge \lnot B(\vec{x})$  EQN. 17
  • where $M_2$ is the portion of $M_1$ containing data that is not in $B(\vec{x})$.
  • Modifying the mask based on the similarity can also involve determining an output mask, as shown below in EQN. 18:
  • $\text{Output Mask} = \begin{cases} M_f \vee B(\vec{x}) & \text{adding to the mask} \\ \lnot\left(M_f \vee B(\vec{x})\right) & \text{removing from the mask} \end{cases}$  EQN. 18
  • where $M_f$ is the modification.
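  • A minimal sketch of EQNs. 14-18, assuming boolean numpy masks, using scipy's binary morphology for the radius-1 closing, and reading the output-mask equation as a logical union of the modification $M_f$ with the working mask $B(\vec{x})$:

    import numpy as np
    from scipy import ndimage

    def modify_mask(g, g_seed, b, adding):
        theta = 10.0 if adding else 100000.0               # example scalar values given above
        g0 = theta * g_seed                                # EQN. 14: threshold from the seed similarity
        m0 = (g < g0) & ~b                                 # EQN. 15: candidate voxels outside B(x)
        ball = ndimage.generate_binary_structure(3, 1)     # radius-1 (6-connected) structuring element
        m1 = ndimage.binary_closing(m0, structure=ball)    # EQN. 16: morphological closing of M0
        m2 = m1 & ~b                                       # EQN. 17: keep only data not already in B(x)
        out = m2 | b                                       # EQN. 18: union of the modification with B(x)
        return out if adding else ~out                     # EQN. 18: negated when removing from the mask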
  • FIGS. 3A-5B are examples of masks that are corrected by adding data to or removing data from the mask, according to illustrative embodiments of the invention. FIG. 3A is an example of a mask that has a seed point 310 where there is no data. FIG. 3B is an example of the correction that includes adding data in a region around the seed point 310. FIG. 4A is an example of a mask that includes teeth, with a seed point 410 where there is data present. FIG. 4B is an example of the correction that includes removing data in a region around the seed point 410. FIG. 5A is an example of a mask with a seed point 510. FIG. 5B is an example of the correction that includes adding data in a region around the seed point 510.
  • Comprise, include, and/or plural forms of each are open ended and include the listed parts and can include additional parts that are not listed. And/or is open ended and includes one or more of the listed parts and combinations of the listed parts.
  • It is apparent to one skilled in the art that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein. Scope of the invention is thus indicated by the appended claims, rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (17)

1. A method for correcting mask data, the method comprising:
receiving a single voxel location, mask data of a 3D volume, and a region of interest of the 3D volume;
determining similarity between any voxel in the region of interest and one or more voxels in the mask data, wherein the similarity is based on one or more feature values of the one or more voxels in the mask data, the single voxel location, and the region of interest of the 3D volume; and
modifying the mask data based on the similarity.
2. The method of claim 1 wherein modifying the mask data comprises removing one or more voxels from the mask data that are a connected component of the single voxel location and above a similarity threshold.
3. The method of claim 1 wherein modifying the mask data comprises adding one or more voxels into the mask data that are a connected component of the single voxel location and below a similarity threshold.
4. The method of claim 2 wherein the similarity threshold is based on the single voxel location.
5. The method of claim 1 wherein the one or more feature values are based on
a volume intensity value,
a response of a 3D volume to one or more spatial filters,
a gradient magnitude of the volume intensity value,
a Frangi filter response, or any combination thereof.
6. The method of claim 1 wherein determining the similarity is further based on a distance between each voxel in the mask data of the 3D volume and the single voxel location.
7. The method of claim 1 wherein the single voxel location is input by a user via clicking on a displayed image of the mask data.
8. The method of claim 1 wherein the region of interest of the 3D Volume data is from a CT scan, a CBCT scan, an MRI image, or any combination thereof.
9. A non-transient computer readable medium containing program instructions for causing a computer to perform the method of:
receiving a single voxel location, mask data of a 3D volume, and a region of interest of the 3D volume;
determining similarity between any voxel in the region of interest and one or more voxels in the mask data, wherein the similarity is based on one or more feature values of the one or more voxels in the mask data, the single voxel location, and the region of interest of the 3D volume; and
modifying the mask data based on the similarity.
10. The non-transient computer readable medium of claim 9 wherein modifying the mask data comprises removing one or more voxels from the mask data that are a connected component of the single voxel location and above a similarity threshold.
11. The non-transient computer readable medium of claim 9 wherein modifying the mask data comprises adding one or more voxels into the mask data that are a connected component of the single voxel location and below a similarity threshold.
12. The non-transient computer readable medium of claim 9 wherein the similarity threshold is based on the single voxel location.
13. The non-transient computer readable medium of claim 9 wherein the one or more feature values are based on
a volume intensity value,
a response of a 3D volume to one or more spatial filters,
a gradient magnitude of the volume intensity value,
a Frangi filter response, or any combination thereof.
14. The non-transient computer readable medium of claim 9 wherein determining the similarity is further based on a distance between each voxel in the mask data of the 3D volume and the single voxel location.
15. The non-transient computer readable medium of claim 9 wherein the single voxel location is input by a user via clicking on a displayed image of the mask data.
16. The non-transient computer readable medium of claim 9 wherein the region of interest of the 3D Volume data is from a CT scan, a CBCT scan, an MRI image, or any combination thereof.
17. A system for visualizing and/or 3D printing 3D objects, the system comprising:
at least one processor to receive the 3D object and a process flow for the 3D object, the at least one processor to execute the process flow on the 3D object, wherein the process flow is any combination of:
segmentation, mask modification, meshing, planning, designing, simulation, virtual reality, augmented reality, preparing for 3D printing, previewing for 3D printing, and/or 3D printing the 3D object; and
at least one 3D printer coupled to the at least one processor to print a 3D model of the 3D object.
US15/360,313 2016-11-23 2016-11-23 Systems and methods for an integrated system for visualizing, simulating, modifying and 3D printing 3D objects Active 2037-02-04 US10275909B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/360,313 US10275909B2 (en) 2016-11-23 2016-11-23 Systems and methods for an integrated system for visualizing, simulating, modifying and 3D printing 3D objects
PCT/US2017/052690 WO2018097880A1 (en) 2016-11-23 2017-09-21 Systems and methods for an integrated system for visualizing, simulating, modifying and 3d printing 3d objects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/360,313 US10275909B2 (en) 2016-11-23 2016-11-23 Systems and methods for an integrated system for visualizing, simulating, modifying and 3D printing 3D objects

Publications (2)

Publication Number Publication Date
US20180144516A1 true US20180144516A1 (en) 2018-05-24
US10275909B2 US10275909B2 (en) 2019-04-30

Family

ID=62147114

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/360,313 Active 2037-02-04 US10275909B2 (en) 2016-11-23 2016-11-23 Systems and methods for an integrated system for visualizing, simulating, modifying and 3D printing 3D objects

Country Status (2)

Country Link
US (1) US10275909B2 (en)
WO (1) WO2018097880A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11430037B2 (en) * 2019-09-11 2022-08-30 Ebay Korea Co. Ltd. Real time item listing modification
US11250635B1 (en) 2020-12-08 2022-02-15 International Business Machines Corporation Automated provisioning of three-dimensional (3D) printable images

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5647360A (en) * 1995-06-30 1997-07-15 Siemens Corporate Research, Inc. Digital subtraction angiography for 3D diagnostic imaging
AUPR358701A0 (en) * 2001-03-07 2001-04-05 University Of Queensland, The Method of predicting stroke evolution
WO2004068300A2 (en) * 2003-01-25 2004-08-12 Purdue Research Foundation Methods, systems, and data structures for performing searches on three dimensional objects
EP1638459A2 (en) 2003-06-11 2006-03-29 Case Western Reserve University Computer-aided-design of skeletal implants
US7995864B2 (en) * 2007-07-03 2011-08-09 General Electric Company Method and system for performing image registration
US8579620B2 (en) 2011-03-02 2013-11-12 Andy Wu Single-action three-dimensional model printing methods
US10734116B2 (en) 2011-10-04 2020-08-04 Quantant Technology, Inc. Remote cloud based medical image sharing and rendering semi-automated or fully automated network and/or web-based, 3D and/or 4D imaging of anatomy for training, rehearsing and/or conducting medical procedures, using multiple standard X-ray and/or other imaging projections, without a need for special hardware and/or systems and/or pre-processing/analysis of a captured image data
US8724881B2 (en) 2011-11-09 2014-05-13 Siemens Aktiengesellschaft Method and system for precise segmentation of the left atrium in C-arm computed tomography volumes
WO2015077314A2 (en) 2013-11-20 2015-05-28 Fovia, Inc Volume rendering color mapping on polygonal objects for 3-d printing
JP2017505172A (en) * 2014-01-24 2017-02-16 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. System and method for three-dimensional quantitative assessment of uterine fibroids
US9840045B2 (en) 2014-12-31 2017-12-12 X Development Llc Voxel 3D printer

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190054566A1 (en) * 2017-08-15 2019-02-21 General Electric Company Selective modification of build strategy parameter(s) for additive manufacturing
US10338569B2 (en) * 2017-08-15 2019-07-02 General Electric Company Selective modification of build strategy parameter(s) for additive manufacturing
US10406633B2 (en) * 2017-08-15 2019-09-10 General Electric Company Selective modification of build strategy parameter(s) for additive manufacturing
US10471510B2 (en) 2017-08-15 2019-11-12 General Electric Company Selective modification of build strategy parameter(s) for additive manufacturing
CN109409274A (en) * 2018-10-18 2019-03-01 广州云从人工智能技术有限公司 A kind of facial image transform method being aligned based on face three-dimensional reconstruction and face
US20210374986A1 (en) * 2019-02-20 2021-12-02 Imperial College Of Science, Technology And Medicine Image processing to determine object thickness
US11094116B2 (en) * 2019-11-18 2021-08-17 GE Precision Healthcare LLC System and method for automatic generation of a three-dimensional polygonal model with color mapping from a volume rendering

Also Published As

Publication number Publication date
US10275909B2 (en) 2019-04-30
WO2018097880A1 (en) 2018-05-31

Similar Documents

Publication Publication Date Title
US10275909B2 (en) Systems and methods for an integrated system for visualizing, simulating, modifying and 3D printing 3D objects
EP3511942B1 (en) Cross-domain image analysis using deep image-to-image networks and adversarial networks
US20180225823A1 (en) Adversarial and Dual Inverse Deep Learning Networks for Medical Image Analysis
US7397475B2 (en) Interactive atlas extracted from volume data
US8077948B2 (en) Method for editing 3D image segmentation maps
JP2017174039A (en) Image classification device, method, and program
CN112885453A (en) Method and system for identifying pathological changes in subsequent medical images
US8503741B2 (en) Workflow of a service provider based CFD business model for the risk assessment of aneurysm and respective clinical interface
JP6643821B2 (en) Image processing device
Toma et al. Thresholding segmentation errors and uncertainty with patient-specific geometries
Sonny et al. A virtual surgical environment for rehearsal of tympanomastoidectomy
CN108694007B (en) Unfolding ribs from magnetic resonance images
Wi et al. Computed tomography-based preoperative simulation system for pedicle screw fixation in spinal surgery
JP2005525863A (en) Medical inspection system and image processing for integrated visualization of medical data
WO2022163513A1 (en) Learned model generation method, machine learning system, program, and medical image processing device
EP3989172A1 (en) Method for use in generating a computer-based visualization of 3d medical image data
US20150199840A1 (en) Shape data generation method and apparatus
Wustenberg Carpal Bone Rigid-Body Kinematics by Log-Euclidean Polyrigid Estimation
US20190172577A1 (en) Dissection process estimation device and dissection process navigation system
EP4016470A1 (en) 3d morhphological or anatomical landmark detection method and device using deep reinforcement learning
Moon et al. Standardizing 3D medical imaging
US20230343026A1 (en) Method and device for three-dimensional reconstruction of brain structure, and terminal equipment
Preim et al. Visualization, Visual Analytics and Virtual Reality in Medicine: State-of-the-art Techniques and Applications
Sindhu et al. Tools to Create Synthetic Data for Brain Images
Zimeras et al. Shape analysis in radiotherapy and tumor surgical planning using segmentation techniques

Legal Events

Date Code Title Description
AS Assignment

Owner name: 3DSYSTEMS, INC., SOUTH CAROLINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PRI-TAL, DAN;PORAT, ROY;KALISMAN, OREN;REEL/FRAME:045408/0203

Effective date: 20180226

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: 3D SYSTEMS, INC., SOUTH CAROLINA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HSBC BANK USA, NATIONAL ASSOCIATION;REEL/FRAME:057651/0374

Effective date: 20210824

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: SURCHARGE FOR LATE PAYMENT, LARGE ENTITY (ORIGINAL EVENT CODE: M1554); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4