WO2016128965A2 - Imaging system of a mammal - Google Patents
- Publication number
- WO2016128965A2 (PCT/IL2016/050145)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image data
- imaging system
- data
- subject
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
- A61B5/0035—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10084—Hybrid tomography; Concurrent acquisition with multiple different tomographic modalities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Definitions
- the present invention generally relates to the field of imaging systems, and more particularly, to image processing systems, for generating 3D images of internal features of humans and small mammals.
- the main objective of medical image processing is to facilitate the gathering of, or to provide, diagnostic information from non-processed images.
- the processing of digitally acquired images is aimed at improving pictorial information for human interpretation and/or processing data for autonomous machine perception.
- Imaging techniques such as MRI utilize non-destructive imaging modalities that derive contrast from differences in physiological content and chemical environment, permitting the imaging of internal structures.
- the research field has long been utilizing imaging technology to assess the physiological changes of research subjects and specimens.
- research associates often have no prior experience in physiological and anatomical image interpretation, such as radiology, or in the practice of relating the acquired images to a diagnosis.
- many research subjects are operated on during various stages of research to investigate physiological changes of the inner organs not otherwise visible. Consequently, a large number of research subjects must be operated on in order to examine different stages of physiological change along the experiment timeline.
- 3D imaging models exist, for example, of murine subjects.
- Mouse atlas http://mouseatlas.caltech.edu
- This is a collection of pre-made, pre-interpreted MRI scans that facilitates the correct interpretation of MRI scans of a mouse at different stages of development.
- the software provides a generic murine model and does not combine real-time imagery of the patient or research subject.
- Another example is a 3D Rat anatomy software, provided by Biosphera (http://www.biosphera.com.br/e-rat-anatomy.asp). This example provides a tool for learning about rat anatomy and the 3D location of the internal organs. All the comparisons, deductions and diagnoses of MRI scans are done manually by the user.
- 3D image processing software also exists that can generate 3D images from scans such as MRI scans, for example Mimics® provided by Materialise for medical image processing. Mimics® can be used for the segmentation of 3D medical images, resulting in 3D models of patient anatomy.
- These programs provide a 3D model based on a single image acquiring method for each model.
- these software programs usually require high computational resources, which translates into costly computers and specialized graphics cards.
- the present invention provides an imaging system (100), for presentation of at least one 3D image of an internal organ of a mammalian subject, comprising: (a) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject from a second imaging method; (b) a 3D generic module comprising phantom 3D model data (106) of rendered internal organs of the subject; and (c) a processor (102) in communication with a Computer Readable Medium (CRM) (101), for executing a set of operations received from the CRM; the set of operations comprising: (i) importing the subject image data from more than one imaging method, and the phantom model 3D data, by means of the input module; (ii) fitting the subject image data to the phantom model 3D data to provide mapping of the subject image data features; (iii) generating image parameters with reference to the subject image data mapping; and (iv) processing and rendering at least a portion
- IMD imaging device
- the image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of the image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of the image, changing color level, identifying foreground objects and background locations, changing brightness level, Control-Point Image processing, and generating an R-set of image data.
- the input device is configured to provide image analysis of the first image data and at least the second image data by image processing means.
- image analysis comprises detection of at least one of: shapes contours, foreground shapes, background shapes, textures, color levels, brightness, noise, saturation, resolution, channels, patterns, and similarities.
- R-Set reduced-resolution data set
- a scoring module in communication with the processor, configured to generate at least one score selected from a group consisting of: the assurance score, the mapping score, the quality score, and any combination thereof.
- an export module in communication with the processor, configured to export the generated image and/or the image data parameters, to at least one recipient.
- a graphics processing unit GPU
- the present invention provides a method for 3D imaging an internal organ of a mammalian subject, comprising the steps of: (a) obtaining an imaging system (100) comprising: (i) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject, from a second imaging method; (ii) a 3D generic module comprising phantom model data of rendered internal organs of the subject; and, (iii) a processor (102) in communication with a Computer Readable Medium, CRM, (101), for executing a set of operations received from the CRM; the set of operations comprising: (1) importing the subject image data from more than one imaging method, and the phantom model 3D data, by means of the input module; (2) fitting the subject image data to phantom model 3D data to provide mapping of the subject image data features; (3) generating image parameters with reference to the subject image data mapping; and, (4) processing and rendering at least a portion of the subject image data
- IMD imaging device
- the image processing means comprising at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of the image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of the image, changing color level, identifying foreground objects and background locations, changing brightness level, Control-Point Image processing, and generating an R-set of image data.
- R-Set reduced-resolution data set
- a graphics processing unit GPU
- the present invention provides an imaging system, for presentation of at least one 3D image of an internal organ of a mammalian subject, comprising: (a) a 3D generic module comprising phantom 3D model data of rendered internal organs of the subject; (b) an input module configured to receive a first image data from a first imaging method, and at least a second image data from at least a second imaging method; (c) a fitting module configured to provide mapping of at least a portion of the image data by matching the image data with the 3D generic module data; (d) a scoring module configured to score the image data by evaluating and comparing the first image data and the at least second image data, and to associate at least one score with each image data portion; and, (e) a processing module configured to generate image parameters for generating a 3D image by selecting image data above a defined threshold score and rendering the 3D image; wherein the processing module is configured to integrate the first image data and the at least second image data to generate the image parameters, and generate the 3D image by
- IMD imaging device
- the fitting module is configured to fit at least one image data to the 3D phantom model by applying at least one selected from a group consisting of: overlaying at least a portion of the images, subtracting at least a portion of the images, incorporating at least a portion of one image in at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of the image by image processing means, applying image processing on at least a portion of the image, and any combination thereof.
- the image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of the image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of the image, changing color level, identifying foreground objects and background locations, changing brightness level, Control-Point Image processing, and generating an R-set of image data.
- R-Set reduced-resolution data set
- the processing module is configured to associate at least one score with each image parameter. It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises an export module, configured to export the generated image and/or the image data parameters, to at least one recipient.
- the fitting module is configured to adjust the image data mapping according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
- a graphics processing unit GPU
- Fig. 1A is a schematic diagram of an embodiment of the present invention imaging system, for generation and presentation of at least one 3D image of an internal portion of a subject;
- Fig. 1B is a schematic diagram of an embodiment of the present invention imaging system further comprising interconnecting modules;
- Fig. 2 is a schematic diagram describing an example set of operations executed by the present invention;
- Fig. 3 is a schematic diagram of an embodiment of a method for generating an image of at least a portion of a subject.
- Fig. 4 is a schematic diagram of an embodiment of the present invention imaging system, for generation and presentation of at least one 3D image of at least a portion of a subject.
- the essence of the present invention is to provide an image processing system that obtains more than one image (internal, external, or both) of a subject, such as a human or small mammal, imported from an imaging device (IMD), where at least one first image is acquired by at least one first imaging method and at least one second image by a second imaging method.
- IMD imaging device
- the system maps the image data by comparison to an existing 3D phantom model module. Further, the system generates image parameters by combining multiple data sources, and, by comparison and implementation of the data into an 'off-the-shelf' 3D anatomy imaging model, provides 3D visualization of the subject and/or specimen physiology and/or anatomy, allowing mapping and rendering of at least an internal portion or organ of the subject.
- the imaging system of the present invention will increase the accuracy of medical and research-oriented image interpretation, and ease the use of sophisticated imagery by inexperienced personnel.
- the present invention will reduce the number of research subjects needed in experiments, and will shorten the time needed to acquire a diagnosis.
- the present invention can operate with limited computer resources, as it enables extraction of specific information (a defined organ or region of interest) to be further processed and rendered.
- 'Imaging Device' specifically applies hereinafter to any device and/or any other analyzing and imaging instruments providing image data of at least an internal portion of the subject in at least one image acquiring method, including, but not limited to: Magnetic Resonance Imaging (MRI) device, Nuclear Magnetic Resonance (NMR) spectroscope, Electron Spin Resonance (ESR) spectroscope, Nuclear Quadrupole Resonance (NQR), Laser Magnetic Resonance device, Rotational Field Quantum Magnetic Resonance device (cyclotron), computerized tomography (CT) device, PET-CT, PET-MRI, bone densitometry device, ultrasound (US), 3D ultrasound, Doppler ultrasound imaging, X-ray device, Fluoroscopy device, any fluorescence device, Diffusion MRI, micro-CT, Confocal Microscopy, SPECT (Single-photon emission computed tomography) device, scintigraphy device, Magnetoencephalography device, Tactile imaging device, Photoacoustic
- At least two different imaging methods are selected from these non-limiting examples: Magnetic Resonance Imaging (MRI), Nuclear Magnetic Resonance (NMR), Electron Spin Resonance (ESR), Nuclear Quadrupole Resonance (NQR), Laser Magnetic Resonance, Rotational Field Quantum Magnetic Resonance, Computerized Tomography (CT/CAT scan), PET (Positron Emission Tomography) imaging, PET-CT imaging, PET-MRI imaging, bone densitometry imaging, ultrasound (US) imaging (sonogram), 3D ultrasound imaging, Doppler ultrasound imaging, X-ray imaging, X-ray computed tomography, Fluoroscopy, fluorescence imaging, Diffusion MRI, micro-CT, Confocal Microscopy, Magnetic Resonance Angiography (MRA), functional Magnetic Resonance Imaging (fMRI), SPECT (Single-photon emission computed tomography), scintigraphy, Magnetoencephalography, Tactile imaging, Photoacoustic imaging, Thermography, Opti
- 'mammal' refers to any human and non-human animal of the class Mammalia, a large class of warm-blooded vertebrates having mammary glands in the female, a thoracic diaphragm, and a four-chambered heart, including as non-limiting examples members of the order Lagomorpha (comprising rabbits and hares (family Leporidae) and the small rodent-like pikas (family Ochotonidae)) and the order Rodentia, the murine genus (Mus) or its subfamily (Murinae), including these non-limiting examples: a rat, mouse, hamster, guinea pig, rabbit, hare and the like.
- the term includes any animal used in research, any genetically engineered, genetically modified animal, natural animal, especially bred animal, treated animal, or any animal portion.
- the term subject further includes these non-limiting examples: amphibians, birds, fish, reptiles, and other small animals as known in the art.
- 'murine' interchangeably refers hereinafter to any animal of the murid genus (Mus) or its subfamily (Murinae), further including these non-limiting examples: rats, mice, rodents, laboratory murine, genetically designed murine, and the like.
- a graphics processing unit or “GPU”, interchangeably refers hereinafter with any visual processing unit (VPU), a dedicated electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display.
- VPU visual processing unit
- GPUs utilize a highly parallel structure, making them more effective than general-purpose CPUs for algorithms where large blocks of data are processed in parallel.
- a GPU can be embedded on a graphics dedicated card, on the motherboard, or as part of the processor itself.
- 'CPU', 'central processing unit', or 'processor' interchangeably refers hereinafter to any hardware that carries out the instructions of a program by performing the basic arithmetical, logical, and input/output operations of the system.
- a computer can have more than one CPU; this is called multiprocessing.
- the processor can also be, for example, a microprocessor, multi-core processor, system on a chip (SoC), array processor, or vector processor.
- SoC system on a chip
- the CPU/processor is typically connected to a memory unit (storage unit; a unit of a computer or an independent device designed to record, store, and reproduce information) configured to store and retrieve information in various forms (e.g., a database).
- Computer readable media (CRM), interchangeably refers hereinafter to any medium, e.g., a non-transitory medium, capable of storing data in a format readable by a mechanical device (automated data medium rather than human readable).
- machine-readable media include magnetic media such as magnetic disks, cards, tapes, and drums, punched cards and paper tapes, optical disks, flash memories, barcodes and magnetic ink characters.
- Common machine-readable technologies include magnetic recording, processing waveforms, electronic memory encoding, and barcodes.
- Optical character recognition (OCR) can be used to enable machines to read information available to humans. Any information retrievable by any form of energy can be machine-readable.
- 'control-point image processing' refers herein to any method employing manual selection of at least one control point in order to label that point as an anchor for purposes such as feature labeling or extraction, aligning at least one more image, vector reconstruction using the point's coordinates, transforming the image from this point, or transforming the image while keeping this point fixed.
- Another example is employing at least one control point in two images in order to align them.
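By way of illustration only (not part of the disclosure), a minimal control-point alignment between two images can be sketched as below. The function names are hypothetical, rotation is omitted for brevity, and the transform is estimated from matched control-point coordinates alone:

```python
import math

def align_from_control_points(src_pts, dst_pts):
    # Estimate a similarity transform (uniform scale + translation) that maps
    # matched control points of one image onto another. Rotation is omitted
    # to keep the sketch minimal.
    n = len(src_pts)
    sc = (sum(p[0] for p in src_pts) / n, sum(p[1] for p in src_pts) / n)
    dc = (sum(p[0] for p in dst_pts) / n, sum(p[1] for p in dst_pts) / n)
    # Scale: ratio of the point spreads around the two centroids
    s_spread = math.sqrt(sum((p[0] - sc[0]) ** 2 + (p[1] - sc[1]) ** 2 for p in src_pts))
    d_spread = math.sqrt(sum((p[0] - dc[0]) ** 2 + (p[1] - dc[1]) ** 2 for p in dst_pts))
    scale = d_spread / s_spread
    # Translation: move the scaled source centroid onto the target centroid
    t = (dc[0] - scale * sc[0], dc[1] - scale * sc[1])
    return scale, t

def apply_alignment(pt, scale, t):
    # Map a single point through the estimated transform.
    return (scale * pt[0] + t[0], scale * pt[1] + t[1])
```

With two matched points per image this already fixes scale and position; a full registration pipeline would also estimate rotation and possibly an affine or projective term.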
- an imaging system for presentation of at least one 3D image of an internal organ of a (e.g., small) mammalian subject, comprising: (a) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject, from a second imaging method; (b) a 3D generic module comprising phantom 3D model data (106) of rendered internal organs of the subject; (c) a processor (102) in communication with a computer readable medium (CRM) (101), for executing a set of operations received from the CRM; the set of operations including: (i) importing the subject image data (104, 105) from more than one imaging method, and phantom model 3D data from the 3D module (106) by means of the input module (103); (ii) fitting the phantom model 3D
- CRM computer readable medium
- an imaging system as defined in any of the above is disclosed, wherein the system imports whole body image data, or partial body image data. Additionally or alternatively, the system is configured to import the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof. Additionally or alternatively, the imported image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
- Medical image processing usually includes mapping of data acquired by one imaging method.
- the subject volume is typically divided by a virtual matrix into voxels of a given size, into which image data information is allocated in order to generate a 3D image, or pixels to generate a 2D image.
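The division of the subject volume into a virtual voxel matrix can be sketched as follows; the grid shape, units, and accumulation rule are illustrative assumptions rather than the patent's procedure:

```python
def allocate_to_voxels(signals, voxel_size, grid_shape):
    # Divide the subject volume into a virtual matrix of voxels and
    # accumulate signal intensities by spatial position.
    # `signals` is a list of ((x, y, z), intensity) tuples; coordinates are
    # in the same physical units as `voxel_size`.
    nx, ny, nz = grid_shape
    grid = [[[0.0 for _ in range(nz)] for _ in range(ny)] for _ in range(nx)]
    for (x, y, z), intensity in signals:
        i = int(x // voxel_size)
        j = int(y // voxel_size)
        k = int(z // voxel_size)
        if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:
            grid[i][j][k] += intensity
    return grid
```

Dropping one spatial dimension gives the analogous pixel grid for a 2D image.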
- the received signals from the imaging device must be interpreted as to their location and unique properties, digitized (converted into a binary sequence), and stored in a memory module (CRM).
- CRM memory module
- the three dimensional placement of each signal is mapped to a voxel and translated to a visual form in what is known in the art as image reconstruction.
- This reconstruction procedure is based upon dedicated reconstruction algorithms, as known in the art, specifically for the signal type and image acquiring method (e.g.
- images can be produced point-by-point, line-by-line, in slices, or in slices calculated from a whole volume.
- Imagery can be formed from two-dimensional (2D) imagery methods, encoding only two spatial dimensions or from volume techniques encoding three spatial dimensions.
- the standard deviation of the comparison between different tissues during imaging can be as high as about 30%, for example, in brain MRI.
- the difference between normal tissue and a similar tumor is not easily distinguishable. This can be detrimental when deciding on a diagnosis or when planning the size of a medical implant.
- the present invention provides a system combining different image data sets originating from different image acquiring methods, thereby providing additional data, improving the signal-to-noise ratio, and giving a higher assurance of the resulting image.
- additional information may allow a smaller voxel size leading to better resolution of the final image.
- utilizing more than one image acquiring method can result in fewer artifacts in the final image.
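One possible fusion rule for two co-registered images is sketched below: average where the modalities agree, and fall back to the smaller value where they disagree strongly, treating the disagreement as a likely artifact in one modality. The threshold and the fallback rule are illustrative assumptions, not the disclosed method:

```python
def fuse_images(img_a, img_b, max_disagreement=0.5):
    # Fuse two co-registered images pixel by pixel (flattened intensity
    # lists). Agreement -> average; strong disagreement -> conservative
    # minimum, on the assumption that an artifact usually adds signal.
    fused = []
    for a, b in zip(img_a, img_b):
        if abs(a - b) <= max_disagreement:
            fused.append((a + b) / 2.0)
        else:
            fused.append(min(a, b))
    return fused
```

In practice the disagreement threshold would be calibrated per modality pair, and the fallback could instead flag the voxel for the scoring step described later.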
- the reconstruction procedure of an image is performed either following the integration of more than one image data set, or prior to the integration of more than one image data set.
- the reconstruction procedure of an image is performed by reconstructing only a portion of the image data defined by the user.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate the image parameters by combining image data from more than one imported image data.
- the image data imported consists of partial information of the internal portion.
- the processor is configured to produce the 3D image from this partial information by completing the missing data, for example by connecting the data voxels and assessing their content according to an algorithm that evaluates the information from the nearest voxels and the phantom model data of the specific mapping location. Additionally or alternatively, the newly completed voxels can be presented in a different color or form, and a report summarizing these locations and the analysis is generated.
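A 1-D simplification of such data completion is sketched below: a missing voxel is estimated from its nearest known neighbours and blended with the phantom model value at the same mapped location. The blend weight and the averaging rule are illustrative assumptions:

```python
def complete_voxel(row, i, phantom_value, phantom_weight=0.5):
    # Estimate a missing voxel (None) at index `i` of a 1-D row of voxel
    # values, blending the nearest known neighbours with the phantom model
    # value for the same mapped location.
    left = next((row[j] for j in range(i - 1, -1, -1) if row[j] is not None), None)
    right = next((row[j] for j in range(i + 1, len(row)) if row[j] is not None), None)
    known = [v for v in (left, right) if v is not None]
    neighbour_est = sum(known) / len(known) if known else phantom_value
    return (1 - phantom_weight) * neighbour_est + phantom_weight * phantom_value
```

The completed voxel could be tagged (as the bullet above suggests) so the rendering step can display it in a different color.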
- an imaging system as defined in any of the above is disclosed, wherein at least one imported image data provides volumetric image data.
- the main advantage of the 3D technique is its signal-to-noise advantage over 2D techniques (if the voxel size is kept constant, the signal-to-noise ratio improves by the square root of the number of slices).
- Another advantage is that the slices are contiguous (which is not the case with multiple-slice techniques); therefore, less information is missed. Further, any desired slice orientation can be reconstructed from the data set, and very thin slices can be obtained.
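The square-root relation can be demonstrated numerically: averaging four independent noisy acquisitions should reduce the noise standard deviation by a factor of about √4 = 2. A small illustrative simulation:

```python
import math
import random

def noise_std(samples):
    # Population standard deviation of a list of samples.
    m = sum(samples) / len(samples)
    return math.sqrt(sum((s - m) ** 2 for s in samples) / len(samples))

random.seed(0)
n = 10000
# One noisy acquisition vs. the average of four independent acquisitions
single = [random.gauss(0.0, 1.0) for _ in range(n)]
averaged = [sum(random.gauss(0.0, 1.0) for _ in range(4)) / 4 for _ in range(n)]
ratio = noise_std(averaged) / noise_std(single)  # expected near 1/sqrt(4) = 0.5
```

The same scaling applies whether the N independent measurements come from N slices or from repeated acquisitions of the same voxel.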
- the present invention makes more limited use of computer resources by processing and rendering only the desired internal portion or organ at a time, while providing the rest of the subject as depicted in the already-rendered 3D phantom model.
- the full subject is rendered according to the subject data.
- the desired portion of interest is rendered and presented within a simpler displaying or rendering mode illustrating the rest of the figure, such as wireframe or vector-based rendering.
- tissue composition and anatomical and/or physiological structure change according to the subject's age. For example, the relative content of myelin in the brain may increase during the first years of infancy.
- the imaging system of the present invention is configured to adjust the data processing of the image data to tissue properties typical of the subject age. Additionally or alternatively, the system is configured to generate image parameters adapted to the subject developmental stage. Additionally or alternatively, the system is configured to adjust the fitting of the image data to the 3D phantom model further by adjusting the parameters of the 3D phantom model to the developmental stage of the imaged subject.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate adjusted image parameters according to at least one selected from a group consisting of: subject species, subject temperature when imaged, subject physiological condition, organ type, image data source, and any combination thereof.
- Imaging-processing and spatial referencing allows the connection of picture element data of the same location in different images with changed parameters, providing a synthetic image combining the different data sources: at least two image data sources, incorporated within a 3D phantom model.
- An important step is the identification, alignment, scaling and consequent fitting of the imported data from the subject to the 3D phantom model.
- the fitting process is automatic. Additionally or alternatively, the fitting process is performed with at least one intervention by the user, such as utilizing control-point processing.
- This provides mapping of the data from the subject according to standard anatomy. This step further allows for rapid feature extraction and allows rapid identification of the internal portion by the user.
- the imported data from the subject is analyzed and indexed (or tagged) as to the mapping location of each portion or voxel, and optionally a tag according to organ type is added. Further, the system is configured to analyze, compare and unite subject image data originating from more than one data source.
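Such per-voxel indexing and the uniting of multiple data sources can be sketched as below; the record fields and grouping rule are hypothetical illustrations, not the disclosed data model:

```python
def tag_voxel(index, mapped_location, source, organ=None):
    # Build a per-voxel index record: voxel index, mapping location on the
    # phantom model, data origin, and an optional organ-type tag.
    tag = {"voxel": index, "location": mapped_location, "source": source}
    if organ is not None:
        tag["organ"] = organ
    return tag

def unite_sources(tags):
    # Unite subject image data originating from more than one data source:
    # group per-voxel records by voxel index so later steps can compare
    # and combine the sources for each voxel.
    united = {}
    for t in tags:
        united.setdefault(t["voxel"], []).append(t["source"])
    return united
```

The grouped records give each voxel a list of contributing sources, which is the input the scoring steps below would consume.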
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to identify a specific predefined organ in the imported image data in reference to the phantom model data.
- an imaging system as described in any of the above is disclosed, wherein the system is configured to generate at least one data associated tag for each data component.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate an assurance score, for assessing the image data content in view of at least two image sources.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a mapping score for assessing the location of the image data in reference to the 3D phantom model.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a quality score for assessing the combination of the mapping score and the assurance score.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate the 3D image based on image parameters above a defined threshold score.
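One way the assurance score, mapping score, quality score, and threshold selection could fit together is sketched below; the weighted-mean rule and field names are assumptions, not the disclosed algorithm:

```python
def quality_score(assurance, mapping, w_assurance=0.5):
    # Combine the assurance score (agreement between image sources) and the
    # mapping score (fit to the phantom model) into a single quality score.
    # A weighted mean stands in for whatever combination rule is used.
    return w_assurance * assurance + (1.0 - w_assurance) * mapping

def select_parameters(params, threshold):
    # Keep only image parameters whose quality score exceeds the defined
    # threshold; only these contribute to the generated 3D image.
    return [p for p in params if p["quality"] > threshold]
```

Both scores are assumed normalized to [0, 1]; the threshold would then be a value in the same range chosen by the user or the system.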
- the system comprises a scoring module configured to generate the scoring of image data components and / or the scoring of image data.
- This indexing and/or tagging is associated with the image parameters. Additionally or alternatively, other tags can be added with additional information including: the data origin, the time taken, an assurance score based on an algorithm combining the data from two image acquiring sources, a mapping score based on an algorithm assessing the reliability of the fitting made to the phantom model, the quality of the information (assessed, for example, by a quality-assessing algorithm taking into consideration the mapping score and the assurance score), and the like.
- the next step includes retrieving image parameters, and reconstructing a three dimensional image by incorporating a selected internal region relating image parameters into the 3D phantom model data, thereby generating a new 3D model showing the selected region from the imaged subject.
- the system is configured to modify the phantom model parameters according to at least one generated image parameter.
- Image analysis is the process of extracting meaningful information from images such as finding shapes, counting objects, identifying colors, or measuring object properties. It is further in the scope of the present invention that the processor/ processing module is configured to map the subject internal organs on at least one image data, by applying image processing means on the image data prior to fitting the image data on the 3D model data.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate at least one image parameter following one or more of: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, contrast enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, aligning, scaling, skewing, rotating, angling, and detection of brightness level.
- image processing means refers hereinafter to any image processing means as known in the art, such as these non-limiting examples: a. subtraction or overlay (superposition) of multichannel images, images from different image acquiring methods, and/or multiple images of the same source in different conditions or at different times; b. contour detection, surface detection, volume detection, feature extraction; c. detection of color level, detection of brightness level; d. geometric transformations such as aligning, scaling, skewing, rotating, angling, cropping, as well as more complex 2D geometric transformations, such as affine and projective; e. vector rendering - vector-based rendering, for example to fill in blanks in an image; f. surface rendering; g.
- Image segmentation is the process of dividing an image into multiple parts. This is typically used to identify objects or other relevant information in digital images. There are many different ways to perform image segmentation, including: threshold methods, color-based, brightness-based, or contrast-based segmentation (such as K-means clustering), transform methods (such as watershed segmentation), and texture methods (such as texture filters). Image segmentation further allows isolating objects of interest and gathering related statistics.
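Of the segmentation families listed above, the threshold method is the simplest to sketch. The following non-limiting pure-Python example (helper name and threshold value are illustrative, not part of the invention) labels pixels at or above an intensity threshold as foreground:

```python
def threshold_segment(image, threshold):
    """Segment a 2D grayscale image (a list of rows) by intensity:
    1 marks foreground pixels, 0 marks background."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

scan = [
    [12, 200, 210],
    [10, 180,  15],
    [ 9,  11,  14],
]
mask = threshold_segment(scan, 100)
# The bright pixels form the segmented object of interest.
```

More elaborate methods (K-means clustering, watershed, texture filters) replace the fixed threshold with a data-driven decision, but produce the same kind of label mask.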
- Each of these features can be applied according to a predefined threshold; j. image reconstruction from continuous or non-continuous slices; detection of color-level or brightness-level discontinuity allows the highlighting of points, lines, and edges in an image; k. similarity detecting algorithms revealing areas of similar signal intensities using a defined threshold;
- pattern recognition algorithms defining areas of similar intensity pattern; m. utilizing control point processing; this can be done at least partially manually, having the user choose and/or approve at least one point, or completely automatically, having the processor choose control points by an analysis and selection algorithm; n. detection and marking of specific landmark features (such as a specific and easily defined and detected bone structure, overall external surface shape, external features such as an ear or a nose, a well-defined tendon, etc.), as anchors for the fitting process; o. image enhancement - removing noise, increasing the signal to noise ratio, applying sharpening filters on an image, and modifying the colors or intensities of an image; p.
- specific landmark features such as a specific and easily defined and detected bone structure, overall external surface shape, external features such as an ear or a nose, a well-defined tendon, etc.
- the processor/processing module can, for example: filter with predefined morphological operators, de-blur and sharpen, remove noise with linear, median, or adaptive filtering, perform histogram equalization, remap the dynamic range, adjust the gamma value, adjust contrast, adjust brightness, apply a watershed transform (the watershed transform finds "catchment basins" and "watershed ridge lines" in an image by treating it as a surface where light pixels are high and dark pixels are low), etc.; q. identify, or "mark," foreground objects and background locations; and/or r. measure objects or any defined image portions.
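One of the enhancement operations named above, histogram equalization, can be sketched in pure Python as follows (a minimal, non-limiting illustration on a flat list of 8-bit pixel values; the helper name is hypothetical and the image is assumed non-uniform):

```python
def equalize(pixels, levels=256):
    """Spread pixel intensities over the full dynamic range by mapping
    each value through the cumulative histogram (histogram equalization)."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the intensities.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied bin
    # Rescale the CDF back to the intensity range [0, levels-1].
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A dark, low-contrast strip is stretched to use the whole range:
out = equalize([0, 0, 1, 2, 3])  # → [0, 0, 85, 170, 255]
```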
- multispectral images of the same body region can be overlaid to give an impression of the exact location of certain contrast-enhanced structures.
- This can be used for example when overlaying PET images on MRI, or contour detecting enhancement of the same image on the original image.
- Other non-limiting examples are placing a fluorescence image on an X-ray, MR angiography to highlight veins after subtraction of the CE-MRA images of the arterial phase, and any combination of any two image acquiring methods.
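The overlay and subtraction operations referred to above can be sketched as simple pixel-wise combinations (a non-limiting pure-Python illustration on same-sized 2D intensity images; function names and the blend weight are illustrative):

```python
def overlay(base, top, alpha=0.5):
    """Alpha-blend two same-sized 2D intensity images (lists of rows),
    e.g. a PET image over an MRI slice."""
    return [[round((1 - alpha) * b + alpha * t) for b, t in zip(brow, trow)]
            for brow, trow in zip(base, top)]

def subtract(a, b):
    """Pixel-wise subtraction clamped at zero, e.g. removing the
    arterial phase from CE-MRA data to highlight veins."""
    return [[max(ai - bi, 0) for ai, bi in zip(ar, br)]
            for ar, br in zip(a, b)]

blended = overlay([[0, 100]], [[200, 100]], alpha=0.5)  # → [[100, 100]]
veins   = subtract([[50, 10]], [[20, 30]])              # → [[30, 0]]
```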
- At least one of the above stated image processing techniques, transformation methods and/or algorithms can be used by the processor for at least one of the following:
- the system is configured to provide feature extraction.
- Feature extraction reduces the representation of an image to a small number of components defined by the user. This process allows further viewing and/or rendering and/or manipulation and/or processing of a limited amount of data, thereby limiting the needed computer resources. In turn, this can be used to calculate other features such as edges and textures.
- Feature extraction can also provide selective measurements for vector based image reconstruction. Segmentation is also applied in preprocessing of images for multimodality image registration.
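Feature extraction as described above can be sketched as reducing an image to a short vector of summary components (a non-limiting pure-Python illustration; the particular features and names are chosen for the example only):

```python
def extract_features(image):
    """Reduce a 2D image (list of rows) to a small feature vector:
    mean intensity, maximum intensity, and foreground pixel count."""
    pixels = [p for row in image for p in row]
    return {
        "mean": sum(pixels) / len(pixels),
        "max": max(pixels),
        "foreground": sum(1 for p in pixels if p > 0),
    }

features = extract_features([[0, 10], [20, 30]])
# → {'mean': 15.0, 'max': 30, 'foreground': 3}
```

Downstream steps (rendering, registration, comparison) can then operate on this compact vector instead of the full pixel data.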
- pattern recognition systems are employed in the analysis of the image data, facilitating the mapping of the subject image data.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate 3D visualization of a user defined area.
- an imaging system as defined in any of the above is disclosed, wherein the system processor is configured to provide workflows specifically for working with large images that are difficult to process and display with standard methods, by generating at least one reduced-resolution data set (R-Set) of the image data, that divides an image into spatial tiles and resamples the image at different resolution levels without loading a large image entirely into memory. Additionally or alternatively, this R-Set configuration enables rapid image display, processing and/or navigation.
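The multiresolution idea behind an R-Set can be sketched as an image pyramid: each level halves the resolution of the previous one by 2x2 block averaging, so coarse levels can be displayed or navigated without touching the full-resolution data (a non-limiting pure-Python illustration; real R-Sets also tile each level spatially):

```python
def build_rset(image, levels):
    """Return a list of progressively half-resolution copies of a 2D
    image (2x2 block averaging), coarsest level last."""
    rset = [image]
    for _ in range(levels):
        prev = rset[-1]
        half = [[(prev[r][c] + prev[r][c + 1]
                  + prev[r + 1][c] + prev[r + 1][c + 1]) // 4
                 for c in range(0, len(prev[0]) - 1, 2)]
                for r in range(0, len(prev) - 1, 2)]
        rset.append(half)
    return rset

pyramid = build_rset([[0, 2], [4, 6]], 1)
# → [[[0, 2], [4, 6]], [[3]]]  (the 2x2 block averages to 3)
```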
- an imaging system as defined in any of the above is disclosed, wherein the system further comprises a graphics processing unit (GPU), configured to perform at least one of the operations.
- an imaging system as defined in any of the above wherein at least a portion of the image processing is performed utilizing cloud computing.
- an imaging system as described in any of the above is disclosed, wherein the system is configured to generate at least one 2D image sliced in any plane defined by the user.
- an imaging system as described in any of the above wherein the system is configured to generate the 3D visualization by means of volume rendering, surface based rendering, or any combination of both.
- an imaging system as defined in any of the above is disclosed, wherein the set of instructions additionally comprises sending at least one 3D image to at least one recipient, or presenting the image on at least one display device (e.g. a screen, a printout). Additionally or alternatively, the system is configured to provide an image format enabling export of the generated image.
- the recipient can be such as a computer, a PDA, a mobile phone, a laptop, a monitor, a screen, an e-mail, an SMS, an MMS, an operating device, a printer, a 3D printer, an imaging device, a manufacturing machine, a medical analysis software, a display device, etc.
- an imaging system as defined in any of the above is disclosed, wherein the processor is configured to generate the image comprising one or more layers; further wherein each layer comprises different image parameters of the image data. Additionally or alternatively, the user can choose to export one or more layers, or further manipulate the final image by image processing tools in each layer or layer combination.
- an imaging system as defined in any of the above wherein the system is configured to generate one or more anatomical section images defined by the user.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate an image comprising at least one feature extraction.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to provide a 3D image of at least one of the following: an internal view of the patient, an external view of the patient, and any combination thereof.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one imported image property.
- an imaging system as defined in any of the above wherein the image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of subject present in image, and any combination thereof .
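Generating an animation along a timeline defined by an imported image property, as described above, amounts to ordering the image records by that property (a non-limiting pure-Python sketch; the record structure and property name are illustrative only):

```python
def order_for_animation(images, prop="generation_time"):
    """Order image records (dicts) along a timeline defined by an
    imported image property, to serve as animation frames."""
    return sorted(images, key=lambda im: im[prop])

frames = order_for_animation([
    {"name": "post-contrast", "generation_time": 2},
    {"name": "baseline",      "generation_time": 1},
])
# The baseline image becomes the first animation frame.
```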
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a database comprising differences between at least a portion of the phantom model and the imaged subject.
- An imaging system for presentation of at least one 3D image of an internal organ of a (e.g., small) mammalian subject, comprising: an input module (103) configured to receive image data (104, 105) of the subject from more than one imaging method. Additionally or alternatively, the imaging data is directly streamed from at least one IMD (108, 109).
- the system further comprises a 3D module (106) comprising phantom 3D model data of rendered internal organs of the subject; and a processor (102) in communication with a computer readable medium (CRM) (101) for executing a set of operations received from the CRM; the set of operations comprising: (i) importing the subject image data (104, 105) from more than one imaging method, and phantom model 3D data from the 3D module (106) by means of the input module (103); (ii) fitting the phantom model 3D data to the subject image data (104, 105) to provide mapping of the subject features; (iii) generating image parameters with reference to the subject image data mapping; and, (iv) processing and rendering at least a portion of the subject image data according to the image parameters, coinciding with the location of the organ of interest.
- the fitting is accomplished by a dedicated mapping/fitting module (111).
- the system is configured to generate the image parameters by combining image data (104, 105) from more than one image acquiring method. Additionally or alternatively, the system further comprises an image processing module (112) for combining the image data from more than one source and generating image parameters.
- the system further comprises a GPU (107), configured for carrying out at least a portion of the image processing.
- the system further comprises an export module (110), configured to generate an export format fitted for displaying, transmitting or sending to at least one recipient.
- Fig. 2 schematically illustrates a diagram of an embodiment of the present invention.
- This diagram presents an embodiment of an exemplary set of instructions stored in the CRM to be executed by at least one processor.
- the input module imports more than one image data of the subject (210), the data originating from at least two different imaging methods. Additionally or alternatively, the images are from different times, different subject temperatures, different physiological conditions, etc. Additionally or alternatively, the data can be obtained directly from at least one IMD. Further, the system imports 3D phantom image data (220) from a 3D module.
- the next instruction (230) includes fitting of the image data into the 3D phantom model by, for example, finding recognition anchors, scaling, aligning, moving, and the like. This provides mapping of the acquired data according to known anatomy details of the phantom model. This further allows stating the location of, and tagging, at least a portion of the imported subject image information.
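The scaling/aligning part of this fitting step can be sketched as a least-squares fit of a per-axis scale and offset that maps phantom-model anchor coordinates onto the corresponding anchors detected in the subject image (a non-limiting pure-Python illustration; the function name is hypothetical and real fitting would use full 2D/3D transforms):

```python
def fit_axis(model_coords, subject_coords):
    """Least-squares estimate of (scale, offset) so that
    subject ≈ scale * model + offset along one axis,
    from paired landmark coordinates."""
    n = len(model_coords)
    mx = sum(model_coords) / n
    sx = sum(subject_coords) / n
    var = sum((m - mx) ** 2 for m in model_coords)
    cov = sum((m - mx) * (s - sx)
              for m, s in zip(model_coords, subject_coords))
    scale = cov / var
    offset = sx - scale * mx
    return scale, offset

# Phantom anchors at 0 and 10 map to subject anchors at 5 and 25:
scale, offset = fit_axis([0, 10], [5, 25])  # → (2.0, 5.0)
```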
- the next instruction (240) comprises generating image parameters with reference to subject image data mapping by combining the data from at least two data sets received. This stage employs applying image processing techniques in the joining of the datasets. Additionally or alternatively, the system can extract, unite, select, and/or apply any Boolean operations between the different data sets.
- the system can score the data on its mapping, its tagging, and an assurance measure determining the similarity of the information between the data sets, and apply an algorithm dedicated to choosing the most probable correct information.
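One simple way to realize such a choice of "the most probable correct information" is to combine the mapping score and the assurance score into a quality score and keep the best-scoring candidate (a non-limiting pure-Python sketch; the weighting and candidate structure are assumptions for illustration):

```python
def quality_score(mapping, assurance, w_map=0.5):
    """Illustrative quality score: a weighted combination of the
    mapping score and the assurance score (weight is an assumption)."""
    return w_map * mapping + (1 - w_map) * assurance

def pick_best(candidates):
    """Choose the data-set portion with the highest quality score.
    Each candidate is (name, mapping_score, assurance_score)."""
    return max(candidates, key=lambda c: quality_score(c[1], c[2]))[0]

best = pick_best([("MRI", 0.9, 0.8), ("PET", 0.6, 0.7)])  # → "MRI"
```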
- Analyzing the data sets can be done with the following: overlaying images means, subtracting images means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, image segmentation means, similarity recognition means, vector rendering means, feature extraction means, and others as known in the art of image processing.
- the following instruction (250) includes generating at least one 3D image of at least a portion of the image data of the subject, by rendering an image. This is accomplished by incorporating generated image parameters into the already available 3D phantom model.
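Incorporating the generated image parameters into the already available phantom model, as in instruction (250), can be sketched as replacing phantom voxels with measured subject voxels wherever the mapped region provides data (a non-limiting pure-Python illustration on a 2D slice; `None` standing for "no measurement" is an assumption of this example):

```python
def incorporate(phantom, region_data):
    """Overwrite phantom-model values with measured subject values
    wherever the mapped region provides data (None = no measurement)."""
    return [[subj if subj is not None else ph
             for ph, subj in zip(prow, srow)]
            for prow, srow in zip(phantom, region_data)]

merged = incorporate([[1, 1], [1, 1]],
                     [[None, 5], [6, None]])
# → [[1, 5], [6, 1]]: measured voxels replace the generic model.
```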
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate at least a portion of the 3D image by at least one of the following: overlaying images means, subtracting images means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, similarity recognition means, vector rendering means, feature extraction means.
- the next instruction (260) includes exporting the image to at least one recipient and/or displaying the image.
- a method (300) for 3D imaging an internal organ of a (e.g., small) mammalian subject including the following steps: obtaining (310) an imaging system comprising: (i) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject, from a second imaging method; (ii) a 3D generic module comprising phantom model data of rendered internal organs of the subject; (iii) a processor in communication with a computer readable medium (CRM), for executing a set of operations received from the CRM.
- the set of operations includes importing (320) the subject image data from more than one imaging method, and the phantom model 3D data, by means of the input module. Next, fitting (330) the phantom model 3D data to the subject image data to provide mapping of the subject features. Then, generating image parameters (340) with reference to the subject image data mapping.
- the method includes generating the image parameters by combining (370) image data from more than one imaging method.
- the next instruction is processing and rendering (350) at least a portion of the subject image data according to the image parameters, coinciding with the location of the organ of interest, to form at least one 3D image.
- the next step (360) is executing at least the above mentioned set of operations.
- the processor is configured to generate the image parameters by integrating the first image data (104) and at least the second image data (105), and to process at least one 3D image by incorporating at least one generated image parameter into the 3D phantom model data.
- a method as defined in any of the above additionally comprising the step of generating at least one image parameter by one or more of the following: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, contrast enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, and detection of brightness level.
- a method as defined in any of the above is disclosed, additionally comprising the step of providing at least one imported image data comprising volumetric image data.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating the image parameters by combining image data from more than one imported image data.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating an assurance score, for assessing the image data content in view of at least two image sources.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating a mapping score for assessing the location of the image data in reference to the 3D phantom model.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating a quality score for assessing the combination of the mapping score and the assurance score.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating the 3D image comprising image parameters above a defined threshold score.
- a method as defined in any of the above is disclosed, additionally comprising the step of importing at least one image data comprising whole body image data.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating at least one 3D image of a user defined area.
- a method as defined in any of the above is disclosed, additionally comprising the step of identifying a specific organ in the imported image data in reference to the phantom model data.
- a method as defined in any of the above is disclosed, additionally comprising the step of sending at least one 3D image to at least one recipient.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating the image comprising one or more layers; each layer comprising different image parameters of the image data.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating the image parameters adapted to the subject developmental stage.
- a method as defined in any of the above is disclosed, additionally comprising the step of adjusting image parameters according to one selected from a group consisting of: subject species, subject age, subject physiological condition, organ type, image data source, and any combination thereof.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating one or more anatomical section images defined by the user.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating a 3D image comprising at least one feature extraction.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating a 3D image of at least one of the following: an internal view of the patient, an external view of the patient, and any combination thereof.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating at least one animation comprising image data parameters of more than one image along a timeline defined by at least one imported image property.
- a method as defined in any of the above is disclosed, additionally comprising the step of selecting the image property from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of subject present in image, and any combination thereof.
- a method as defined in any of the above is disclosed, additionally comprising the step of generating a database comprising the differences between at least a portion of the phantom model and the imaged subject.
- a method as defined in any of the above is disclosed, additionally comprising the step of importing the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
- a method as defined in any of the above is disclosed, additionally comprising the step of modifying the phantom model parameters according to at least one generated image parameter.
- a method as defined in any of the above additionally comprising the step of generating at least a portion of the 3D image by at least one of the following: overlaying images means, subtracting images means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, similarity recognition means, vector rendering means, feature extraction means.
- a method as defined in any of the above additionally comprising the step of importing the image data selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
- a method as defined in any of the above is disclosed, additionally comprising the step of processing at least a portion of the 3D image utilizing cloud computing.
- a method as defined in any of the above is disclosed, additionally comprising the step of obtaining the system further comprising a graphics processing unit (GPU), and performing at least one of the operations by the GPU.
- An imaging system (175) for presentation of at least one 3D image of an internal organ of a human or other mammalian subject, comprising: (a) a 3D generic module (106) comprising phantom 3D model data of rendered internal organs of the subject; (b) an input module (103) configured to receive image data (104, 105) of the subject from more than one imaging method; (c) a fitting module (111) configured to provide mapping of at least a portion of the image data by matching the image data with the 3D generic module data; (d) a scoring module (115) configured to score the image data by evaluating and comparing the image data received from more than one subject image data, and associate each image data portion with at least one score; (e) a processing module (112) configured to generate image parameters for generating a 3D image by selecting image data with a defined score and rendering the 3D image; wherein the processing module (112) is configured to integrate the image data from the first image data and at least the second image data.
- the subject image data can be received following processing of raw data specific to each imaging method.
- the system is configured to receive data in several levels of processing.
- an imaging system as defined in any of the above is disclosed, wherein the system further comprises at least one imaging device (IMD) configured to forward at least one subject image data to the input module.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate at least one image parameter by one or more of the following: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, user interactive segmentation, contrast enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, aligning, rotating, transforming the angle of the image, skewing, and detection of brightness level.
- an imaging system as defined in any of the above wherein at least one image data received is 3D volumetric image data.
- an imaging system as defined in any of the above is disclosed, wherein the input module is configured to receive the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
- the received image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate the image parameters by combining image data from more than one image data.
- an imaging system as defined in any of the above wherein the scoring module is configured to generate an assurance score, for assessing the image data content by analyzing at least two image data sources.
- an imaging system as defined in any of the above wherein the scoring module is configured to generate a mapping score for assessing the location of the image data in reference to the 3D phantom model data.
- an imaging system as defined in any of the above wherein the scoring module is configured to generate a quality score for assessing the combination of the mapping score and the assurance score.
- an imaging system as defined in any of the above is disclosed, wherein the scoring module is configured to generate a score for each image parameter generated by the processing module, by evaluating the score of the relevant image data portion and the 3D phantom model data.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to associate at least one score to each image parameter.
- an imaging system as defined in any of the above is disclosed, wherein the system further comprises an export module, configured to export the generated image and/ or the image data parameters, to at least one recipient.
- an imaging system as defined in any of the above is disclosed, wherein the recipient is selected from a group consisting of: at least one display device, an e-mail, a digital transmission, a printer, a 3D printer, a computer, an imaging device, a medical analysis software, a mobile phone, and any combination thereof.
- an imaging system as defined in any of the above is disclosed, wherein the input module is configured to receive image data comprising at least a portion of the subject, whole body image data, or both.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate at least one 3D image of a user defined area.
- an imaging system as defined in any of the above wherein the fitting module is configured to identify a specific organ in the imported image data in reference to the phantom model data.
- an imaging system as defined in any of the above wherein the fitting module is configured to adjust the image data mapping according to one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
- an imaging system as defined in any of the above wherein the fitting module is configured to map at least one image data portion by one or more of the following: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, user interactive segmentation, contrast enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, aligning, scaling, rotating, transforming the angle of the image, skewing, and detection of brightness level.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate one or more anatomical section images defined by the user.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate an image comprising at least one subject feature extraction.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate the image comprising one or more layers; further wherein each layer comprises different image parameters of the image data.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to adjust image parameters according to one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to provide a 3D image selected from a group consisting of: an internal view of at least a portion of the patient, an external view of at least a portion of the patient, and any combination thereof.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one image property.
- the image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of the subject present in the image, and any combination thereof.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a database comprising differences between at least a portion of the phantom model data and the imaged subject.
- an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to modify the phantom model parameters according to at least one generated image parameter.
- an imaging system as defined in any of the above is disclosed, wherein the system further comprises a graphics processing unit (GPU), configured to perform at least a portion of one selected from a group consisting of: the mapping, the image processing, the image rendering, and any combination thereof.
- an imaging system as defined in any of the above is disclosed, wherein the system is configured to perform at least a portion of one selected from a group consisting of: the mapping, the image processing, the image rendering, and any combination thereof, utilizing cloud computing.
Abstract
The present invention provides an imaging system, comprising: a. an input module configured to receive a first and a second image data of said subject; b. a 3D generic module comprising phantom 3D model data of rendered internal organs of said subject; and, c. a processor in communication with a Computer Readable Medium, for executing a set of operations, comprising: i. importing said subject image data from more than one imaging method, and said phantom model 3D data; ii. fitting said subject image data to said phantom model 3D data to provide mapping of said subject image data features; iii. generating image parameters with reference to said subject image data mapping; and, iv. processing and rendering at least a portion of said subject image data according to said image parameters, coinciding with the location of said organ of interest, into at least one 3D image.
Description
IMAGING SYSTEM OF A MAMMAL
FIELD OF THE INVENTION
The present invention generally relates to the field of imaging systems, and more particularly, to image processing systems, for generating 3D images of internal features of humans and small mammals.
BACKGROUND OF THE INVENTION
The main objective of medical image processing is to facilitate the gathering of, or directly provide, diagnostic information from non-processed images. In general, the processing of digitally acquired images is aimed at improving pictorial information for human interpretation and/or processing data for autonomous machine perception.
Imaging techniques such as MRI utilize non-destructive imaging modalities that derive contrast from differences in physiological content and chemical environment, permitting the imaging of internal structures.
While the use of MRI technology, for example, has increasingly become the standard for pre-clinical diagnosis and research, the need for expertise in the interpretation of the acquired images is growing. Knowledge of the basic physical principles behind MRI is essential for correct image interpretation, and expertise, experience, and knowledge in radiology are needed to provide a diagnosis. This may make the process of diagnosis lengthy and cumbersome, all the more so when a medical emergency is in need of diagnosis.
Different imaging techniques offer different levels of information and require different levels of specialization. While some depict only anatomy, others mix anatomy with metabolic information. Combining the different types of information into a coherent and accurate diagnosis is typically done manually and mentally by medical and research staff.
The research field has long utilized imaging technology to assess the physiological changes of research subjects and specimens. In this field, research associates often have no prior experience in physiological and anatomical image interpretation, such as radiology, or in the practice of translating the received images into a diagnosis. In addition, many research subjects are operated on during various stages of research to investigate physiological changes of the inner organs not otherwise visible. Consequently, a great number of research subjects must be operated on in order to examine different stages of physiological change along the experiment timeline.
Known in the art are many examples of 3D imaging models, for example of murine subjects. One example is the Mouse Atlas (http://mouseatlas.caltech.edu), which provides a μMRI Atlas of Mouse Development in 3D digital imagery. This is a collection of pre-made, pre-interpreted MRI scans intended to facilitate the correct interpretation of MRI scanning of a mouse at different stages of development. In order to correctly deduce a diagnosis, one has to maintain the exact conditions under which the samples in the pre-made images were taken. Further, the software provides a specific murine model and does not incorporate real-time imagery of the patient or research subject.
Another example is a 3D rat anatomy software package, provided by Biosphera (http://www.biosphera.com.br/e-rat-anatomy.asp). This example provides a tool for learning about rat anatomy and the 3D location of the internal organs. All comparisons, deductions, and diagnoses of MRI scans are done manually by the user.
Also known in the art is 3D image processing software that can generate 3D images from scans such as MRI, for example Mimics® provided by "Materialise" for medical image processing. Mimics® can be used for the segmentation of 3D medical images, resulting in 3D models of patient anatomy. These programs provide a 3D model based on a single image acquiring method for each model. In addition, these software programs usually require high computational resources, which translates into high-cost computers and specialized graphics cards.
Thus, there is a long-felt need for a cost-effective system and method that will enable quick and accurate diagnosis of medical, research, and preclinical images of different origins, thereby providing a more precise tool for surveying internal organs, promoting rapid patient diagnosis and easy assessment of research results without unnecessary operation on research subjects.
SUMMARY OF THE INVENTION
The present invention provides an imaging system (100), for presentation of at least one 3D image of an internal organ of a mammalian subject, comprising: (a) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject, from a second imaging method;(b) a 3D generic module comprising phantom 3D model data (106) of rendered internal organs of the subject; and, (c) a processor (102) in communication with a Computer Readable Medium, CRM, (101), for executing a set of operations received from the CRM; the set of operations comprising: (i) importing the subject image data from more than one imaging method, and the phantom model 3D data, by means of the input module; (ii) fitting the subject image data to the phantom model 3D data to provide mapping of the subject image data features; (iii) generating image parameters with reference to the subject image data mapping; and, (iv) processing and rendering at least a portion of the subject image data according to the image parameters, coinciding with the location of the organ of interest, into at least one 3D image; wherein the processor is configured to generate the image parameters by integrating image data from the first image data (104) and at least the second image data (105), and process the 3D image by incorporating at least one generated image parameter into the 3D phantom model data.
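By way of a non-limiting, hypothetical sketch (not the claimed implementation), the fitting of step (ii) can be thought of as estimating a transform that maps subject feature points onto matched phantom model points; the least-squares scale-and-translation model and all function names below are illustrative assumptions only.

```python
import numpy as np

def fit_to_phantom(subject_points, phantom_points):
    """Illustrative fitting step: estimate a uniform scale and a
    translation that map subject feature points onto matched phantom
    model points by least squares. (Assumed model, not the patent's.)"""
    s_mean = subject_points.mean(axis=0)
    p_mean = phantom_points.mean(axis=0)
    s_c = subject_points - s_mean          # centered subject points
    p_c = phantom_points - p_mean          # centered phantom points
    scale = float(np.sum(p_c * s_c) / np.sum(s_c * s_c))
    translation = p_mean - scale * s_mean
    return scale, translation

def map_points(points, scale, translation):
    """Apply the fitted transform, mapping subject coordinates into the
    phantom model's coordinate frame (the 'mapping' of step ii)."""
    return scale * points + translation
```

A real system would use a fuller 3D registration (e.g. affine or deformable), but the pattern is the same: fit once against the phantom model, then map every image feature through the estimated transform before generating image parameters.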
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises at least one imaging device (IMD) configured to forward at least one subject image data to the input module.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein at least one imported image data provides volumetric image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein integrating image data from the first image data and at least the second image data comprises at least one selected from a group consisting of: overlaying at least a portion of the images, subtracting at least a portion of the images, incorporating at least a portion of one image in at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of the image by image processing means, applying image processing on at least a portion of the image, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processor is configured to fit at least one image data to the 3D phantom model by applying at least one selected from a group consisting of: overlaying at least a portion of the images, subtracting at least a portion of the images, incorporating at least a portion of one image in at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of the image by image processing means, applying image processing on at least a portion of the image, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of the image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of the image, changing color level, identifying foreground objects and background locations, changing brightness level, control-point image processing, and generating an R-set of image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processor is further configured to perform control point processing selected from a group consisting of: manual, automatic, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the input module is configured to provide image analysis of the first image data and at least the second image data by image processing means.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the image analysis comprises detection of at least one of: shape contours, foreground shapes, background shapes, textures, color levels, brightness, noise, saturation, resolution, channels, patterns, and similarities.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processor is configured to generate at least one reduced-resolution data set (R-Set) of the image data, which divides an image into spatial tiles and resamples the image at different resolution levels without loading a large image entirely into the CRM.
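A minimal sketch of such an R-Set, assuming a simple power-of-two resolution pyramid with stride-based downsampling; the tiling scheme, the function name, and the in-memory representation are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def build_rset(image, tile_size=64, levels=3):
    """Build a reduced-resolution data set (R-Set): each level halves the
    resolution, and each level is stored as spatial tiles keyed by
    (row, col) so a viewer can load only the tiles it needs."""
    rset = []
    level_img = image
    for _ in range(levels):
        tiles = {}
        h, w = level_img.shape[:2]
        for y in range(0, h, tile_size):
            for x in range(0, w, tile_size):
                tiles[(y // tile_size, x // tile_size)] = \
                    level_img[y:y + tile_size, x:x + tile_size]
        rset.append(tiles)
        level_img = level_img[::2, ::2]  # naive 2x downsample per level
    return rset
```

A viewer can then fetch, say, the single coarse tile of the deepest level for an overview, loading finer tiles only for the region the user zooms into, so the full-resolution image never has to reside entirely in memory at once.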
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate the image parameters by combining image data from more than one imported image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate an assurance score, for assessing the image data content by analyzing at least two image data sources.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate a mapping score for assessing the location of the image data in reference to the 3D phantom model data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate a quality score for assessing the combination of the mapping score and the assurance score.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to associate at least one score to each image parameter.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate the 3D image based on image parameters meeting a defined threshold score.
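As a hypothetical illustration of how the assurance, mapping, and quality scores might interact with a threshold (the weighted-average combination and the record layout below are assumptions; the disclosure does not fix a formula):

```python
def quality_score(mapping_score, assurance_score, w_map=0.5):
    """Combine the mapping score and the assurance score into a single
    quality score. A weighted average is an illustrative choice only."""
    return w_map * mapping_score + (1.0 - w_map) * assurance_score

def select_parameters(params, threshold):
    """Keep only image parameters whose quality score meets the
    threshold; only these contribute to the rendered 3D image."""
    return [p for p in params
            if quality_score(p["mapping"], p["assurance"]) >= threshold]
```

Under this sketch, an image parameter that maps well onto the phantom model but is contradicted by the second image data source (low assurance) would fall below the threshold and be excluded from rendering.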
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises a scoring module in communication with the processor, configured to generate at least one score selected
from a group consisting of: the assurance score, the mapping score, the quality score, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises a fitting module, in communication with the processor, configured to provide mapping of the image data in reference to the 3D phantom model data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises an image processing module in connection with the processor configured to process the subject image data and render at least one 3D image.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises an export module, in communication with the processor, configured to export the generated image and/or the image data parameters to at least one recipient.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to import whole body image data, partial body image data, or both.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate at least one 3D image of a user defined area.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to identify a specific organ in the imported image data in reference to the phantom model data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate one or more anatomical section images defined by the user.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate an image comprising at least one subject feature extraction.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processor is configured to generate the image comprising one or more layers; further wherein each layer comprises different image parameters of the image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate adjusted image parameters according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to adjust the mapping according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to provide a 3D image selected from a group consisting of: an internal view of at least a portion of the patient, an external view of at least a portion of the patient, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one imported image property.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of the subject present in the image, and any combination thereof.
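A minimal sketch of the timeline ordering behind such an animation, assuming each imported image carries its properties as a record and using generation time as the ordering property (the record keys and function name are illustrative assumptions):

```python
def order_for_animation(images, prop="generation_time"):
    """Order imported image records along a timeline defined by a chosen
    image property (here, generation time), yielding the frame sequence
    for an animation of the subject over time."""
    return sorted(images, key=lambda img: img[prop])
```

Any of the listed properties (data source, import time, image quality, and so on) could serve as the sort key, letting the same mechanism animate, for example, a subject's organ of interest across an experiment timeline.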
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate a database comprising differences between at least a portion of the phantom model data and the imaged subject.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to import the image data in a
form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to modify the phantom model parameters according to at least one generated image parameter.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate at least a portion of the 3D image by at least one of the following: overlaying images means, subtracting images means, contour enhancing means, surface rendering means, volume rendering means, contrast enhancing means, vector rendering means, feature extraction means.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the imported image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises a graphics processing unit (GPU), configured to perform at least a portion of a selected from a group consisting of: the mapping, the image processing, the image rendering, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to perform at least a portion of at least one selected from a group consisting of: the mapping, the image processing, the image rendering, and any combination thereof, utilizing cloud computing.
The present invention provides a method for 3D imaging an internal organ of a mammalian subject, comprising the steps of: (a) obtaining an imaging system (100) comprising: (i) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject, from a second imaging method; (ii) a 3D generic module comprising phantom model data of rendered internal organs of the subject; and, (iii) a processor (102) in communication with a Computer Readable Medium, CRM, (101), for executing a set of operations received from the CRM; the set of operations comprising: (1) importing the subject image data from more than one imaging
method, and the phantom model 3D data, by means of the input module; (2) fitting the subject image data to the phantom model 3D data to provide mapping of the subject image data features; (3) generating image parameters with reference to the subject image data mapping; and, (4) processing and rendering at least a portion of the subject image data according to the image parameters, coinciding with the location of the organ of interest, into at least one 3D image; and, (b) executing the set of operations, wherein, in the step of generating the image parameters, the processor is configured to generate the image parameters by integrating the first image data (104) and at least the second image data (105), and to process at least one 3D image by incorporating at least one generated image parameter into the 3D phantom model data.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating
It is another object of the current invention to disclose the method as defined in any of the above, wherein the system further comprises at least one imaging device (IMD) configured to forward at least one subject image data to the input module.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of providing at least one imported image data comprising volumetric image data.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of integrating image data from the first image data and at least the second image data, comprising at least one selected from a group consisting of: overlaying at least a portion of the images, subtracting at least a portion of the images, incorporating at least a portion of one image in at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of the image by image processing means, applying image processing on at least a portion of the image, and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of the processor fitting at least one image data to the 3D phantom model by applying at least one selected from a group consisting of: overlaying at least a portion of the images, subtracting at least a portion
of the images, incorporating at least a portion of one image in at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of the image by image processing means, applying image processing on at least a portion of the image, and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of the input device providing image analysis of the first image data and at least the second image data by image processing means.
It is another object of the current invention to disclose the method as defined in any of the above, wherein the image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of the image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of the image, changing color level, identifying foreground objects and background locations, changing brightness level, control-point image processing, and generating an R-set of image data.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of the processor performing control point processing, selected from a group consisting of: manual, automatic and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, wherein the image analysis comprises detection of at least one of: shape contours, foreground shapes, background shapes, textures, color levels, brightness, noise, saturation, resolution, channels, patterns, and similarities.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of the processor generating at least one reduced-resolution data set (R-Set) of the image data, thereby dividing an image into
spatial tiles and resampling the image at different resolution levels without loading a large image entirely into the CRM.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating the image parameters by combining image data from more than one imported image data.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating an assurance score, for assessing the image data content in view of at least two image sources.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating a mapping score for assessing the location of the image data in reference to the 3D phantom model.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating a quality score for assessing the combination of the mapping score and the assurance score.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating the 3D image comprising image parameters above a defined threshold score.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of providing the system further comprising an export module, and exporting at least one generated 3D image by means of the export module.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of importing at least one image data comprising whole body image data, partial body image data, or both.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one 3D image of a user defined area.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating one or more anatomical section images defined by the user.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating a 3D image comprising at least one feature extraction.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating a 3D image of at least one of the following: an internal view of the patient, an external view of the patient, and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of identifying a specific organ in the imported image data in reference to the phantom model data.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating the image comprising one or more layers; further wherein each of the layers comprises different image parameters of the image data.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating the image parameters adapted to the subject developmental stage.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of adjusting image parameters according to at least one selected from a group consisting of: subject species, subject age, subject physiological condition, organ type, image data source, and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one animation comprising image data parameters of more than one image along a timeline defined by at least one imported image property.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of selecting the image property from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of subject present in image, and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating a database comprising the differences between at least a portion of the phantom model and the imaged subject.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of importing the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of modifying the phantom model parameters according to at least one generated image parameter.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least a portion of the 3D image by at least one of the following: overlaying images means, subtracting images means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, similarity recognition means, vector rendering means, feature extraction means.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of importing the image data selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of obtaining the system further comprising a graphics processing unit (GPU), and performing at least one selected from a group consisting of: image processing, mapping, rendering and any combination thereof, at least partially by the GPU.
It is another object of the current invention to disclose the method as defined in any of the above, additionally comprising the step of performing at least one selected from a group consisting of: image processing, rendering, mapping, and any combination thereof, at least partially by utilizing cloud computing.
The present invention provides an imaging system, for presentation of at least one 3D image of an internal organ of a mammalian subject, comprising: (a) a 3D generic module comprising phantom 3D model data of rendered internal organs of the
subject; (b) an input module configured to receive a first image data from a first imaging method, and at least a second image data from at least a second imaging method; (c) a fitting module configured to provide mapping of at least a portion of the image data by matching the image data with the 3D generic module data; (d) a scoring module configured to score the image data by evaluating and comparing the first image data and at least the second image data, and to associate with each image data portion at least one score; and, (e) a processing module configured to generate image parameters for generating a 3D image by selecting image data meeting a defined threshold score and rendering the 3D image; wherein the processing module is configured to integrate the image data from the first image data and at least the second image data to generate the image parameters, and to generate the 3D image by incorporating the image parameters into the 3D phantom model.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises at least one imaging device (IMD) configured to forward at least one subject image data to the input module.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein at least one imported image data provides volumetric image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein integrating image data from the first image data and at least the second image data comprises at least one selected from a group consisting of: overlaying at least a portion of the images, subtracting at least a portion of the images, incorporating at least a portion of one image in at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of the image by image processing means, applying image processing on at least a portion of the image, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the fitting module is configured to fit at least one image data to the 3D phantom model by applying at least one selected from a group consisting of: overlaying at least a portion of the images, subtracting at least a portion of the images, incorporating at least a portion of one image in at least one second
image, connecting at least a portion of two images, performing image analysis of at least a portion of the image by image processing means, applying image processing on at least a portion of the image, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the fitting module and/or processing module is further configured to perform control-point processing in a manner selected from a group consisting of: manual, automatic, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the input module is configured to provide image analysis of the first image data and the at least second image data by image processing means.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of the image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of the image, changing color level, identifying foreground objects and background locations, changing brightness level, control-point image processing, and generating an R-set of image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the image analysis comprises detection of at least one of: shape contours, foreground shapes, background shapes, textures, color levels, brightness, noise, saturation, resolution, channels, patterns, and similarities.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processor is configured to generate at least one reduced-resolution data set (R-Set) of the image data, which divides an image into spatial tiles and resamples the image at different resolution levels without loading a large image entirely into the CRM.
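By way of a non-limiting illustration, an R-Set of this kind can be sketched as follows (a minimal Python/NumPy sketch; the function name `build_rset`, the tile size, and the 2x2 block-averaging downsampling are illustrative assumptions, not the claimed implementation):

```python
import numpy as np

def build_rset(image, levels=3, tile=64):
    """Build a reduced-resolution set (R-Set): each level halves the
    resolution; each level is stored as a dict of spatial tiles so a
    viewer can load only the tiles it needs, not the whole image."""
    rset = []
    current = image.astype(np.float32)
    for _ in range(levels):
        h, w = current.shape
        tiles = {}
        for r in range(0, h, tile):
            for c in range(0, w, tile):
                tiles[(r, c)] = current[r:r + tile, c:c + tile]
        rset.append(tiles)
        # halve resolution by 2x2 block averaging for the next level
        current = current[: h - h % 2, : w - w % 2]
        current = current.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return rset

# usage: fetch one tile of the coarsest level without touching the rest
img = np.arange(256 * 256, dtype=np.float32).reshape(256, 256)
rset = build_rset(img, levels=3, tile=64)
coarse_tile = rset[2][(0, 0)]   # level 2 is 64x64, so one tile covers it
```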
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the input module is configured to receive the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the received image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate the image parameters by combining image data from more than one image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the scoring module is configured to generate an assurance score, for assessing the image data content by analyzing at least two image data sources.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the scoring module is configured to generate a mapping score for assessing the location of the image data in reference to the 3D phantom model data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the scoring module is configured to generate a quality score for assessing the combination of the mapping score and the assurance score.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the scoring module is configured to generate a score for each image parameter generated by the processing module, by evaluating the score of the relevant image data portion and the 3D phantom model data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to associate at least one score to each image parameter.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises an export module, configured to export the generated image and/or the image data parameters to at least one recipient.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the recipient is selected from a group consisting of: at least one display device, an e-mail, a digital transmission, a printer, a 3D printer, a computer, an imaging device, a medical analysis software, a mobile phone, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the input module is configured to receive image data comprising at least a portion of the subject, whole body image data, or both.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to generate at least one 3D image of a user defined area.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the fitting module is configured to identify a specific organ in the imported image data in reference to the phantom model data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the fitting module is configured to adjust the image data mapping according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to generate one or more anatomical section images defined by the user.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to generate an image comprising at least one subject feature extraction.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to generate the
image comprising one or more layers; further wherein each layer comprises different image parameters of the image data.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to generate adjusted image parameters according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to provide a 3D image selected from a group consisting of: an internal view of at least a portion of the patient, an external view of at least a portion of the patient, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one image property.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of the subject present in the image, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to generate a database comprising differences between at least a portion of the phantom model data and the imaged subject.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the processing module is configured to modify the phantom model parameters according to at least one generated image parameter.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system further comprises a graphics processing unit (GPU), configured to perform at least a portion of an operation selected from a group consisting
of: the mapping, the image processing, the image rendering, and any combination thereof.
It is another object of the current invention to disclose the imaging system as defined in any of the above, wherein the system is configured to perform at least a portion of an operation selected from a group consisting of: the mapping, the image processing, the image rendering, and any combination thereof, utilizing cloud computing.
BRIEF DESCRIPTION OF THE FIGURES
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured. In the accompanying drawing:
Fig. 1A is a schematic diagram of an embodiment of the present invention imaging system, for generation and presentation of at least one 3D image of an internal portion of a subject;
Fig. 1B is a schematic diagram of an embodiment of the present invention imaging system further comprising interconnecting modules;
Fig. 2 is a schematic diagram describing an example set of operations executed by the present invention;
Fig. 3 is a schematic diagram of an embodiment of a method for generating an image of at least a portion of a subject; and
Fig. 4 is a schematic diagram of an embodiment of the present invention imaging system, for generation and presentation of at least one 3D image of at least a portion of a subject.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured.
The essence of the present invention is to provide an image processing system that obtains more than one image (internal, external or both) of a subject, such as a human or small mammal, imported from an imaging device (IMD), where at least one first image is from at least one first imaging method and at least one second image is from a second imaging method. The system maps the image data by comparison to an existing 3D phantom model module. Further, the system generates image parameters by combining multiple data sources, and, by comparison and implementation of the data into an 'off-the-shelf' 3D anatomy imaging model, provides 3D visualization of the subject and/or specimen physiology and/or anatomy, allowing mapping and rendering of at least an internal portion or organ of the subject. The imaging system of the present invention will increase the accuracy of medical and research-oriented image interpretation, and ease the use of sophisticated imagery by inexperienced personnel. The present invention will reduce the number of research subjects needed in experiments, and will shorten the time for acquiring a diagnosis. The present invention also makes do with limited computer resources, as it enables extraction of specific information (a defined organ or region of interest) to be further processed and rendered.
The term 'Imaging Device' (IMD), specifically applies hereinafter to any device and/or any other analyzing and imaging instruments, providing image data of at least an internal portion of the subject in at least one image acquiring method, including, but not limited to: Magnetic Resonance Imaging (MRI) device, Nuclear Magnetic Resonance (NMR) spectroscope, Electron Spin Resonance (ESR) spectroscope, Nuclear Quadrupole Resonance (NQR), Laser Magnetic Resonance device, Rotational
Field Quantum Magnetic Resonance device (cyclotron), computerized tomography (CT) device, PET-CT, PET-MRI, bone densitometry device, ultrasound (US), 3D ultrasound, Doppler ultrasound imaging, X-ray device, Fluoroscopy device, any fluorescence device, Diffusion MRI, micro-CT, Confocal Microscopy, SPECT (Single-photon emission computed tomography) device, scintigraphy device, Magnetoencephalography device, Tactile imaging device, Photoacoustic imaging device, Thermography device, Functional near-infrared spectroscopy device, Diffuse optical tomography device, Electrical impedance tomography device, Optical coherence tomography, Scanning laser ophthalmoscopy, and others as known in the art of medical imaging. The IMD hereby disclosed is optionally a portable MRI device, such as the commercially available ASPECT-MR Ltd. devices, an imaging device especially adapted for murine analysis or any research subject or specimen, or a commercially available non-portable device.
At least two different imaging methods are selected from these non-limiting examples: Magnetic Resonance Imaging (MRI), Nuclear Magnetic Resonance (NMR), Electron Spin Resonance (ESR), Nuclear Quadrupole Resonance (NQR), Laser Magnetic Resonance, Rotational Field Quantum Magnetic Resonance, Computerized Tomography (CT/CAT scan), PET (Positron Emission Tomography) imaging, PET-CT imaging, PET-MRI imaging, bone densitometry imaging, ultrasound (US) imaging (sonogram), 3D ultrasound imaging, Doppler ultrasound imaging, X-ray imaging, X-ray computed tomography, Fluoroscopy, Diffusion MRI, micro-CT, Confocal Microscopy, Magnetic Resonance Angiography (MRA), functional Magnetic Resonance Imaging (fMRI), fluorescence imaging, SPECT (Single-photon emission computed tomography), scintigraphy, Magnetoencephalography, Tactile imaging, Photoacoustic imaging, Thermography, Optical coherence tomography, Scanning laser ophthalmoscopy, Functional near-infrared spectroscopy, Diffuse optical tomography, Electrical impedance tomography, and MR thermometry.
The term "about" refers hereinafter to 20% more or less than the defined value.
The term "mammal subject" interchangeably refers hereinafter to any human and non-human animal of the class Mammalia, a large class of warm-blooded vertebrates having mammary glands in the female, a thoracic diaphragm, and a four-chambered heart, including as non-limiting examples members of the order Lagomorpha (comprising rabbits and hares (family Leporidae) and the small rodent-like pikas (family Ochotonidae)) and the order Rodentia, including the murine genus (Mus) or its subfamily (Murinae), with these non-limiting examples: a rat, mouse, hamster, guinea pig, rabbit, hare and the like. Additionally or alternatively, the term includes any animal used in research, any genetically engineered or genetically modified animal, natural animal, especially bred animal, treated animal, or any animal portion. In another aspect of the invention the term subject further includes these non-limiting examples: amphibians, birds, fish, reptiles, and other small animals as known in the art.
The term "murine" interchangeably refers hereinafter to any animal relating to the murid genus (Mus) or its subfamily (Murinae), further including these non-limiting examples: rats, mice, rodents, laboratory murine, genetically designed murine, and the like.
The term "plurality" interchangeably refers hereinafter to an integer a, when a > 1.
The term "graphics processing unit" or "GPU" interchangeably refers hereinafter to any visual processing unit (VPU), a dedicated electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. As described in Wikipedia, GPUs utilize a highly parallel structure, making them more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel. A GPU can be embedded on a dedicated graphics card, on the motherboard, or as part of the processor itself.
The term "CPU", central processing unit, or "processor", interchangeably refers hereinafter to any hardware that carries out the instructions of a program by performing the basic arithmetical, logical, and input/output operations of the system. A computer can have more than one CPU; this is called multiprocessing. The processor can also be, for example, a microprocessor, a multi-core processor, a system on a chip (SoC), an array processor, a vector processor, and the like. The CPU/processor is typically connected to a memory unit (storage unit, a unit of a computer or an independent device designed to record, store, and reproduce information) configured to store and retrieve information in various forms (e.g. a database).
The term "Computer readable media", (CRM), interchangeably refers hereinafter to any medium, e.g., a non-transitory medium, capable of storing data in a format readable by a mechanical device (automated data medium rather than human readable). Examples of machine-readable media include magnetic media such as magnetic disks, cards, tapes, and drums, punched cards and paper tapes, optical disks, flash memories, barcodes and magnetic ink characters. Common machine-readable technologies include magnetic recording, processing waveforms, electronic memory encoding, and barcodes. Optical character recognition (OCR) can be used to enable machines to read information available to humans. Any information retrievable by any form of energy can be machine-readable.
The term "control-point image processing" refers herein to any method employing manual selection of at least one control point in order to label the point as an anchor for purposes such as feature labeling or extraction, alignment of at least one additional image, vector reconstruction using the point's coordinates, transforming the image from this point, or transforming the image whilst not moving or transforming this point. Another example is employing at least one control point in each of two images in order to align them.
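As a non-limiting illustration of the alignment use of control points, an affine transform can be estimated from manually selected point pairs by least squares (a Python/NumPy sketch; `affine_from_control_points` and the example points are hypothetical, not the claimed method):

```python
import numpy as np

def affine_from_control_points(src, dst):
    """Estimate a 2D affine transform A (2x3) mapping src -> dst from
    manually selected control-point pairs (least squares, >= 3 points)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                   # N x 3 homogeneous points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 solution
    return A.T                                   # 2 x 3 affine matrix

# usage: three anchor points picked in a moving image and a fixed image
src = [(0, 0), (1, 0), (0, 1)]
dst = [(10, 20), (12, 20), (10, 23)]    # translation plus scaling
A = affine_from_control_points(src, dst)
mapped = A @ np.array([1.0, 1.0, 1.0])  # maps (1, 1) into the fixed frame
```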
Reference is now made to Fig. 1A schematically illustrating, a diagram of one embodiment of the present invention. According to one embodiment of the present invention, an imaging system (100), for presentation of at least one 3D image of an internal organ of a (e.g., small) mammalian subject, comprising: (a) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject, from a second imaging method; (b) a 3D generic module comprising phantom 3D model data (106) of rendered internal organs of the subject; (c) a processor (102) in communication with a computer readable medium (CRM) (101), for executing a set of operations received from the CRM; the set of operations including: (i) importing the subject image data (104, 105) from more than one imaging method, and phantom model 3D data from the 3D module (106) by means of the input module (103); (ii) fitting the phantom model 3D data to the subject image data (104, 105) to provide mapping of the subject features; (iii) generating image parameters with reference to the subject image data mapping; and, (iv) processing and rendering at least a portion of the subject image data according to the image parameters, coinciding with the location of the organ of interest, into at least one 3D image. The system processor is configured to generate
the image parameters by integrating image data from the first image data (104) and at least the second image data (105), and process the 3D image by incorporating at least one generated image parameter into the 3D phantom model.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein the system imports whole body image data, or partial body image data. Additionally or alternatively, the system is configured to import the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof. Additionally or alternatively, the imported image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
Medical image processing usually includes mapping of data acquired by one imaging method. The subject volume is typically divided by a virtual matrix into voxels of a given size, into which image data information is allocated in order to generate a 3D image, or into pixels to generate a 2D image. To generate an image, the received signals from the imaging device must be interpreted as to their location and unique properties, digitized (converted into a binary sequence), and stored in a memory module (CRM). The three-dimensional placement of each signal is mapped to a voxel and translated to a visual form in what is known in the art as image reconstruction. This reconstruction procedure is based upon dedicated reconstruction algorithms, as known in the art, specific to the signal type and image acquiring method (e.g. SMASH, SENSE, PILS, and ASSET for MRI parallel imaging). It is further in the scope of the present invention that images can be produced point-by-point, line-by-line, in slices, or in slices calculated from a whole volume. Imagery can be formed from two-dimensional (2D) imaging methods, encoding only two spatial dimensions, or from volume techniques encoding three spatial dimensions.
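The voxel-allocation step described above can be illustrated by the following minimal sketch (Python/NumPy; `bin_signals_to_voxels` is a hypothetical helper that bins already-localized point signals, whereas real modalities such as MRI reconstruct from k-space data):

```python
import numpy as np

def bin_signals_to_voxels(coords_mm, values, origin_mm, voxel_mm, shape):
    """Accumulate localized signals into a voxel matrix: each signal's
    world coordinate (mm) is mapped to a voxel index and its value is
    summed there; signals falling outside the grid are discarded."""
    grid = np.zeros(shape, dtype=np.float32)
    idx = np.floor((coords_mm - origin_mm) / voxel_mm).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    for (i, j, k), v in zip(idx[inside], values[inside]):
        grid[i, j, k] += v
    return grid

# usage: two point signals binned into a 5x5x5 grid of 2 mm voxels
coords = np.array([[1.0, 1.0, 1.0], [9.5, 0.5, 0.5]])
vals = np.array([3.0, 5.0])
vol = bin_signals_to_voxels(coords, vals, origin_mm=np.zeros(3),
                            voxel_mm=np.array([2.0, 2.0, 2.0]),
                            shape=(5, 5, 5))
```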
It is known in the art that the standard deviation of the comparison between different tissues during imaging can be as high as about 30%, for example, in brain MRI. In addition, since many differences need to rely on pre-scanned reference data that are not always comparable or available, the difference between normal tissue and a similar tumor is not easily distinguishable. This can be detrimental when deciding on a diagnosis or in planning the size of a medical implant. The present invention provides a system combining different image data sets originating from different image acquiring methods, thereby providing additional data, improving the signal-to-noise ratio and giving a higher assurance of the resulting image. In principle, additional information may allow a smaller voxel size, leading to better resolution of the final image. In addition, utilizing more than one image acquiring method can result in fewer artifacts in the final image.
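The signal-to-noise benefit of combining several data sets can be illustrated numerically (a synthetic Python/NumPy sketch, not measured data; averaging N independent acquisitions suppresses noise by roughly the square root of N):

```python
import numpy as np

# Sketch: combining repeated/independent acquisitions of the same slice
# suppresses noise roughly by sqrt(N), illustrating why fusing data from
# several sources yields a higher-assurance image. Numbers are synthetic.
rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)            # ideal tissue signal
acqs = [truth + rng.normal(0, 10, truth.shape) for _ in range(16)]

single_noise = np.std(acqs[0] - truth)      # close to the injected sigma=10
fused = np.mean(acqs, axis=0)               # combine 16 acquisitions
fused_noise = np.std(fused - truth)         # roughly 10 / sqrt(16) = 2.5
```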
It is further in the scope of the present invention that the reconstruction procedure of an image is performed either following the integration of more than one image data, or prior to the integration of more than one image data.
It is further in the scope of the present invention that the reconstruction procedure of an image is performed by reconstructing only a portion of the image data defined by the user.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate the image parameters by combining image data from more than one imported image data.
In another aspect of the invention, the image data imported consists of partial information of the internal portion. The processor is configured to produce the 3D image from this partial information by completing the missing data, for example by connecting the data voxels and assessing their content according to an algorithm evaluating the information from the nearest voxels and the phantom model data of the specific mapping location. Additionally or alternatively, the newly completed voxels can be presented in a different color or form, and a report summarizing these locations and the analysis is generated.
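A simplistic stand-in for such a completion algorithm is sketched below (Python/NumPy; `fill_missing` fills each missing voxel from the mean of its known 6-neighbours and reports which voxels were synthesized; the phantom-model term of the described algorithm is omitted here):

```python
import numpy as np

def fill_missing(volume):
    """Complete missing voxels (NaN) from the mean of their known
    6-neighbours; returns the completed volume plus a mask of which
    voxels were synthesized, so they can be coloured differently
    and summarized in a report."""
    out = volume.copy()
    missing = np.isnan(out)
    for i, j, k in np.argwhere(missing):
        neigh = []
        for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            a, b, c = i + di, j + dj, k + dk
            if (0 <= a < out.shape[0] and 0 <= b < out.shape[1]
                    and 0 <= c < out.shape[2]
                    and not np.isnan(volume[a, b, c])):
                neigh.append(volume[a, b, c])
        if neigh:
            out[i, j, k] = np.mean(neigh)
    return out, missing

# usage: a 3x3x3 volume with one missing central voxel
vol = np.ones((3, 3, 3))
vol[1, 1, 1] = np.nan
completed, synthesized = fill_missing(vol)
```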
Preferably, in another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein at least one imported image data provides volumetric image data.
The main advantage of the 3D technique is that it has a signal-to-noise advantage over 2D techniques (if the voxel size is kept constant, the signal-to-noise ratio improves by the square root of the number of slices). Another advantage is that the slices are contiguous (which is not the case with multiple-slice techniques), therefore less information is missed. Further, any desired slice orientation can be reconstructed from the data set, and very thin slices can be obtained.
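The reconstruction of any slice orientation from a volumetric data set can be illustrated, for the three cardinal orientations, as follows (a Python/NumPy sketch with a synthetic volume; oblique orientations would additionally require interpolation):

```python
import numpy as np

# Because a 3D acquisition encodes all three spatial dimensions, any
# slice orientation can be taken from the stored volume after the fact.
# Synthetic 4x5x6 volume indexed [z, y, x]:
vol = np.arange(4 * 5 * 6).reshape(4, 5, 6)

axial    = vol[2, :, :]      # fixed z -> 5x6 slice
coronal  = vol[:, 3, :]      # fixed y -> 4x6 slice
sagittal = vol[:, :, 1]      # fixed x -> 4x5 slice
```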
An additional problem with 3D imaging is that data processing requirements are greatly increased. The present invention provides more limited usage of computer resources by processing and rendering only the desired internal portion or organ at a time, and providing the rest of the subject as depicted in the already-rendered 3D phantom model. In another aspect of the invention the full subject is rendered according to the subject data. Additionally or alternatively, the desired portion of interest is rendered and presented within a simpler displaying or rendering mode illustrating the rest of the figure, such as, for example, wire-frame or vector-based rendering.
It is known in the art that tissue composition and anatomical and/or physiological structure change according to the subject's age. For example, the relative content of myelin in the brain may increase during the first years of infancy. The imaging system of the present invention is configured to adjust the data processing of the image data to tissue properties typical of the subject's age. Additionally or alternatively, the system is configured to generate image parameters adapted to the subject's developmental stage. Additionally or alternatively, the system is configured to adjust the fitting of the image data to the 3D phantom model further by adjusting the parameters of the 3D phantom model to the developmental stage of the imaged subject.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate adjusted image parameters according to at least one selected from a group consisting of: subject species, subject temperature when imaged, subject physiological condition, organ type, image data source, and any combination thereof.
There are several ways to refer to locations in an image. Spatial referencing enables specifying location information in relation to a world coordinate system. The image processing and spatial referencing (mapping) provided by the present invention allow connecting picture-element data of the same location in different images with changed parameters, providing a synthetic image combining the different data sources: at least two image data sources, incorporated within a 3D phantom model.
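Such spatial referencing can be sketched as a pair of index-to-world conversions (Python/NumPy; the helper names, origins and spacings are illustrative assumptions):

```python
import numpy as np

def voxel_to_world(index, origin_mm, spacing_mm):
    """Spatial referencing: convert a voxel index into world (scanner)
    coordinates given the volume origin and voxel spacing. Two images
    whose voxels map to the same world coordinate depict the same
    anatomical location and can therefore be combined."""
    return np.asarray(origin_mm) + np.asarray(index) * np.asarray(spacing_mm)

def world_to_voxel(world_mm, origin_mm, spacing_mm):
    """Inverse mapping: nearest voxel index for a world coordinate."""
    return np.round((np.asarray(world_mm) - np.asarray(origin_mm))
                    / np.asarray(spacing_mm)).astype(int)

# usage: the same anatomical point in a fine grid and a coarser grid
p_world = voxel_to_world([10, 20, 30], origin_mm=[0, 0, 0],
                         spacing_mm=[0.5, 0.5, 1.0])
p_coarse = world_to_voxel(p_world, origin_mm=[0, 0, 0],
                          spacing_mm=[1.0, 1.0, 2.0])
```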
An important step is the identification, alignment, scaling and consequent fitting of the imported data from the subject to the 3D phantom model. In an embodiment of the invention the fitting process is automatic. Additionally or alternatively, the fitting process is performed with at least one intervention by the user, such as utilizing control-point processing. This provides mapping of the data from the subject according to standard anatomy. This step further allows for rapid feature extraction and rapid identification of the internal portion by the user. In this step, the imported data from the subject is analyzed and indexed (or tagged) as to the mapping location of each portion or voxel, and optionally a tag according to organ type is added. Further, the system is configured to analyze, compare and unite subject image data originating from more than one data source.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to identify a specific predefined organ in the imported image data in reference to the phantom model data.
According to another embodiment of the invention, an imaging system as described in any of the above is disclosed, wherein the system is configured to generate at least one data associated tag for each data component.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate an assurance score, for assessing the image data content in view of at least two image sources.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a mapping score for assessing the location of the image data in reference to the 3D phantom model.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a quality score for assessing the combination of the mapping score and the assurance score.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate the 3D image based on image parameters above a defined threshold score.
Additionally or alternatively, the system comprises a scoring module configured to generate the scoring of image data components and/or the scoring of image data.
This indexing and/or tagging is associated with the image parameters. Additionally or alternatively, other tags can be added with additional information including: the data origin, the time taken, an assurance score based on an algorithm combining the data from two image acquiring sources, a mapping score based on an algorithm assessing the reliability of the fitting made to the phantom model, the quality of the information (assessed by, for example, a quality assessing algorithm taking into consideration the mapping score and the assurance score), and the like. The next step includes retrieving image parameters, and reconstructing a three-dimensional image by incorporating image parameters relating to a selected internal region into the 3D phantom model data, thereby generating a new 3D model showing the selected region from the imaged subject. The system is configured to modify the phantom model parameters according to at least one generated image parameter.
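The text does not specify the scoring algorithms, so the following Python/NumPy sketch uses illustrative formulas only: an assurance score from the agreement of two co-registered sources, a mapping score from a registration residual, and a quality score combining the two:

```python
import numpy as np

def assurance_score(img_a, img_b):
    """Agreement between two co-registered data sources for the same
    region, here a normalized correlation mapped into [0, 1].
    An illustrative assumption, not the patent's own algorithm."""
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-9)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-9)
    return float(np.clip((np.mean(a * b) + 1) / 2, 0, 1))

def mapping_score(residual_mm, tolerance_mm=5.0):
    """Fit quality against the phantom model from the mean registration
    residual: 1 at zero residual, 0 at or beyond the tolerance."""
    return float(np.clip(1.0 - residual_mm / tolerance_mm, 0, 1))

def quality_score(assurance, mapping):
    """Combined score; a simple geometric mean of the two."""
    return float(np.sqrt(assurance * mapping))

# usage: perfectly correlated sources, 1 mm mean residual
a = np.array([1.0, 2.0, 3.0, 4.0])
s_assure = assurance_score(a, 2 * a + 1)
q = quality_score(s_assure, mapping_score(1.0))
```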
Image analysis is the process of extracting meaningful information from images, such as finding shapes, counting objects, identifying colors, or measuring object properties. It is further within the scope of the present invention that the processor/processing module is configured to map the subject's internal organs on at least one image data, by applying image processing means to the image data prior to fitting the image data to the 3D model data.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate at least one image parameter following one or more of: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, contrast enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, aligning, scaling, skewing, rotating, angling, and detection of brightness level.
The term "image processing means" refers hereinafter to any image processing means as known in the art, such as these non-limiting examples: a. subtraction or overlay (superposition) of multichannel images, images from different image acquiring methods, and/or multiple images of the same source in different conditions or at different times; b. contour detection, surface detection, volume detection, feature extraction; c. detection of color level, detection of brightness level;
d. geometric transformations such as aligning, scaling, skewing, rotating, angling, cropping, as well as more complex 2D geometric transformations, such as affine and projective; e. vector rendering - vector based rendering, for example to fill in blanks in an image; f. surface rendering; g. skeletonization of an object, such as providing a wire frame view of at least a portion of an image; h. quantification and manipulation of raw image parameters specific to each imaging technique (e.g. MR T1, T2 and proton density), utilizing dedicated algorithms; i. image segmentation. Image segmentation is the process of dividing an image into multiple parts. This is typically used to identify objects or other relevant information in digital images. There are many different ways to perform image segmentation, including: threshold methods, color-based, brightness-based or contrast-based segmentation (such as K-means clustering), transform methods (such as watershed segmentation), and texture methods (such as texture filters). Image segmentation further allows isolating objects of interest and gathering related statistics. Each of these features can be applied according to a predefined threshold; j. image reconstruction from continuous or non-continuous slices; detection of color-level or brightness-level discontinuity allows the highlighting of points, lines, and edges in an image; k. similarity detecting algorithms revealing areas of similar signal intensities using a defined threshold;
l. pattern recognition algorithms defining areas of similar intensity pattern; m. utilizing control point processing; this can be done at least partially manually, having the user choose and/or approve at least one point, or completely automatically, having the processor choose control points by an analysis and selection algorithm;
n. detection and marking of specific landmark features (such as a specific, easily defined and detected bone structure, the overall external surface shape, external features such as an ear or a nose, a well-defined tendon, etc.), as anchors for the fitting process; o. image enhancement - removing noise, increasing the signal to noise ratio, applying sharpening filters on an image, and modifying the colors or intensities of an image; p. using predefined filters and functions the processor/processing module can, for example: filter with predefined morphological operators, de-blur and sharpen, remove noise with linear, median, or adaptive filtering, perform histogram equalization, remap the dynamic range, adjust the gamma value, adjust contrast, adjust the brightness, apply a watershed transform (the watershed transform finds "catchment basins" and "watershed ridge lines" in an image by treating it as a surface where light pixels are high and dark pixels are low), etc.; q. identifying, or "marking," foreground objects and background locations; and/or r. measuring objects or any defined image portions.
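By way of non-limiting illustration, the threshold and clustering methods named in item i above can be sketched minimally as below. A one-dimensional K-means over scalar intensities stands in for full color- or brightness-based clustering; this is an illustrative simplification, not the full segmentation of the disclosure.

```python
def kmeans_1d(values, k=2, iters=20):
    # Simplified K-means on scalar intensities (a stand-in for K-means
    # clustering segmentation on color or brightness).
    centers = [min(values), max(values)] if k == 2 else sorted(values)[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

def segment(image, threshold):
    # Binary threshold segmentation: 1 = foreground, 0 = background.
    return [[1 if px > threshold else 0 for px in row] for row in image]

# Pick the threshold halfway between the two cluster centers.
image = [[10, 200, 198], [12, 11, 205]]
lo, hi = kmeans_1d([px for row in image for px in row])
mask = segment(image, (lo + hi) / 2)
```

The resulting binary mask isolates the bright objects of interest, as described for image segmentation above.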
In another aspect of the invention, multispectral images of the same body region can be overlaid to give an impression of the exact location of certain contrast-enhanced structures. This can be used, for example, when overlaying PET images on MRI, or when overlaying a contour-detection enhancement of an image on the original image. Other non-limiting examples are placing a fluorescence image on an x-ray, MR angiography highlighting veins after subtraction of the arterial-phase CE-MRA images, and any combination of any two image acquiring methods.
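By way of non-limiting illustration, the overlay and subtraction operations described here can be sketched as below, assuming the two images are already co-registered, same-sized grids of [0, 1] intensities. Alpha blending is one common overlay choice, not the only one.

```python
def overlay(base, top, alpha=0.5):
    # Alpha-blend a functional image (e.g. PET) onto an anatomical one
    # (e.g. MRI); alpha controls how strongly the top layer shows through.
    return [[(1 - alpha) * b + alpha * t for b, t in zip(rb, rt)]
            for rb, rt in zip(base, top)]

def subtract(a, b):
    # Pixel-wise subtraction, e.g. suppressing the arterial phase
    # in CE-MRA so that veins stand out.
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

mri = [[0.2, 0.8]]
pet = [[1.0, 0.0]]
fused = overlay(mri, pet)   # roughly [[0.6, 0.4]]
```

The same two functions cover the fluorescence-on-x-ray and CE-MRA subtraction examples, with only the input grids changing.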
At least one of the above stated image processing techniques, transformation methods and/or algorithms can be used by the processor for at least one of the following:
a. Analyzing the first and at least a second image data from more than one imaging method, to provide mapping of each image data to internal organs as depicted by the 3D phantom model data;
b. Analyzing the first and at least a second image data from more than one imaging method, to provide scoring of at least a portion (size defined by pixel number or voxel number) of at least one image data in reference to at least a second image data;
c. Integrating the first and at least a second image data from more than one imaging method, to generate at least one image data parameter;
d. Generating at least one 3D and/or 2D model from at least one image parameter incorporated into the 3D phantom model data;
e. Generating a graphical variation of the generated 3D image; and/or
f. Generating a graphical variation of at least one image layer.
In yet another aspect of the invention, the system is configured to provide feature extraction. Feature extraction reduces the representation of an image to a small number of components defined by the user. This process allows further viewing and/or rendering and/or manipulation and/or processing of a limited amount of data, further limiting the needed computer resources. In turn, this can be used to calculate other features such as edges and textures. Feature extraction can also provide selective measurements for vector based image reconstruction. Segmentation is also applied in preprocessing of images for multimodality image registration.
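By way of non-limiting illustration, feature extraction as described can be sketched as reducing an image to a handful of components. The three features below (mean intensity, contrast, and a horizontal edge count) are illustrative choices, not components specified by the disclosure.

```python
def extract_features(image, edge_threshold=0.3):
    # Reduce an image grid to a small feature dictionary, so that later
    # processing operates on a few numbers instead of every pixel.
    pixels = [px for row in image for px in row]
    mean = sum(pixels) / len(pixels)            # overall intensity
    contrast = max(pixels) - min(pixels)        # dynamic range
    edges = sum(1 for row in image              # horizontal edge count
                for a, b in zip(row, row[1:])
                if abs(a - b) > edge_threshold)
    return {"mean": mean, "contrast": contrast, "edges": edges}

features = extract_features([[0.1, 0.1, 0.9],
                             [0.1, 0.9, 0.9]])
```

Such a compact descriptor is what allows the limited-resource viewing, rendering, and comparison steps mentioned above.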
According to another embodiment of the invention, pattern recognition systems are employed in the analysis of the image data, facilitating the mapping of the subject image data.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate 3D visualization of a user defined area.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system processor is configured to provide workflows specifically for working with large images that are difficult to process and display with standard methods, by generating at least one reduced-resolution data set (R-Set) of the image data, which divides an image into spatial tiles and resamples the image at different resolution levels without loading the large image entirely into memory. Additionally or alternatively, the R-Set configuration enables rapid image display, processing and/or navigation. Further, generating an R-Set reduces the computer resources needed to complete tasks such as image processing, rendering, mapping, labeling, scoring, navigating and displaying, by applying a function (or operation) to a distinct area block of a large image data set, rather than to the entire image at once.
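By way of non-limiting illustration, the reduced-resolution data set idea can be sketched as a resolution pyramid built by block averaging. A full R-Set also tiles each level spatially, which this sketch omits; image dimensions are assumed divisible by the downsampling factor.

```python
def downsample(image, factor=2):
    # Average each factor x factor block into a single pixel
    # (dimensions are assumed divisible by factor).
    h, w = len(image), len(image[0])
    return [[sum(image[y + dy][x + dx]
                 for dy in range(factor) for dx in range(factor))
             / (factor * factor)
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

def build_rset(image, levels):
    # Full-resolution image plus successively coarser levels; a viewer
    # loads only the level it needs instead of the whole image.
    rset = [image]
    for _ in range(levels - 1):
        rset.append(downsample(rset[-1]))
    return rset

rset = build_rset([[1.0, 1.0, 2.0, 2.0],
                   [1.0, 1.0, 2.0, 2.0],
                   [3.0, 3.0, 4.0, 4.0],
                   [3.0, 3.0, 4.0, 4.0]], levels=3)
```

Each operation (rendering, labeling, scoring) can then be applied to a single coarse level or tile rather than the full image, which is the resource saving described above.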
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system further comprises a graphics processing unit (GPU), configured to perform at least one of the operations.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein at least a portion of the image processing is performed utilizing cloud computing.
According to another embodiment of the invention, an imaging system as described in any of the above is disclosed, wherein the system is configured to generate at least one 2D image sliced in any plane defined by the user.
According to another embodiment of the invention, an imaging system as described in any of the above is disclosed, wherein the system is configured to generate the 3D visualization by means of volume rendering, surface based rendering, or any combination of both.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the set of instructions additionally comprises sending at least one 3D image to at least one recipient, or presenting the image on at least one display device (e.g. a screen, a printout). Additionally or alternatively, the system is configured to provide an image format enabling export of the generated image. The recipient can be, for example, a computer, a PDA, a mobile phone, a laptop, a monitor, a screen, an e-mail, an SMS, an MMS, an operating device, a printer, a 3D printer, an imaging device, a manufacturing machine, a medical analysis software, a display device, etc.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processor is configured to generate the image comprising one or more layers; further wherein each layer comprises different image parameters of the image data. Additionally or alternatively, the user can choose to export one or more layers, or further manipulate the final image by image processing tools in each layer or layer combination.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate one or more anatomical section images defined by the user.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate an image comprising at least one feature extraction.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to provide a 3D image of at least one of the following: an internal view of the patient, an external view of the patient, and any combination thereof.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one imported image property.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of subject present in image, and any combination thereof.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a database comprising differences between at least a portion of the phantom model and the imaged subject.
Reference is now made to Fig. 1B, schematically illustrating a diagram of an embodiment of the present invention. An imaging system (150), for presentation of at least one 3D image of an internal organ of a (e.g., small) mammalian subject, comprising: an input module (103) configured to receive image data (104, 105) of the subject from more than one imaging method. Additionally or alternatively, the imaging data is directly streamed from at least one IMD (108, 109). The system further comprises a 3D module (106) comprising phantom 3D model data of rendered internal organs of the subject; and a processor (102) in communication with a computer readable medium (CRM) (101) for executing a set of operations received from the CRM; the set of operations comprising: (i) importing the subject image data (104, 105) from more than one imaging method, and phantom model 3D data from the 3D module (106), by means of the input module (103); (ii) fitting the phantom model 3D data to the subject image data (104, 105) to provide mapping of the subject features; (iii) generating image parameters with reference to the subject image data mapping; and, (iv) processing and rendering at least a portion of the subject image data according to the image parameters, coinciding with the location of the organ of interest. Additionally or alternatively, the fitting is accomplished by a dedicated mapping/fitting module (111). The system is configured to generate the image parameters by combining image data (104, 105) from more than one image acquiring method. Additionally or alternatively, the system further comprises an image processing module (112) for combining the image data from more than one source and generating image parameters. The system further comprises a GPU (107), configured for carrying out at least a portion of the image processing. The system further comprises an export module (110), configured to generate an export format fitted for displaying, transmitting or sending to at least one recipient.
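By way of non-limiting illustration, the set of operations (i)-(iv) above can be sketched as a pipeline whose individual steps are supplied by the caller. The data shapes and the lambda implementations below are placeholders for illustration only.

```python
def run_pipeline(image_sources, phantom, fit, make_params, render):
    data = [src() for src in image_sources]   # (i) import image data
    mapping = fit(phantom, data)              # (ii) fit phantom to data
    params = make_params(mapping, data)       # (iii) generate image parameters
    return render(phantom, params)            # (iv) render the 3D image

# Placeholder sources standing in for two imaging methods of one organ.
sources = [lambda: {"modality": "MRI", "organ": "liver", "signal": 0.8},
           lambda: {"modality": "PET", "organ": "liver", "signal": 0.6}]
phantom = {"liver": {"location": (10, 20, 30)}}
result = run_pipeline(
    sources, phantom,
    fit=lambda ph, d: {r["modality"]: ph[r["organ"]]["location"] for r in d},
    make_params=lambda m, d: {"signal": sum(r["signal"] for r in d) / len(d)},
    render=lambda ph, p: {"model": ph, "overlay": p})
```

Each lambda would be replaced by the corresponding module of the system (fitting module, processing module, rendering); the skeleton only shows how the outputs of one step feed the next.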
Reference is now made to Fig. 2, schematically illustrating a diagram of an embodiment of the present invention. This diagram presents an embodiment of an exemplary set of instructions stored in the CRM to be executed by at least one processor. First, the input module imports more than one image data of the subject (210), the data originating from at least two different imaging methods. Additionally or alternatively, the images are from different times, different subject temperatures, different physiological conditions, etc. Additionally or alternatively, the data can be obtained directly from at least one IMD. Further, the system imports 3D phantom image data (220) from a 3D module.
The next instruction (230) includes fitting the image data into the 3D phantom model by, for example, finding anchors of recognition, scaling, aligning, moving, and the like. This provides mapping of the acquired data according to known anatomy details of the phantom model. This further allows stating the location of, and tagging, at least a portion of the imported subject image information. The next instruction (240) comprises generating image parameters with reference to the subject image data mapping by combining the data from at least two data sets received. This stage employs applying image processing techniques in the joining of the datasets. Additionally or alternatively, the system can extract, unite, select, and/or apply any Boolean operations between the different data sets. Further, the system can score the data on its mapping, tagging, and assurance (determining the similarity of the information between the data sets), and apply an algorithm dedicated to choosing the most probable correct information. Analyzing the data sets can be done with overlaying images means, subtracting images means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, image segmentation means, similarity recognition means, vector rendering means, feature extraction means, and others as known in the art of image processing. The following instruction (250) includes generating at least one 3D image of at least a portion of the image data of the subject, by rendering an image. This is accomplished by incorporating generated image parameters into the already available 3D phantom model. According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate at least a portion of the 3D image by at least one of the following: overlaying images means, subtracting images means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, similarity recognition means, vector rendering means, feature extraction means. The next instruction (260) includes exporting the image to at least one recipient and/or displaying the image.
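By way of non-limiting illustration, fitting by anchors, scaling, and aligning (instruction 230) can be sketched with a landmark fit. This simplified version estimates only scale and translation from matched 2-D anchor points; a full fit would also solve for rotation, e.g. by Procrustes analysis.

```python
def fit_landmarks(phantom_pts, subject_pts):
    # Estimate scale and translation mapping phantom anchor points onto
    # the matching subject anchor points (rotation omitted for brevity).
    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def spread(pts, c):
        return sum(abs(p[0] - c[0]) + abs(p[1] - c[1]) for p in pts)

    cp, cs = centroid(phantom_pts), centroid(subject_pts)
    scale = spread(subject_pts, cs) / spread(phantom_pts, cp)
    t = (cs[0] - scale * cp[0], cs[1] - scale * cp[1])
    return scale, t

def apply_fit(point, scale, t):
    # Map any phantom-model point into subject-image coordinates.
    return (scale * point[0] + t[0], scale * point[1] + t[1])

# Subject anchors are the phantom anchors scaled by 2 and shifted by (5, 5).
scale, t = fit_landmarks([(0, 0), (2, 0), (0, 2)],
                         [(5, 5), (9, 5), (5, 9)])
```

Once the transform is estimated from a few anchors (bone structures, external surface features), every phantom-model location can be carried over to the subject data, which is the mapping and tagging described above.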
Reference is now made to Fig. 3, schematically illustrating a diagram of a method (300) of the present invention. In this embodiment, a method (300) for 3D imaging an internal organ of a (e.g., small) mammalian subject includes the following steps: obtaining (310) an imaging system comprising: (i) an input module (103) configured to receive a first image data (104) of the subject from a first imaging method, and at least a second image data (105) of the subject from a second imaging method; (ii) a 3D generic module comprising phantom model data of rendered internal organs of the subject; (iii) a processor in communication with a computer readable medium (CRM) for executing a set of operations received from the CRM. The set of operations includes importing (320) the subject image data from more than one imaging method, and the phantom model 3D data, by means of the input module; then fitting (330) the phantom model 3D data to the subject image data to provide mapping of the subject features; then generating image parameters (340) with reference to the subject image data mapping. The method includes generating the image parameters by combining (370) image data from more than one imaging method. The next instruction is processing and rendering (350) at least a portion of the subject image data according to the image parameters, coinciding with the location of the organ of interest, to form at least one 3D image. The next step (360) is executing at least the above mentioned set of operations. The processor is configured to generate the image parameters by integrating the first image data (104) and at least the second image data (105), and to process at least one 3D image by incorporating at least one generated image parameter into the 3D phantom model data.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating at least one image parameter by one or more of the following: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, contrast enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, and detection of brightness level.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of providing at least one imported image data comprising volumetric image data.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating the image parameters by combining image data from more than one imported image data.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating an assurance score, for assessing the image data content in view of at least two image sources.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating a mapping score for assessing the location of the image data in reference to the 3D phantom model.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating a quality score for assessing the combination of the mapping score and the assurance score.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating the 3D image comprising image parameters above a defined threshold score.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of importing at least one image data comprising whole body image data.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating at least one 3D image of a user defined area.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of identifying a specific organ in the imported image data in reference to the phantom model data.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of sending at least one 3D image to at least one recipient.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating the image comprising one or more layers; each layer comprising different image parameters of the image data.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating the image parameters adapted to the subject developmental stage.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of adjusting image parameters according to a parameter selected from a group consisting of: subject species, subject age, subject physiological condition, organ type, image data source, and any combination thereof.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating one or more anatomical section images defined by the user.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating a 3D image comprising at least one feature extraction.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating a 3D image of at least one of the following: an internal view of the patient, an external view of the patient, and any combination thereof.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating at least one animation comprising image data parameters of more than one image along a timeline defined by at least one imported image property.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of selecting the image property from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of subject present in image, and any combination thereof.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating a database comprising the differences between at least a portion of the phantom model and the imaged subject.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of importing the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of modifying the phantom model parameters according to at least one generated image parameter.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of generating at least a portion of the 3D image by at least one of the following: overlaying images means, subtracting images means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, similarity recognition means, vector rendering means, feature extraction means.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of importing the image data
selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of processing at least a portion of the 3D image utilizing cloud computing.
According to another embodiment of the invention, a method as defined in any of the above is disclosed, additionally comprising the step of obtaining the system further comprising a graphics processing unit (GPU), and performing at least one of the operations by the GPU.
Reference is now made to Fig. 4, schematically illustrating one embodiment of the present invention. An imaging system (175), for presentation of at least one 3D image of an internal organ of a human or other mammalian subject, comprising: (a) a 3D generic module (106) comprising phantom 3D model data of rendered internal organs of the subject; (b) an input module (103) configured to receive image data (104, 105) of the subject from more than one imaging method; (c) a fitting module (111) configured to provide mapping of at least a portion of the image data by matching the image data with the 3D generic module data; (d) a scoring module (115) configured to score the image data by evaluating and comparing the image data received from more than one subject image data, and to associate each image data portion with at least one score; (e) a processing module (112) configured to generate image parameters for generating a 3D image by selecting image data with a defined score and rendering the 3D image; wherein the processing module (112) is configured to integrate the image data from a first image data and at least one second image data to generate the image parameters, and to generate the 3D image by incorporating the image parameters into the 3D phantom model.
Additionally or alternatively, the subject image data can be received following processing of raw data specific to each imaging method. The system is configured to receive data in several levels of processing.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein the system further comprises at least one imaging device (IMD) configured to forward at least one subject image data to the input module.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate at least one image parameter by one or more of the following: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, user interactive segmentation, contrast enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, aligning, scaling, rotating, transforming the angle of the image, skewing, and detection of brightness level.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein at least one image data received is 3D volumetric image data.
In another aspect of the invention, an imaging system as defined in any of the above is disclosed, wherein the input module is configured to receive the image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
Additionally or alternatively, the received image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate the image parameters by combining image data from more than one image data.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the scoring module is configured to generate an assurance score, for assessing the image data content by analyzing at least two image data sources.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the scoring module is configured to generate a mapping score for assessing the location of the image data in reference to the 3D phantom model data.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the scoring module is configured to generate a
quality score for assessing the combination of the mapping score and the assurance score.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the scoring module is configured to generate a score for each image parameter generated by the processing module, by evaluating the score of the relevant image data portion and the 3D phantom model data.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to associate at least one score to each image parameter.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system further comprises an export module, configured to export the generated image and/ or the image data parameters, to at least one recipient.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the recipient is selected from a group consisting of: at least one display device, an e-mail, a digital transmission, a printer, a 3D printer, a computer, an imaging device, a medical analysis software, a mobile phone, and any combination thereof.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the input module is configured to receive image data comprising at least a portion of the subject, whole body image data, or both.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate at least one 3D image of a user defined area.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the fitting module is configured to identify a specific organ in the imported image data in reference to the phantom model data.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the fitting module is configured to adjust the image data mapping according to a parameter selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the fitting module is configured to map at least one image data portion by one or more of the following: overlaying images, subtracting images, contour detection, surface detection, volume detection, raw data manipulation, image segmentation, user interactive segmentation, contras enhancing, pattern recognition, similarity recognition, vector rendering, feature extraction, noise filtering, sharpening image, texture filtering, detection of color level, aligning, scaling, rotating, transforming the angle of the image, skewing, and detection of brightness level.
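One concrete way to realize the "overlaying images" and "subtracting images" operations in the mapping list above is rigid registration by exhaustive translation search, minimizing the subtraction residual between the image portion and the phantom. This is a deliberately minimal sketch (integer 2D shifts only, with wrap-around via `np.roll`); the disclosed fitting module would also cover rotation, scaling, and the other listed operations.

```python
import numpy as np

def map_to_phantom(image, phantom, max_shift=5):
    """Find the integer (dy, dx) translation of the phantom that best
    overlays the image, scored by the sum of squared differences (the
    'subtracting images' residual). Returns the best shift and its error."""
    best_shift, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlay: shift the phantom, then subtract it from the image.
            shifted = np.roll(phantom, (dy, dx), axis=(0, 1))
            err = float(np.sum((shifted - image) ** 2))
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift, best_err
```

A zero residual at some shift means the image portion overlays the phantom exactly at that offset; in practice the residual would feed the mapping score discussed above.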
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate one or more anatomical section images defined by the user.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate an image comprising at least one subject feature extraction.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate the image comprising one or more layers; further wherein each layer comprises different image parameters of the image data.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate adjusted image parameters according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
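The attribute-dependent adjustment of image parameters described above (by species, age, developmental stage, and so on) can be read as a layered override of default parameters. The table values and parameter names below are purely illustrative placeholders, not taken from the disclosure.

```python
# Hypothetical adjustment table: every key and value here is invented
# for illustration; the disclosure does not specify concrete parameters.
ADJUSTMENTS = {
    ("species", "mouse"): {"voxel_mm": 0.1},
    ("species", "human"): {"voxel_mm": 1.0},
    ("age", "neonate"): {"contrast_gain": 1.4},
    ("age", "adult"): {"contrast_gain": 1.0},
}

def adjust_parameters(base, **attributes):
    """Overlay base image parameters with attribute-specific overrides
    (subject species, age, and so on); later attributes win on conflict."""
    params = dict(base)
    for key, value in attributes.items():
        params.update(ADJUSTMENTS.get((key, value), {}))
    return params
```

Unknown attribute values simply leave the base parameters untouched, which keeps the fallback behavior predictable.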
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to provide a 3D image selected from a group consisting of: an internal view of at least a portion of the patient, an external view of at least a portion of the patient, and any combination thereof.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one image property.
Additionally or alternatively, the image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of the subject present in the image, and any combination thereof.
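Generating an animation along a timeline defined by an image property, as in the two embodiments above, amounts to ordering the imported images by that property and emitting them as frames. A minimal sketch, assuming each imported image carries a metadata dictionary (the key names here are hypothetical):

```python
def build_timeline(images, prop):
    """Order images along a timeline defined by one image property
    (e.g. generation time or import time); images lacking the chosen
    property are left out of the animation."""
    return sorted((im for im in images if prop in im), key=lambda im: im[prop])

frames = build_timeline(
    [{"id": "late", "generation_time": 2},
     {"id": "early", "generation_time": 1},
     {"id": "untimed"}],
    "generation_time",
)
```

Any of the listed properties (data source, import time, reconstruction algorithm, and so on) can serve as the sort key without changing the function.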
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to generate a database comprising differences between at least a portion of the phantom model data and the imaged subject.
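A database of differences between the phantom model and the imaged subject, as in the embodiment above, could be as simple as a per-organ deviation record. Mean absolute voxel difference is used here as a stand-in metric; the disclosure does not commit to any particular difference measure.

```python
import numpy as np

def difference_database(phantom_organs, subject_organs):
    """Record, per organ, how the imaged subject deviates from the
    phantom model: here the mean absolute voxel difference, computed
    over organs present in both data sets (a stand-in metric)."""
    db = {}
    for organ in phantom_organs.keys() & subject_organs.keys():
        diff = np.abs(np.asarray(subject_organs[organ], dtype=float)
                      - np.asarray(phantom_organs[organ], dtype=float))
        db[organ] = float(diff.mean())
    return db
```

Such per-organ entries could also drive the embodiment, further below, in which the phantom model parameters are modified according to generated image parameters.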
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the processing module is configured to modify the phantom model parameters according to at least one generated image parameter.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system further comprises a graphics processing unit (GPU), configured to perform at least a portion of at least one selected from a group consisting of: the mapping, the image processing, the image rendering, and any combination thereof.
According to another embodiment of the invention, an imaging system as defined in any of the above is disclosed, wherein the system is configured to perform at least a portion of at least one selected from a group consisting of: the mapping, the image processing, the image rendering, and any combination thereof, utilizing cloud computing.
Claims
1. An imaging system (100), for presentation of at least one 3D image of an internal organ of a mammalian subject, comprising:
a. an input module (103) configured to receive a first image data (104) of said subject from a first imaging method, and at least a second image data (105) of said subject, from a second imaging method; b. a 3D generic module comprising phantom 3D model data (106) of rendered internal organs of said subject; and,
c. a processor (102) in communication with a Computer Readable Medium, CRM, (101), for executing a set of operations received from the CRM; the set of operations comprising:
i. importing said subject image data from more than one imaging method, and said phantom model 3D data, by means of said input module;
ii. fitting said subject image data to said phantom model 3D data to provide mapping of said subject image data features;
iii. generating image parameters with reference to said subject image data mapping; and,
iv. processing and rendering at least a portion of said subject image data according to said image parameters, coinciding with the location of said organ of interest, into at least one 3D image;
wherein said processor is configured to generate said image parameters by integrating image data from said first image data (104) and at least said second image data (105), and process said 3D image by incorporating at least one said generated image parameter into said 3D phantom model data.
2. The imaging system according to claim 1, wherein said system further comprises at least one imaging device (IMD) configured to forward at least one subject image data to said input module.
3. The imaging system according to claim 1, wherein at least one imported image data provides volumetric image data.
4. The imaging system according to claim 1, wherein said integrating image data from at least one first said image data and at least one second said image data comprises at least one selected from a group consisting of: overlaying at least a portion of said images, subtracting at least a portion of said images, incorporating at least a portion of one said image in said at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of said image by image processing means, applying image processing on at least a portion of said image, and any combination thereof.
5. The imaging system according to claim 1, wherein said processor is configured to fit at least one said image data to said 3D phantom model by applying at least one selected from a group consisting of: overlaying at least a portion of said images, subtracting at least a portion of said images, incorporating at least a portion of one said image in said at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of said image by image processing means, applying image processing on at least a portion of said image, and any combination thereof.
6. The imaging system according to claims 4 and 5, wherein said image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of said image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of said image, changing color level, identifying foreground objects and background locations, changing brightness level, control-point image processing, and generating an R-set of image data.
7. The imaging system according to claim 1, wherein the processor is further configured to perform control point processing, selected from a group consisting of: manual, automatic, and any combination thereof.
8. The imaging system according to claim 1, wherein said input device is configured to provide image analysis of said first image data and at least said second image data by image processing means.
9. The imaging system according to claim 5, wherein said image analysis comprises detection of at least one of: shapes, contours, foreground shapes, background shapes, textures, color levels, brightness, noise, saturation, resolution, channels, patterns, and similarities.
10. The imaging system according to claim 1, wherein said processor is configured to generate at least one reduced-resolution data set (R-Set) of said image data, that divides an image into spatial tiles and resamples the image at different resolution levels without loading a large image entirely into said CRM.
11. The imaging system according to claim 1, wherein said system is configured to generate said image parameters by combining image data from more than one imported image data.
12. The imaging system according to claim 1, wherein said system is configured to generate an assurance score, for assessing the image data content by analyzing at least two image data sources.
13. The imaging system according to claim 1, wherein said system is configured to generate a mapping score for assessing the location of said image data in reference to said 3D phantom model data.
14. The imaging system according to claims 12 and 13, wherein said system is configured to generate a quality score for assessing the combination of said mapping score and said assurance score.
15. The imaging system according to claims 12, 13, and 14, wherein said system is configured to associate at least one said score to each image parameter.
16. The imaging system according to claims 12, 13, 14, and 15, wherein said system is configured to generate said 3D image based on image parameters having a defined threshold score.
17. The imaging system according to claims 12, 13, 14, and 15, wherein said system further comprises a scoring module in communication with said processor, configured to generate at least one score selected from a group consisting of: said assurance score, said mapping score, said quality score, and any combination thereof.
18. The imaging system according to claim 1, wherein said system further comprises a fitting module, in communication with said processor, configured to provide mapping of said image data in reference to said 3D phantom model data.
19. The imaging system according to claim 1, wherein said system further comprises an image processing module in connection with said processor configured to process said subject image data and render at least one 3D image.
20. The imaging system according to claim 1, wherein said system further comprises an export module, in communication with said processor, configured to export said generated image and/or said image data parameters, to at least one recipient.
21. The imaging system according to claim 1, wherein said system is configured to import whole body image data, partial body image data, or both.
22. The imaging system according to claim 1, wherein said system is configured to generate at least one 3D image of a user defined area.
23. The imaging system according to claim 1, wherein said system is configured to identify a specific organ in said imported image data in reference to said phantom model data.
24. The imaging system according to claim 1, wherein said system is configured to generate one or more anatomical section images defined by the user.
25. The imaging system according to claim 1, wherein said system is configured to generate an image comprising at least one subject feature extraction.
26. The imaging system according to claim 1, wherein said processor is configured to generated said image comprising one or more layers; further wherein each layer comprises different image parameters of said image data.
27. The imaging system according to claim 1, wherein said system is configured to generate adjusted image parameters according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
28. The imaging system according to claim 1, wherein said system is configured to adjust said mapping according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
29. The imaging system according to claim 1, wherein said system is configured to provide a 3D image selected from a group consisting of: an internal view of at least a portion of said patient, an external view of at least a portion of said patient, and any combination thereof.
30. The imaging system according to claim 1, wherein said system is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one said imported image property.
31. The imaging system according to claim 30, wherein said image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of said subject present in said image, and any combination thereof.
32. The imaging system according to claim 1, wherein said system is configured to generate a database comprising differences between at least a portion of said phantom model data and said imaged subject.
33. The imaging system according to claim 1, wherein said system is configured to import said image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
34. The imaging system according to claim 1, wherein said system is configured to modify said phantom model parameters according to at least one said generated image parameter.
35. The imaging system according to claim 1, wherein said system is configured to generate at least a portion of said 3D image by at least one of the following: image overlaying means, image subtracting means, contour enhancing means, surface rendering means, volume rendering means, contrast enhancing means, vector rendering means, and feature extraction means.
36. The imaging system according to claim 1, wherein said imported image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
37. The imaging system according to claim 1, wherein said system further comprises a graphics processing unit (GPU), configured to perform at least a portion of at least one selected from a group consisting of: said mapping, said image processing, said image rendering, and any combination thereof.
38. The imaging system according to claim 1, wherein said system is configured to perform at least a portion of at least one selected from a group consisting of: said mapping, said image processing, said image rendering, and any combination thereof, utilizing cloud computing.
39. A method for 3D imaging an internal organ of a mammalian subject, comprising the steps of:
a. obtaining an imaging system (100) comprising:
i. an input module (103) configured to receive a first image data (104) of said subject from a first imaging method, and at least a second image data (105) of said subject, from a second imaging method; ii. a 3D generic module comprising phantom model data of rendered internal organs of said subject; and,
iii. a processor (102) in communication with a Computer Readable Medium, CRM, (101), for executing a set of operations received from the CRM; the set of operations comprising:
(1) importing said subject image data from more than one imaging method, and said phantom model 3D data, by means of said input module;
(2) fitting said subject image data to said phantom model 3D data to provide mapping of said subject image data features;
(3) generating image parameters with reference to said subject image data mapping; and,
(4) processing and rendering at least a portion of said subject image data according to said image parameters, coinciding with the location of said organ of interest, into at least one 3D image, and,
b. executing said set of operations,
wherein said processor is configured to generate said image parameters by integrating said first image data (104) and at least said second image data (105), and to process at least one 3D image by incorporating at least one said generated image parameter into said 3D phantom model data.
40. The method according to claim 39, additionally comprising the step of providing at least one imaging device (IMD) configured to forward at least one subject image data to said input module.
41. The method according to claim 39, additionally comprising the step of providing at least one imported image data comprising volumetric image data.
42. The method according to claim 39, additionally comprising the step of integrating image data from at least one first said image data and at least one second said image data comprising at least one selected from a group consisting of: overlaying at least a portion of said images, subtracting at least a portion of said images, incorporating at least a portion of one said image in said at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of said image by image processing means, applying image processing on at least a portion of said image, and any combination thereof.
43. The method according to claim 39, additionally comprising the step of said processor fitting at least one said image data to said 3D phantom model by applying at least one selected from a group consisting of: overlaying at least a portion of said images, subtracting at least a portion of said images, incorporating at least a portion of one said image in said at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of said image by image processing means, applying image processing on at least a portion of said image, and any combination thereof.
44. The method according to claim 39, additionally comprising the step of said input device providing image analysis of said first image data and at least said second image data by image processing means.
45. The method according to claims 42, 43 and 44, wherein said image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of said image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of said image, changing color level, identifying foreground objects and background locations, changing brightness level, control-point image processing, and generating an R-set of image data.
46. The method according to claim 39, additionally comprising the step of said processor performing control point processing, selected from a group consisting of: manual, automatic, and any combination thereof.
47. The method according to claim 44, wherein said image analysis comprises detection of at least one of: shapes, contours, foreground shapes, background shapes, textures, color levels, brightness, noise, saturation, resolution, channels, patterns, and similarities.
48. The method according to claim 39, additionally comprising the step of said processor generating at least one reduced-resolution data set (R-Set) of said image data, thereby dividing an image into spatial tiles and resampling the image at different resolution levels without loading a large image entirely into said CRM.
49. The method according to claim 39, additionally comprising the step of generating said image parameters by combining image data from more than one imported image data.
50. The method according to claim 39, additionally comprising the step of generating an assurance score, for assessing the image data content in view of at least two image sources.
51. The method according to claim 39, additionally comprising the step of generating a mapping score for assessing the location of said image data in reference to said 3D phantom model.
52. The method according to claims 50 and 51, additionally comprising the step of generating a quality score for assessing the combination of said mapping score and said assurance score.
53. The method according to claims 50, 51 and 52, additionally comprising the step of generating said 3D image comprising image parameters above a defined threshold score.
54. The method according to claim 39, additionally comprising the step of providing said system further comprising an export module, and exporting at least one said generated 3D image by means of said export module.
55. The method according to claim 39, additionally comprising the step of importing at least one image data comprising whole body image data, partial body image data, or both.
56. The method according to claim 39, additionally comprising the step of generating at least one 3D image of a user defined area.
57. The method according to claim 39, additionally comprising the step of generating one or more anatomical section images defined by the user.
58. The method according to claim 39, additionally comprising the step of generating a 3D image comprising at least one feature extraction.
59. The method according to claim 39, additionally comprising the step of generating a 3D image of at least one of the following: an internal view of said patient, an external view of said patient, and any combination thereof.
60. The method according to claim 39, additionally comprising the step of identifying a specific organ in said imported image data in reference to said phantom model data.
61. The method according to claim 39, additionally comprising the step of generating said image comprising one or more layers; further wherein each of said layers comprises different image parameters of said image data.
62. The method according to claim 39, additionally comprising the step of generating said image parameters adapted to said subject developmental stage.
63. The method according to claim 39, additionally comprising the step of adjusting image parameters according to at least one selected from a group consisting of: subject species, subject age, subject physiological condition, organ type, image data source, and any combination thereof.
64. The method according to claim 39, additionally comprising the step of generating at least one animation comprising image data parameters of more than one image along a timeline defined by at least one said imported image property.
65. The method according to claim 64, additionally comprising the step of selecting said image property from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of subject present in image, and any combination thereof.
66. The method according to claim 39, additionally comprising the step of generating a database comprising the differences between at least a portion of said phantom model and said imaged subject.
67. The method according to claim 39, additionally comprising the step of importing said image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
68. The method according to claim 39, additionally comprising the step of modifying said phantom model parameters according to at least one said generated image parameter.
69. The method according to claim 39, additionally comprising the step of generating at least a portion of said 3D image by at least one of the following: image overlaying means, image subtracting means, contour detecting means, surface rendering means, volume rendering means, contrast enhancing means, pattern recognition means, similarity recognition means, vector rendering means, and feature extraction means.
70. The method according to claim 39, additionally comprising the step of importing said image data selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
71. The method according to claim 39, additionally comprising the step of obtaining said system further comprising a graphics processing unit (GPU), and performing at least one selected from a group consisting of: image processing, mapping, rendering, and any combination thereof, at least partially by said GPU.
72. The method according to claim 39, additionally comprising the step of performing at least one selected from a group consisting of: image processing, rendering, mapping, and any combination thereof, at least partially by utilizing cloud computing.
73. An imaging system, for presentation of at least one 3D image of an internal organ of a mammalian subject, comprising: a. a 3D generic module comprising phantom 3D model data of rendered internal organs of said subject;
b. an input module configured to receive one first image data from one first imaging method, and at least one second said image data from at least a second imaging method; c. a fitting module configured to provide mapping of at least a portion of said image data by matching said image data with said 3D generic module data; d. a scoring module configured to score said image data by evaluating and comparing one first said image data and at least one second said image data, and associate with each image data portion at least one score; and, e. a processing module configured to generate image parameters for generating a 3D image by selecting image data above a defined threshold score and rendering said 3D image; wherein said processing module is configured to integrate said image data from one first said image data and at least one second said image data to generate said image parameters, and generate said 3D image by incorporating said image parameters into said 3D phantom model.
74. The imaging system according to claim 73, wherein said system further comprises at least one imaging device (IMD) configured to forward at least one subject image data to said input module.
75. The imaging system according to claim 73, wherein at least one imported image data provides volumetric image data.
76. The imaging system according to claim 73, wherein said integrating image data from one first said image data and at least one second said image data comprises at least one selected from a group consisting of: overlaying at least a portion of said images, subtracting at least a portion of said images, incorporating at least a portion of one said image in said at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of said image by image processing means, applying image processing on at least a portion of said image, and any combination thereof.
77. The imaging system according to claim 73, wherein said fitting module is configured to fit at least one said image data to said 3D phantom model by applying at least one selected from a group consisting of: overlaying at least a portion of said images, subtracting at least a portion of said images, incorporating at least a portion of one said image in said at least one second image, connecting at least a portion of two images, performing image analysis of at least a portion of said image by image processing means, applying image processing on at least a portion of said image, and any combination thereof.
78. The imaging system according to claim 73, wherein said fitting module and/or processing module is further configured to perform control point processing, selected from a group consisting of: manual, automatic, and any combination thereof.
79. The imaging system according to claim 73, wherein said input device is configured to provide image analysis of said first image data and at least said second image data by image processing means.
80. The imaging system according to claims 76, 77, 78, and 79, wherein said image processing means comprises at least one of: contour detection, surface detection, volume detection, raw data manipulation, image segmentation, interactive segmentation, image restoration from partial data, image reconstruction, contrast enhancing, pattern recognition, similarity recognition, vector rendering, surface rendering, feature extraction, noise filtering, sharpening, blurring, texture filtering, detection of color level, detection of brightness level, detection of color-level or brightness-level discontinuity, aligning, scaling, rotating, skewing, resizing, transforming the angle of said image, cropping, 2D geometric transformations, 3D geometric transformations, image compression, skeletonization, measuring at least a portion of said image, changing color level, identifying foreground objects and background locations, changing brightness level, control-point image processing, and generating an R-set of image data.
81. The imaging system according to claim 67, wherein said image analysis comprises detection of at least one of: shapes contours, foreground shapes,
background shapes, textures, color levels, brightness, noise, saturation, resolution, channels, patterns, and similarities.
82. The imaging system according to claim 61, wherein said processor is configured to generate at least one reduced-resolution data set (R-Set) of said image data, that divides an image into spatial tiles and resamples the image at different resolution levels without loading a large image entirely into said CRM.
83. The imaging system according to claim 61, wherein said input module is configured to receive said image data in a form selected from a group consisting of: raw form, compressed form, processed form, and any combination thereof.
84. The imaging system according to claim 61, wherein said received image data is selected from a group consisting of: planar image data, volumetric image data, vector based image data, pixel based data, and any combination thereof.
85. The imaging system according to claim 61, wherein said system is configured to generate said image parameters by combining image data from more than one image data.
86. The imaging system according to claim 61, wherein said scoring module is configured to generate an assurance score, for assessing the image data content by analyzing at least two image data sources.
87. The imaging system according to claim 61, wherein said scoring module is configured to generate a mapping score for assessing the location of said image data in reference to said 3D phantom model data.
88. The imaging system according to claims 68 and 69, wherein said scoring module is configured to generate a quality score for assessing the combination of said mapping score and said assurance score.
89. The imaging system according to claim 61, wherein said scoring module is configured to generate a score for each image parameter generated by said processing module, by evaluating the score of said relevant image data portion and said 3D phantom model data.
90. The imaging system according to any one of claims 86 to 89, wherein said processing module is configured to associate at least one said score with each image parameter.
91. The imaging system according to claim 61, wherein said system further comprises an export module, configured to export said generated image and/or said image data parameters, to at least one recipient.
92. The imaging system according to claim 91, wherein said recipient is selected from a group consisting of: at least one display device, an e-mail, a digital transmission, a printer, a 3D printer, a computer, an imaging device, a medical analysis software, a mobile phone, and any combination thereof.
93. The imaging system according to claim 61, wherein said input module is configured to receive image data comprising said at least a portion of said subject, whole body image data, or both.
94. The imaging system according to claim 61, wherein said processing module is configured to generate at least one 3D image of a user defined area.
95. The imaging system according to claim 61, wherein said fitting module is configured to identify a specific organ in said imported image data in reference to said phantom model data.
96. The imaging system according to claim 61, wherein said fitting module is configured to adjust said image data mapping according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
97. The imaging system according to claim 61, wherein said processing module is configured to generate one or more anatomical section images defined by the user.
98. The imaging system according to claim 61, wherein said processing module is configured to generate an image comprising at least one subject feature extraction.
99. The imaging system according to claim 61, wherein said processing module is configured to generate said image comprising one or more layers; further wherein each layer comprises different image parameters of said image data.
100. The imaging system according to claim 61, wherein said processing module is configured to generate adjusted image parameters according to at least one selected from a group consisting of: subject species, subject age, subject developmental stage, subject physiological condition, organ type, image data source, image acquiring method, and any combination thereof.
101. The imaging system according to claim 61, wherein said processing module is configured to provide a 3D image selected from a group consisting of: an internal view of at least a portion of said patient, an external view of at least a portion of said patient, and any combination thereof.
102. The imaging system according to claim 61, wherein said processing module is configured to generate at least one animation comprising image data parameters of more than one image along a timeline defined by at least one said image property.
103. The imaging system according to claim 102, wherein said image property is selected from a group consisting of: data source, generation time, import time, image quality, image reconstruction algorithm type, image reconstruction algorithm parameters, portion of said subject present in said image, and any combination thereof.
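Claims 102 and 103 describe ordering image data parameters along a timeline defined by a selected image property. A hypothetical sketch, assuming each image's parameters are carried in a dict keyed by property name (the record layout and the name `animation_frames` are illustrative, not from the patent):

```python
from operator import itemgetter

def animation_frames(image_records, timeline_key="generation_time"):
    """Order image-parameter records along a timeline defined by one
    image property (e.g. generation time, import time, image quality).

    `image_records` is a list of dicts; each dict carries the image
    parameters for one image plus the property used as the timeline key.
    Records missing the key are excluded from the animation.
    """
    keyed = [r for r in image_records if timeline_key in r]
    return sorted(keyed, key=itemgetter(timeline_key))
```

Passing a different `timeline_key` (for instance "image_quality") yields an animation ordered by that property instead of acquisition time.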
104. The imaging system according to claim 61, wherein said system is configured to generate a database comprising differences between at least a portion of said phantom model data and said imaged subject.
105. The imaging system according to claim 61, wherein said processing module is configured to modify said phantom model parameters according to at least one said generated image parameter.
106. The imaging system according to claim 61, wherein said system further comprises a graphics processing unit (GPU), configured to perform at least a portion of a process selected from a group consisting of: said mapping, said image processing, said image rendering, and any combination thereof.
107. The imaging system according to claim 61, wherein said system is configured to perform at least a portion of a process selected from a group consisting of: said mapping, said image processing, said image rendering, and any combination thereof, utilizing cloud computing.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562114055P | 2015-02-09 | 2015-02-09 | |
US62/114,055 | 2015-02-09 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2016128965A2 true WO2016128965A2 (en) | 2016-08-18 |
WO2016128965A3 WO2016128965A3 (en) | 2016-09-29 |
Family
ID=56615508
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL2016/050145 WO2016128965A2 (en) | 2015-02-09 | 2016-02-09 | Imaging system of a mammal |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016128965A2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019200108A1 (en) * | 2018-04-11 | 2019-10-17 | Cornell University | Assessment of coronary function via advanced 3d printed models |
CN114680873A (en) * | 2020-12-31 | 2022-07-01 | 武汉联影生命科学仪器有限公司 | Image scanning method, device and system based on animal identification |
US11854281B2 (en) | 2019-08-16 | 2023-12-26 | The Research Foundation For The State University Of New York | System, method, and computer-accessible medium for processing brain images and extracting neuronal structures |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7327872B2 (en) * | 2004-10-13 | 2008-02-05 | General Electric Company | Method and system for registering 3D models of anatomical regions with projection images of the same |
US7996060B2 (en) * | 2006-10-09 | 2011-08-09 | Biosense Webster, Inc. | Apparatus, method, and computer software product for registration of images of an organ using anatomical features outside the organ |
US20090010507A1 (en) * | 2007-07-02 | 2009-01-08 | Zheng Jason Geng | System and method for generating a 3d model of anatomical structure using a plurality of 2d images |
US8320711B2 (en) * | 2007-12-05 | 2012-11-27 | Biosense Webster, Inc. | Anatomical modeling from a 3-D image and a surface mapping |
US8953856B2 (en) * | 2008-11-25 | 2015-02-10 | Algotec Systems Ltd. | Method and system for registering a medical image |
GB0913930D0 (en) * | 2009-08-07 | 2009-09-16 | Ucl Business Plc | Apparatus and method for registering two medical images |
US20120078088A1 (en) * | 2010-09-28 | 2012-03-29 | Point of Contact, LLC. | Medical image projection and tracking system |
- 2016-02-09: PCT application PCT/IL2016/050145 filed as WO2016128965A2 (en), status: active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2016128965A3 (en) | 2016-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8355553B2 (en) | Systems, apparatus and processes for automated medical image segmentation using a statistical model | |
JP2022525198A (en) | Deep convolutional neural network for tumor segmentation using positron emission tomography | |
US10580181B2 (en) | Method and system for generating color medical image based on combined color table | |
US20080021301A1 (en) | Methods and Apparatus for Volume Computer Assisted Reading Management and Review | |
US7136516B2 (en) | Method and system for segmenting magnetic resonance images | |
US9147242B2 (en) | Processing system for medical scan images | |
US20150356733A1 (en) | Medical image processing | |
US20150003702A1 (en) | Processing and displaying a breast image | |
CN107146262B (en) | Three-dimensional visualization method and system for OCT (optical coherence tomography) image | |
KR102149369B1 (en) | Method for visualizing medical image and apparatus using the same | |
WO2021125950A1 (en) | Image data processing method, method of training a machine learning data processing model and image processing system | |
WO2016128965A2 (en) | Imaging system of a mammal | |
US10964074B2 (en) | System for harmonizing medical image presentation | |
JP6564075B2 (en) | Selection of transfer function for displaying medical images | |
US8805122B1 (en) | System, method, and computer-readable medium for interpolating spatially transformed volumetric medical image data | |
RU2565521C2 (en) | Processing set of image data | |
US8848998B1 (en) | Automated method for contrast media arrival detection for dynamic contrast enhanced MRI | |
US12014492B2 (en) | Characterizing lesions in radiology images | |
Mihaylova et al. | A brief survey of spleen segmentation in MRI and CT images | |
JP6813759B2 (en) | Projection image calculation processing device, projection image calculation processing method and projection image calculation processing program | |
US20080260220A1 (en) | Registration of optical images of small animals | |
Tina et al. | Analysis of algorithms in medical image processing | |
Abdallah | Segmentation of salivary glands in nuclear medicine images using edge detection tools | |
Patra et al. | Medical Image Processing in Nuclear Medicine and Bone Arthroplasty | |
Linh et al. | IBK–A new tool for medical image processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16748827 Country of ref document: EP Kind code of ref document: A2 |
|
NENP | Non-entry into the national phase in: |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16748827 Country of ref document: EP Kind code of ref document: A2 |