EP4009271A1 - Machine learning of edge restoration following contrast suppression/material substitution - Google Patents
- Publication number: EP4009271A1 (application EP20211551.5A)
- Authority: EP (European Patent Office)
- Prior art keywords: image, interest, contour, data, module
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06T5/70—Denoising; Smoothing
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
- G06T5/77—Retouching; Inpainting; Scratch removal
- G06T7/12—Edge-based segmentation
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30028—Colon; Small intestine
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
- G06T2207/30104—Vascular flow; Blood flow; Perfusion
- G06T2210/41—Medical
- G06T2211/404—Angiography
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.
Definitions
- learning in the context of machine learning refers to the identification and training of suitable algorithms to accomplish tasks of interest.
- learning includes, but is not restricted to, association learning, classification learning, clustering, and numeric prediction.
- machine-learning refers to the field of computer science that studies the design of computer programs able to induce patterns, regularities, or rules from past experience to develop an appropriate response to future data, or to describe the data in some meaningful way.
- data-driven model in the context of machine learning refers to a suitable algorithm that is learnt on the basis of appropriate training data.
- module may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logical circuit, and/or other suitable components that provide the described functionality.
- VC refers to non-invasive virtual colonoscopy, also known as CT colonography (CT-C).
- a necessary step for the clinical inspection of the colon in VC is removing remains of stool or faeces from the colon image prior to visualization, the so-called Virtual Cleansing.
- the CT contrast of faeces is similar to the contrast of the tissue surrounding the colon. Therefore, this pre-processing of the image is usually supported by orally administering a laxative to remove stool, followed by orally administering a contrast agent containing iodine prior to CT imaging in order to tag remaining stool. This tagging helps remove the remaining faeces from the image.
- the Hounsfield values of voxels with faeces are set to the value of air.
- image edges with tagging-neighbourhood may have a different appearance compared to edges without tagging-neighbourhood, thus causing visual irritation and having a negative effect on diagnostic reading.
- FIG. 1 shows a flow chart of a computer-implemented method 200 according to some embodiments of the present disclosure.
- the computer-implemented method 200 is proposed for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed.
- the computer-implemented method 200 may be implemented as a device, module or related component in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.
- computer program code to carry out operations shown in the method 200 may be written in any combination of one or more programming languages, including an object-oriented programming language such as JAVA, SMALLTALK, C++, Python, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- in step 210, i.e. step a), a medical image 10 of the object of interest is received.
- the medical image may be a two-dimensional image comprising image pixel data or a three-dimensional image comprising image voxel data.
- the object of interest is a colon. In an example, the object of interest is a lumen of vasculature.
- An exemplary medical image 10 of a colon is illustrated in Fig. 2A.
- the exemplary medical image 10 comprises a first image part 12 not comprising image content to be suppressed and a second image part 14 comprising image content to be suppressed.
- the image content to be suppressed is the tagged stool residuals in colonoscopy.
- in step 220, i.e. step b), an image contour 16 of the object of interest is detected.
- Edge detection is the name for a set of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities.
- the points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges.
- Various techniques may be used for edge detection, such as Prewitt edge detection, Laplacian edge detection, Laplacian-of-Gaussian (LoG) edge detection, Canny edge detection, etc.; a simple gradient-based variant is sketched below.
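- As an illustration of this step, the following is a minimal sketch of gradient-magnitude edge detection on a 2-D image, assuming NumPy/SciPy; the function name and the threshold value are illustrative choices, not part of the patent.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image: np.ndarray, threshold: float = 100.0) -> np.ndarray:
    """Boolean edge map from the Sobel gradient magnitude.

    `threshold` is an illustrative cut-off; a real pipeline would tune it
    per modality (CT, MRI, CBCT, ...).
    """
    gx = ndimage.sobel(image.astype(float), axis=0)  # gradient along rows
    gy = ndimage.sobel(image.astype(float), axis=1)  # gradient along columns
    return np.hypot(gx, gy) > threshold
```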
- the detected image contour 16 is classified into a first image contour 18 and a second image contour 20.
- the first image contour 18 is representative of an image contour of the first image part 12 of the object of interest
- the second image contour 20 is representative of an image contour of the second image part 14 of the object of interest.
- a pre-trained classifier may be used which detects image locations in the medical image where suppression is desirable, e.g. based on the difference in image intensities between image content to be suppressed and other image content.
- the classifier may be trained in a training phase, and the frozen classifier is applied in the inference phase, i.e. deployment or application phase.
- thresholding may be used for segmenting the medical image to determine the image content to be suppressed.
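- A minimal sketch of such threshold-based tagging, combined with the contour classification of step b), might look as follows; the threshold value and function names are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

TAGGING_HU = 200.0  # hypothetical Hounsfield cut-off for contrast-tagged material

def split_contour(edge_map: np.ndarray, image: np.ndarray, radius: int = 2):
    """Split a detected contour into edges with and without tagging-neighbourhood."""
    tagged = image > TAGGING_HU                           # thresholding-based tagging
    near_tagging = ndimage.binary_dilation(tagged, iterations=radius)
    first_contour = edge_map & ~near_tagging              # unmodified (air-tissue) edges
    second_contour = edge_map & near_tagging              # edges next to tagged material
    return first_contour, second_contour
```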
- the first image contour 18 is indicated with a solid line, while the second image contour 20 is indicated with a dotted line.
- the first image contour 18 is representative of an air-tissue transition, while the second image contour 20 is representative of a stool-tissue transition.
- the second image contour 20 will become an artificially created cleansed edge 20a (see Fig. 2B).
- in step 230, i.e. step c), the image content to be suppressed is suppressed, or substituted with a virtual material, to generate a cleansed image 22.
- suppression may refer to e.g. 'electronically suppressed', 'electronically cleansed', 'made transparent', or 'set to air-like appearance'.
- An exemplary cleansed image 22 is illustrated in Fig. 2B.
- the image intensities in the second image part 14 that comprises the tagged stool residuals are substituted with a value of air, thereby creating an artificially created cleansed edge 20a.
- the artificially created cleansed edge 20a, which is indicated with the dashed line, has a different appearance compared to the first image contour 18 indicated with the solid line.
- the second image contour 20 has an unsmooth edge. This is because the intensity difference of the tagged material to its surroundings is in general different in sign as well as amplitude in comparison to non-tagged image locations, and causes specific transition profiles.
- the different appearance between the artificially created cleansed edge 20a and the first image contour 18 may affect the diagnostic reading.
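- The substitution itself can be sketched in a few lines, assuming a boolean mask of tagged voxels as produced above; setting the masked voxels to the Hounsfield value of air is one concrete choice of virtual material.

```python
import numpy as np

AIR_HU = -1000.0  # Hounsfield value of air

def cleanse(image: np.ndarray, tagged_mask: np.ndarray) -> np.ndarray:
    """Substitute contrast-tagged voxels with virtual air to create the
    cleansed image (and its artificially created cleansed edges)."""
    cleansed = image.copy()
    cleansed[tagged_mask] = AIR_HU
    return cleansed
```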
- in step 240 of Fig. 1, i.e. step d), a data-driven model is trained using image data of the first image contour 18 to learn an appearance of the image contour 16 of the object of interest.
- the appearance of the image contour of the object of interest is machine-learned in an unsupervised non-analytical way from unmodified locations, and then - after digital suppression or substitution - applied to the artificially created cleansed edges to perform a restoration of edges.
- Two training schemes may be used for training the data-driven model.
- In one scheme, the data-driven model is trained in a training phase, and the frozen data-driven model is applied in the inference phase, i.e. the deployment or application phase.
- In training mode, an initial model of the data-driven model is trained based on a set of training data to produce a trained data-driven model. In deployment mode, also referred to as inference mode, the pre-trained data-driven model is fed with new, non-training data to operate during normal use. The advantage may be a large training set and reproducible performance.
- In the other scheme, the data-driven model is trained on the fly on a new instance of an image, after the tagging classifier inference has been applied.
- the advantage may be that the model can train specifically on any new image type, which may not have been seen during training.
- Local two-dimensional or three-dimensional image patches, which are smaller than the overall image, may be used for modelling the data-driven model.
- Each local two-dimensional image patch represents one or a group of pixels in a two-dimensional medical image or one or a group of voxels in a three-dimensional medical image.
- the data-driven model may comprise an auto-encoder configured for mapping local image patches directly onto themselves.
- Various auto-encoders may be used, such as principal component auto-encoders, sparse auto-encoders, deep neural auto-encoders, variational auto-encoders, generative auto-encoders, and random forest auto-encoders.
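- As a minimal sketch of one such auto-encoder, a principal component auto-encoder can be emulated with scikit-learn's PCA, mapping flattened patches onto themselves through a low-dimensional latent space; `patches` is assumed to be an (N, 49) array of flattened 7×7 patches sampled at unmodified edge locations.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_patch_autoencoder(patches: np.ndarray, n_components: int = 16) -> PCA:
    """Fit a PCA 'auto-encoder': encoder and decoder share the components."""
    return PCA(n_components=n_components).fit(patches)

def autoencode_patches(model: PCA, patches: np.ndarray) -> np.ndarray:
    """Encode into the latent space and decode back to patch space."""
    return model.inverse_transform(model.transform(patches))
```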
- the data-driven model may comprise a multivariate regressor configured for explicitly encoding of local image patches into a latent subspace and reproducing image data from the latent subspace using explicit encoding of the image patches into a latent subspace, followed by a decoding into the original image space.
- One example is to use material-classifiers to transform the original image into material images with or without material concentrations per pixel/voxel.
- a multivariate regressor can then be trained on all patches to reproduce the original image intensities from the material images.
- the original image is tessellated into local regions (also referred to as super pixels or supervoxels), and all image intensities are replaced by the local region's mean intensity (also referred to as the subspace).
- a multivariate regressor, e.g. random forest regression or support vector regression, may then be trained on all patches to reproduce the original image patch intensities from the supervoxel-mean-intensities.
- the central intensity of any image patch can be estimated as a regression value from its surrounding image patch values.
- the medical image 10 may be tessellated into local regions, such as supervoxels, e.g. using simple linear iterative clustering (SLIC), and all image intensities are replaced by the local region's mean intensity. Then all contrast-tagged regions are replaced by the mean value of air.
- a multivariate random forest regressor may be trained on all patches from the whole image volume which do not contain tagging, to reproduce the original image patch intensities from the supervoxel-mean intensities.
- the central intensity of any 7×7 image patch is estimated as a random forest regression value from its surrounding image patch values (e.g. flattened as a one-dimensional feature vector).
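- A 2-D sketch of this embodiment is given below, assuming recent scikit-image, SciPy, and scikit-learn APIs; segment counts, stride, and forest size are illustrative, and a practical implementation would vectorise the patch extraction.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestRegressor

def train_edge_model(image: np.ndarray, tagged_mask: np.ndarray,
                     patch: int = 7, stride: int = 2) -> RandomForestRegressor:
    """Learn to predict a patch's central intensity from the
    superpixel-mean image around it (2-D analogue of the supervoxel case)."""
    # Tessellate into local regions and replace intensities by region means.
    labels = slic(image, n_segments=2000, compactness=0.1,
                  channel_axis=None, start_label=0)
    means = ndimage.mean(image, labels=labels,
                         index=np.arange(labels.max() + 1))
    mean_image = means[labels]

    half = patch // 2
    X, y = [], []
    for i in range(half, image.shape[0] - half, stride):
        for j in range(half, image.shape[1] - half, stride):
            win = (slice(i - half, i + half + 1), slice(j - half, j + half + 1))
            if tagged_mask[win].any():          # train only on tagging-free patches
                continue
            X.append(mean_image[win].ravel())   # flattened one-dimensional feature vector
            y.append(image[i, j])               # central intensity as regression target
    return RandomForestRegressor(n_estimators=50).fit(np.asarray(X), np.asarray(y))
```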
- in step 250, i.e. step e), the trained data-driven model, also referred to as edge model, is applied to the cleansed image 22 to generate a restored image 24 of the object of interest.
- the restoration of the edge profiles is not restricted to certain discrete or analytical filters, such as Dilations, Gaussian, etc. Rather, the flexibility of machine learning may allow synthesizing and exploring a wide range of edge appearances, which are not limited by analytical functions. Further, the algorithm may adapt automatically to varying edge appearances in various image types, and replaces the tedious manual search for a certain restoration technique.
- An exemplary restored image is illustrated in Fig. 2C.
- the data-driven model trained on the first image contour 18 (also referred to as unmodified edges or non-tagging edges) can be applied on both the first image contour 18 and the artificially created cleansed edge 20a (also referred to as modified edges or tagging edges) in the cleansed image 22 to generate the restored image 24 of the object of interest.
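- Continuing the sketch above, inference slides the same patch geometry over the cleansed superpixel-mean image, i.e. the mean image in which contrast-tagged region means were replaced by the mean value of air, and predicts every central intensity; for brevity the border pixels are left untouched.

```python
import numpy as np

def restore_image(model, mean_image_cleansed: np.ndarray, patch: int = 7) -> np.ndarray:
    """Predict each central intensity from its surrounding patch of
    superpixel means; `model` is the regressor trained above."""
    half = patch // 2
    rows = range(half, mean_image_cleansed.shape[0] - half)
    cols = range(half, mean_image_cleansed.shape[1] - half)
    X = [mean_image_cleansed[i - half:i + half + 1, j - half:j + half + 1].ravel()
         for i in rows for j in cols]
    restored = mean_image_cleansed.copy()
    restored[half:-half, half:-half] = (
        model.predict(np.asarray(X)).reshape(len(rows), len(cols)))
    return restored
```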
- in step 260, i.e. step f), the generated restored image of the object of interest is provided, e.g. to a display, or to an image analyser for further processing of the restored image.
- the computer-implemented method 200 may further comprise the step of compositing the medical image 10 and the restored image of the object of interest 24.
- the edge-restored image 24 is not the final image presented to a user, since only edges towards air have been processed.
- the restored image 24 may be composited with the original medical image for all locations far from tagging.
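- A minimal compositing sketch, assuming the tagged mask from the earlier steps: the restored intensities are kept only within a margin around former tagging, and the original image is passed through everywhere else.

```python
import numpy as np
from scipy import ndimage

def composite(original: np.ndarray, restored: np.ndarray,
              tagged_mask: np.ndarray, margin: int = 5) -> np.ndarray:
    """Blend restored and original images: restored intensities near tagging only."""
    near_tagging = ndimage.binary_dilation(tagged_mask, iterations=margin)
    return np.where(near_tagging, restored, original)
```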
- the computer-implemented method described above may be extended for spectral CT or spectral MR.
- in spectral CT, e.g. with the Philips "IQon Spectral CT", it is possible to create synthesized mono-energetic images at different keV values.
- the at least one three-dimensional medical image may comprise synthesized mono-energetic images acquired at different energies.
- different materials show different contrast.
- Spectral CT has the potential to better discriminate different materials (e.g. faeces and tissue) than conventional CT.
- Spectral Virtual Colonoscopy may lead to a higher specificity of the screening even without bowel preparation.
- the method described above allows adapting specifically to the appearance in each spectral band of the multi-channel images.
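- In code, this per-band adaptation amounts to training one edge model per synthesized mono-energetic image. The sketch below reuses the hypothetical train_edge_model() from the earlier example, with `bands` assumed to be a dict mapping keV values to images.

```python
def train_spectral_models(bands: dict, tagged_mask):
    """One independently trained edge model per spectral band (keyed e.g. by keV)."""
    return {kev: train_edge_model(img, tagged_mask) for kev, img in bands.items()}
```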
- the computer-implemented method described above is also versatile for different clinical applications, such as virtual colonoscopy as described above and angiography.
- Fig. 3 schematically shows an example of an apparatus 100 for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed.
- the apparatus 100 comprises an input module 110, a contour classifier module 120, a suppression module 130, a training module 140, an inference module 150, and an output module 160.
- Each module may be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logical circuit, and/or other suitable components that provide the described functionality.
- the apparatus 100 may be any computing device, such as mobile devices, laptop and desktop computers, wearable computing devices, and other computing devices, suitable for processing image data.
- the input module 110 is configured for receiving the medical image of the object of interest.
- the medical image may be a two-dimensional image comprising image pixel data or a three-dimensional image comprising image voxel data.
- Examples of the imaging modality may include, but are not limited to, CT and MRI.
- the contour classifier module 120 is configured for detecting an image contour of the object of interest and classifying the detected image contour into a first image contour and a second image contour.
- the first image contour is representative of an image contour of the first image part of the object of interest
- the second image contour is representative of an image contour of the second image part of the object of interest.
- the suppression module 130 is configured for suppressing the image content to be suppressed or substituting the image content to be suppressed with a virtual material to generate a cleansed image.
- An exemplary operation of the suppression module 130 is described in step 230 of Fig. 1 .
- the training module 140 is configured for training a data-driven model using image data of the first image contour to learn an appearance of the image contour of the object of interest.
- An exemplary operation of the training module 140 is described in step 240 of Fig. 1.
- the data-driven model comprises an auto-encoder configured for mapping local image patches directly onto themselves.
- Each local image patch represents one or a group of pixels or voxels in the medical image.
- Examples of the auto-encoder may include, but are not limited to, principal component auto-encoders, sparse auto-encoders, deep neural auto-encoders, variational auto-encoders, generative auto-encoders, and random forest auto-encoders.
- the data-driven model comprises a multivariate regressor configured for explicitly encoding of local image patches into a latent subspace and reproducing image data from the latent subspace.
- Each local image patch represents one or a group of pixels or voxels in the medical image.
- the apparatus 100 may further comprise a material classifier module (not shown) configured for transforming the medical image of the object of interest into material images.
- the training module is configured for training the multivariate regressor for reproducing image data from the material images.
- the apparatus 100 may further comprise a tessellation module configured for tessellating the medical image of the object of interest into a plurality of local regions and replacing an image intensity of the medical image by a mean intensity of the plurality of local regions.
- the training module is configured for training the multivariate regressor for reproducing image data from the mean intensity of the plurality of local regions.
- Two training schemes may be used for training the data-driven model.
- the training module 140 is configured for training the data-driven model in a training phase and freezing the trained data-driven model.
- the inference module is configured for applying the frozen trained data-driven model in an inference phase.
- the training module 140 is configured for training on the fly on a new instance of a medical image of the object of interest.
- the inference module 150 is configured for applying the trained data-driven model to the cleansed image to generate a restored image of the object of interest. An exemplary operation of the inference module 150 is described in step 250 of Fig. 1 .
- the output module 160 is configured for providing the generated restored image of the object of interest, e.g. to a display (for example, a built-in screen, a connected monitor or projector) or to a file storage (for example, a hard drive or a solid state drive).
- the apparatus 100 may comprise a tagging module configured for detecting the image content to be suppressed.
- the tagging module may apply a pre-trained classifier for detecting tagged locations in the image where suppression is desirable.
- the tagging module may use thresholding to segment the medical image to determine the image content to be suppressed.
- the apparatus 100 may comprise a compositing module (not shown) configured for compositing the medical image and the restored image of the object of interest.
- An exemplary operation of the compositing module is shown in step 260 of Fig. 1 .
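- Pulling the module sketches together, the data flow through apparatus 100 can be illustrated as below; this wiring reuses the hypothetical functions from the earlier examples and is not the patent's reference implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic

def process(image: np.ndarray) -> np.ndarray:
    """Illustrative end-to-end flow: tagging -> suppression -> training
    -> inference -> compositing."""
    tagged = image > TAGGING_HU                  # tagging module (thresholding)
    cleansed = cleanse(image, tagged)            # suppression module
    model = train_edge_model(image, tagged)      # training module (on-the-fly scheme)

    # Superpixel-mean image of the cleansed data, as input for inference.
    labels = slic(cleansed, n_segments=2000, compactness=0.1,
                  channel_axis=None, start_label=0)
    means = ndimage.mean(cleansed, labels=labels,
                         index=np.arange(labels.max() + 1))
    restored = restore_image(model, means[labels])   # inference module

    return composite(image, restored, tagged)        # compositing module
```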
- Fig. 4 schematically shows a medical imaging system 300 according to some embodiments of the present disclosure.
- the medical imaging system 300 comprises a scanner 310 configured to scan an object of interest to acquire at least one three-dimensional image of the object of interest.
- the scanner 310 may be a CT-scanner or an MRI scanner.
- the medical imaging system 300 further comprises an apparatus 100 for processing a medical image of an object of interest acquired by the scanner.
- the phrase "at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
- This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
- a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.
- the computer program element might therefore be stored on a computer unit, which might also be part of an embodiment of the present invention.
- This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above described apparatus.
- the computing unit can be adapted to operate automatically and/or to execute the orders of a user.
- a computer program may be loaded into a working memory of a data processor.
- the data processor may thus be equipped to carry out the method of the invention.
- This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning, and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
- the computer program element might be able to provide all necessary steps to fulfil the procedure of an exemplary embodiment of the method as described above.
- a computer readable medium, such as a CD-ROM, may also be provided, wherein the computer readable medium has a computer program element stored on it, which computer program element is described by the preceding section.
- a computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
- the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network.
- a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
- inventive embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed.
- inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein.
Abstract
The present invention relates to edge restoration. In order to improve a restoration of the artificially created cleansed edges, an apparatus is proposed to automatically restore image edges after digital subtraction or digital material substitution to optimally resemble image edges in unmodified locations. The appearance of edges is machine-learned in an unsupervised non-analytical way from unmodified locations, and then, after digital suppression or digital material substitution, applied to the artificially created cleansed edges.
Description
- The present invention generally relates to edge restoration, and in particular relates to an apparatus for processing a medical image of an object of interest, a medical imaging system, a method for processing a medical image of an object of interest, a computer program element, and a computer readable medium.
- For certain clinical applications, diagnostic reading (viewing) may be improved by subtracting or suppressing image content at locations which are tagged by contrast agent, e.g. stool residuals in colonoscopy, following oral administration of contrast ('electronic cleansing'), or blood following vascular injection of contrast agent ('black blood'). The tagged areas may be substituted with a virtual material (e.g. air, also known as subtraction or suppression). After this substitution, image edges with tagging-neighbourhood may have a different appearance compared to edges without tagging-neighbourhood, thus causing irritation to the user. This is because the intensity difference of the tagged material to its surroundings is in general different in sign as well as amplitude in comparison to non-tagged image locations, which causes specific transition profiles.
- Therefore, a restoration of the artificially created cleansed edges is required to yield a visually satisfactory suppressed/cleansed image for improved diagnostic reading. However, this restoration step is tedious to develop and tune, and may be different for various image types and different reconstruction techniques. In particular, for spectral CT images and derivatives, such as virtual mono-energy images, a different restoration may be needed for each spectral band image.
- There may be a need to improve a restoration of the artificially created cleansed edges.
- The object of the present invention is solved by the subject-matter of the independent claims, wherein further embodiments are incorporated in the dependent claims. It should be noted that the following described aspects of the invention apply also for the apparatus, the medical imaging system, the method, the computer program element, and the computer readable medium.
- According to a first aspect of the present invention, there is provided an apparatus for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed. The apparatus comprises an input module, a contour classifier module, a suppression module, a training module, an inference module, and an output module. The input module is configured for receiving the medical image of the object of interest. The contour classifier module is configured for detecting an image contour of the object of interest and classifying the detected image contour into a first image contour and a second image contour, wherein the first image contour is representative of an image contour of the first image part of the object of interest, and the second image contour is representative of an image contour of the second image part of the object of interest. The suppression module is configured for suppressing the image content to be suppressed or substituting the image content to be suppressed with a virtual material to generate a cleansed image. The training module is configured for training a data-driven model using image data of the first image contour to learn an appearance of the image contour of the object of interest. The inference module is configured for applying the trained data-driven model to the cleansed image to generate a restored image of the object of interest. The output module is configured for providing the generated restored image of the object of interest.
- As noted above, diagnostic reading (viewing) may be improved by subtracting or suppressing image content of an object of interest (e.g. colon) at locations, which are tagged by contrast agent, e.g. stool residuals in colonoscopy. The tagged areas may be substituted with a virtual material, such as air, thereby creating artificially cleansed edges.
- In order to improve a restoration of the artificially created cleansed edges, an apparatus (e.g. a computing device) is proposed to automatically restore an image contour, i.e. image edges, after digital subtraction or digital material substitution to optimally resemble image edges in unmodified locations. In particular, the appearance of the image edges of the object of interest is machine-learned in an unsupervised non-analytical way from the image data at unmodified edge locations different from the artificially created cleansed edges. As the active learning for training the data-driven model is based on the unmodified edges, no manual annotations are required, thereby eliminating tedious and repetitive manual annotation work, which would require delicate sub-voxel accuracy. Additionally, the restoration of the edge profiles is not restricted to certain discrete or analytical filters, such as Dilations, Gaussian, etc. Rather, the flexibility of machine learning may allow synthesizing and exploring a wide range of edge appearances, which are not limited by analytical functions. Further, the algorithm may adapt automatically to varying edge appearances in various image types, and replaces the tedious manual search for a certain restoration technique.
- In an example, integrated auto-encoders may be used as the data-driven model, which map image patches directly onto themselves - that is, the input type equals the output type. Examples of the auto-encoders may include, but are not limited to, principal component auto-encoders, sparse auto-encoders, deep neural auto-encoders, variational auto-encoders, generative auto-encoders, and random forest auto-encoders.
- In another example, a multivariate regressor may be used as the data-driven model for explicitly encoding of local image patches into a latent subspace and reproducing image data from the latent subspace. One example is to use material-classifiers to transform the original image into material images with or without material concentrations per pixel/voxel. A multivariate regressor can then be trained on all image patches to reproduce the original image intensities from the material images.
- After machine-learning of the appearance of the image edges of the object of interest at the unmodified edge locations, the trained data-driven model can be applied to the artificially created cleansed edges to perform a restoration.
- The proposed apparatus and method may be used with standard CT, MRI (Magnetic Resonance Imaging), or CBCT (Cone Beam Computed Tomography), but may also be extended to spectral CT or multi-parametric MRI. In spectral CT or multi-parametric MRI, the algorithm may adapt specifically to the appearance in each spectral band.
- The term "suppression" may refer to e.g. 'electronically suppressed', 'electronically cleansed', 'made transparent', or 'set to air-like appearance'.
- According to an embodiment of the present invention, the data-driven model comprises an auto-encoder configured for mapping local image patches directly onto themselves. Each local image patch represents one or a group of pixels or voxels in the medical image.
- Examples of the auto-encoder may include, but are not limited to, principal component auto-encoders, sparse auto-encoders, deep neural auto-encoders, variational auto-encoders, generative auto-encoders, and random forest auto-encoders.
- The term "image patch" refers to a patch or group of pixels or voxels having a specific size, shape, and location corresponding to an image. For example, image patches can have a predetermined pixel width/height (e.g., 7×7, 8×8, 11×11, or the like) and a location for each image patch can be defined based on one or more centre pixels or voxels.
- The local image patch may have any suitable shape, such as a circular shape, a rectangular shape, a square shape, etc.
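- For illustration, extracting one such square patch around a centre pixel is straightforward; the function name and the odd-size assumption are ours, not the patent's.

```python
import numpy as np

def extract_patch(image: np.ndarray, center: tuple, size: int = 7) -> np.ndarray:
    """Square `size` x `size` patch around `center` = (row, col);
    `size` is assumed odd so that the centre pixel is unique."""
    half = size // 2
    r, c = center
    return image[r - half:r + half + 1, c - half:c + half + 1]
```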
- According to an embodiment of the present invention, the data-driven model comprises a multivariate regressor configured for explicitly encoding of local image patches into a latent subspace and reproducing image data from the latent subspace. Each local image patch represents one or a group of pixels or voxels in the medical image.
- Instead of direct auto-encoding, the data-driven model may use explicit encoding of the image patches into a latent subspace, followed by a decoding into the original image space.
- According to an embodiment of the present invention, the apparatus further comprises a material classifier module configured for transforming the medical image of the object of interest into material images. The training module is configured for training the multivariate regressor for reproducing image data from the material images.
- In this embodiment, material-classifiers may be used to transform the original image into material images with or without material concentrations per pixel/voxel. A multivariate regressor can then be trained on image patches to reproduce the original image intensities from the material images.
- According to an embodiment of the present invention, the apparatus further comprises a tessellation module configured for tessellating the medical image of the object of interest into a plurality of local regions and replacing an image intensity of the medical image by a mean intensity of the plurality of local regions. The training module is configured for training the multivariate regressor for reproducing image data from the mean intensity of the plurality of local regions.
- In this embodiment, the original image may be tessellated into local regions, also referred to as super pixels or supervoxels, and all image intensities may be replaced by the local region's mean intensity (also referred to as the subspace). The suppression module may then replace e.g. the mean contrast value of a region by the mean value of air. A multivariate regressor (e.g. random forest regression or support vector regression) may be trained on image patches to reproduce the original image patch intensities from the supervoxel (or superpixel)-mean-intensities. Specifically, the central intensity of any image patch can be estimated as a regression value from its surrounding image patch values.
- According to an embodiment of the present invention, the training module is configured for training the data-driven model in a training phase and freezing the trained data-driven model. The inference module is configured for applying the frozen trained data-driven model in an inference phase.
- The term "inference phase" refers to deployment or application phase.
- This training scheme may benefit from a large training set and may offer reproducible performance.
- According to an embodiment of the present invention, the training module is configured for training on the fly on a new instance of a medical image of the object of interest.
- This on-the-fly training scheme may train specifically on any new image type, which may not have been seen during training.
- According to an embodiment of the present invention, the apparatus further comprises a tagging module configured for detecting the image content to be suppressed.
- In an example, the tagging module is a pre-trained classifier.
- In an example, the tagging module uses thresholding for segmenting images to determine the image content to be suppressed.
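- As an illustrative sketch of the thresholding variant, assuming CT intensities in Hounsfield units and an assumed threshold of 200 HU (the threshold and the function name are ours, not prescribed by this disclosure):

```python
import numpy as np

def detect_tagged(ct_image: np.ndarray, threshold_hu: float = 200.0) -> np.ndarray:
    """Return a boolean mask of voxels considered contrast-tagged."""
    # Iodine-tagged material is markedly brighter than soft tissue,
    # so a simple global threshold can serve as a baseline detector.
    return ct_image > threshold_hu
```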
- According to an embodiment of the present invention, the apparatus further comprises a compositing module configured for compositing the medical image and the restored image of the object of interest.
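- One possible realization, sketched here purely for illustration, is distance-based blending: keep the restored image near tagged locations and the original image far from tagging (as elaborated later in this description). The blending radius and helper names are our assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def composite(original: np.ndarray, restored: np.ndarray,
              tagged_mask: np.ndarray, radius: float = 5.0) -> np.ndarray:
    """Blend the restored image near tagging into the original elsewhere."""
    # Distance (in voxels) of every voxel to the nearest tagged voxel.
    dist = distance_transform_edt(~tagged_mask)
    # Weight 1 at/near tagging, falling linearly to 0 at `radius`.
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return weight * restored + (1.0 - weight) * original
```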
- According to an embodiment of the present invention, the image content to be suppressed comprises image content at locations which are tagged by a contrast agent.
- According to an embodiment of the present invention, the image content tagged by a contrast agent comprises at least one of:
- stool residuals in colonoscopy; or
- blood in angiography.
- For blood in angiography, the contrast agent may be administered e.g. by vascular injection or by oral administration of contrast.
- According to a second aspect of the present invention, there is provided a medical imaging system. The medical imaging system comprises a scanner configured to scan an object of interest to acquire a medical image of the object of interest, and an apparatus according to the first aspect and any associated examples for processing the medical image of the object of interest.
- The scanner may be a standard CT, MRI, or CBCT scanner, but may also be a spectral CT or multi-parametric MRI scanner.
- According to a third aspect of the present invention, there is provided a computer-implemented method for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed, the computer-implemented method comprising:
- a) receiving the medical image of the object of interest;
- b) detecting an image contour of the object of interest and classifying the detected image contour into a first image contour and a second image contour, wherein the first image contour is representative of an image contour of the first image part of the object of interest, and the second image contour is representative of an image contour of the second image part of the object of interest;
- c) suppressing the image content to be suppressed or substituting the image content to be suppressed with a virtual material to generate a cleansed image;
- d) training a data-driven model using image data of the first image contour to learn an appearance of the image contour of the object of interest;
- e) applying the trained data-driven model to the cleansed image to generate a restored image of the object of interest; and
- f) providing the generated restored image of the object of interest.
- According to another aspect of the present invention, there is provided a computer program element configured, during execution, to perform the method steps of the third aspect and any associated example.
- According to a further aspect of the present invention, there is provided a computer readable medium comprising the computer program element.
- Advantageously, the benefits provided by any of the above aspects equally apply to all of the other aspects and vice versa.
- As used herein, the term "learning" in the context of machine learning refers to the identification and training of suitable algorithms to accomplish tasks of interest. The term "learning" includes, but is not restricted to, association learning, classification learning, clustering, and numeric prediction.
- As used herein, the term "machine-learning" refers to the field of the computer sciences that studies the design of computer programs able to induce patterns, regularities, or rules from past experiences to develop an appropriate response to future data, or describe the data in some meaningful way.
- As used herein, the term "data-driven model" in the context of machine learning refers to a suitable algorithm that is learnt on the basis of appropriate training data.
- As used herein, the term "module" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logical circuit, and/or other suitable components that provide the described functionality.
- It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein.
- These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
- In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention.
-
Fig. 1 shows a flow chart of a computer-implemented method. -
Fig. 2A shows an example of a medical image of an object of interest. -
Fig. 2B shows a cleansed image of the example of Fig. 2A. -
Fig. 2C shows a restored image of the example of Fig. 2A. -
Fig. 3 shows an example of an apparatus. -
Fig. 4 shows an example of a medical imaging system. - In the following, the approach is described in relation to a medical image of a colon. Although the following detailed description uses the application to CT virtual colonoscopy for purposes of illustration, one of ordinary skill in the art will appreciate that the method, apparatus, and medical imaging system described above and below can be adapted to other objects of interest, e.g. blood following vascular injection of contrast agent, and to other imaging modalities, e.g. MRI or CBCT. Accordingly, the following described examples are set forth without any loss of generality to, and without imposing limitations upon, the claimed invention.
- Polyps in the colon can possibly develop into colon cancer. If polyps are removed early, cancer can be prevented effectively. It is therefore recommended, also for asymptomatic patients above a certain age, to perform endoscopic inspection of the colon (colonoscopy) in order to detect and assess possible polyps. Unfortunately, compliance with this screening is low, mainly due to the discomfort associated with endoscopy. Therefore, non-invasive virtual colonoscopy (VC) based on CT was developed as an alternative (also known as CT-colonography, or CT-C). VC is the virtual inspection of the colon wall on a computer screen, either by standard cross-sectional slice viewing or optionally rendered with the help of Volume Rendering (VR) techniques. In VC, VR results in an animated impression similar to the view of a real endoscope during colonoscopy. A necessary step for the clinical inspection of the colon in VC is removing remains of stool or faeces from the colon image prior to visualization, so-called Virtual Cleansing. Unfortunately, the CT contrast of faeces is similar to the contrast of the tissue surrounding the colon. Therefore, this pre-processing of the image is usually supported by orally administering a laxative to remove stool, followed by orally administering a contrast agent containing iodine prior to CT imaging in order to tag remaining stool. This tagging helps remove the remaining faeces from the image. In order to create a visualization of a virtually empty colon, the Hounsfield values of voxels with faeces are set to the value of air.
- However, after this substitution, image edges adjacent to tagged regions may have a different appearance compared to edges without tagging in their neighbourhood, thus causing visual irritation and having a negative effect on diagnostic reading. This is because the intensity difference of the tagged material to its surroundings generally differs in sign as well as amplitude from that at non-tagged image locations, and causes specific transition profiles.
- In order to improve a restoration of the artificially created cleansed edges,
Fig. 1 shows a flow chart of a computer-implemented method 200 according to some embodiments of the present disclosure. The computer-implemented method 200 is proposed for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed. - The computer-implemented
method 200 may be implemented as a device, module or related component in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out operations shown in the method 200 may be written in any combination of one or more programming languages, including an object-oriented programming language such as JAVA, SMALLTALK, C++, Python, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. - In
step 210, i.e. step a), a medical image 10 of the object of interest is received. - The medical image may be a two-dimensional image comprising image pixel data or a three-dimensional image comprising image voxel data.
- In an example, the object of interest is a colon. In an example, the object of interest is a lumen of vasculature.
- An exemplary
medical image 10 of a colon is illustrated in Fig. 2A. The exemplary medical image 10 comprises a first image part 12 not comprising image content to be suppressed and a second image part 14 comprising image content to be suppressed. In this example, the image content to be suppressed is the tagged stool residuals in colonoscopy. - Turning to
Fig. 1, in step 220, i.e. step b), an image contour 16 of the object of interest is detected. Edge detection refers to a set of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. Various techniques may be used for edge detection, such as Prewitt edge detection, Laplacian edge detection, LoG (Laplacian of Gaussian) edge detection, Canny edge detection, etc. - The detected
image contour 16 is classified into a first image contour 18 and a second image contour 20. The first image contour 18 is representative of an image contour of the first image part 12 of the object of interest, and the second image contour 20 is representative of an image contour of the second image part 14 of the object of interest.
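- To make the two-contour classification concrete, the following hedged sketch (our own construction, using the Canny detector named above) assigns edge pixels whose neighbourhood contains tagged material to the second contour and all remaining edge pixels to the first contour; the neighbourhood size is an assumption:

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.feature import canny

def classify_contour(image: np.ndarray, tagged_mask: np.ndarray,
                     neighbourhood: int = 3):
    """Split the detected contour of a 2D slice into two boolean masks."""
    edges = canny(image, sigma=1.0)
    # Grow the tagged region so that edges adjacent to tagging are captured.
    near_tagging = binary_dilation(tagged_mask, iterations=neighbourhood)
    second_contour = edges & near_tagging    # edges bordering tagged content
    first_contour = edges & ~near_tagging    # unmodified air-tissue edges
    return first_contour, second_contour
```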
- In another example, thresholding may be used for segmenting the medical image to determine the image content to be suppressed.
- In the example illustrated in
Fig. 2A, the first image contour 18 is indicated with a solid line, while the second image contour 20 is indicated with a dotted line. The first image contour 18 is representative of an air-tissue transition, while the second image contour 20 is representative of a stool-tissue transition. As the tagged stool residuals are the image content to be suppressed, the second image contour 20, after suppression or substitution, will become an artificially created cleansed edge 20a (see Fig. 2B). - Turning to
Fig. 1, in step c), the image content to be suppressed is suppressed to generate a cleansed image 22. Alternatively, the image content to be suppressed is substituted with a virtual material to generate a cleansed image 22.
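- A minimal sketch of this substitution, assuming CT data in Hounsfield units where air is conventionally about -1000 HU (the constant and function name are ours):

```python
import numpy as np

AIR_HU = -1000.0  # conventional Hounsfield value of air

def cleanse(ct_image: np.ndarray, tagged_mask: np.ndarray) -> np.ndarray:
    """Return a cleansed copy in which tagged voxels are set to air."""
    cleansed = ct_image.copy()
    cleansed[tagged_mask] = AIR_HU
    return cleansed
```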
- An exemplary cleansed
image 22 is illustrated in Fig. 2B. In this example, the image intensities in the second image part 14 that comprises the tagged stool residuals are substituted with a value of air, thereby creating an artificially created cleansed edge 20a. The artificially created cleansed edge 20a, which is indicated with the dashed line, has a different appearance compared to the first image contour 18 indicated with the solid line. For example, the second image contour 20 has an unsmooth edge. This is because the intensity difference of the tagged material to its surroundings is in general different in sign as well as amplitude in comparison to non-tagged image locations, and causes specific transition profiles. The different appearance between the artificially created cleansed edge 20a and the first image contour 18 (i.e. the unmodified image contour of the object of interest) may affect the diagnostic reading. - To improve edge restoration, in
step 240 of Fig. 1, i.e. step d), a data-driven model is trained using image data of the first image contour 18 to learn an appearance of the image contour 16 of the object of interest. In other words, the appearance of the image contour of the object of interest is machine-learned in an unsupervised, non-analytical way from unmodified locations, and then - after digital suppression or substitution - applied to the artificially created cleansed edges to perform a restoration of edges. - Two training schemes may be used for training the data-driven model.
- In a first training scheme, the data-driven model is trained in a training phase, and the frozen data-driven model is applied in the inference phase, i.e. the deployment or application phase. In training mode, an initial model of the data-driven model is trained based on a set of training data to produce a trained data-driven model. In deployment mode, also referred to as inference mode, the pre-trained data-driven model is fed with new, non-training data to operate during normal use. The advantage may be a large training set and reproducible performance.
- In a second training scheme, the data-driven model is trained on the fly on a new instance of an image, after the tagging classifier inference has been applied. The advantage may be that the model can train specifically on any new image type, which may not have been seen during training.
- Local two-dimensional or three-dimensional image patches may be used for modelling the data-driven model, which are smaller than the overall image. Each local two-dimensional image patch represents one or a group of pixels in a two-dimensional medical image or one or a group of voxels in a three-dimensional medical image.
- In an example, the data-driven model may comprise an auto-encoder configured for mapping local image patches directly onto themselves. Various types of auto-encoders may be used, such as principal component auto-encoders, sparse auto-encoders, deep neural auto-encoders, variational auto-encoders, generative auto-encoders, and random forest auto-encoders.
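- As a hedged illustration of the first of these options, a principal component auto-encoder can be sketched with scikit-learn: patches sampled around unmodified edges are projected onto a few principal components and reconstructed, so the reconstruction captures the typical edge appearance. Patch size, component count, and helper names are our assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def train_patch_autoencoder(patches: np.ndarray, n_components: int = 16) -> PCA:
    """Fit a PCA 'auto-encoder' on (N, 7, 7) patches from unmodified edges."""
    flat = patches.reshape(len(patches), -1)
    return PCA(n_components=n_components).fit(flat)

def reconstruct_patch(model: PCA, patch: np.ndarray) -> np.ndarray:
    """Encode a patch into the principal subspace and decode it back."""
    flat = patch.reshape(1, -1)
    return model.inverse_transform(model.transform(flat)).reshape(patch.shape)

# Example with synthetic stand-in patches.
train_patches = np.random.rand(500, 7, 7)
model = train_patch_autoencoder(train_patches)
restored_patch = reconstruct_patch(model, train_patches[0])
```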
- Instead of direct auto-encoding, the data-driven model may comprise a multivariate regressor configured for explicitly encoding local image patches into a latent subspace and then decoding from the latent subspace back into the original image space to reproduce the image data.
- One example is to use material-classifiers to transform the original image into material images with or without material concentrations per pixel/voxel. A multivariate regressor can then be trained on all patches to reproduce the original image intensities from the material images.
- In a further example, the original image is tessellated into local regions (also referred to as superpixels or supervoxels), and all image intensities are replaced by the local region's mean intensity (also referred to as the subspace). A multivariate regressor, e.g. random forest regression or support vector regression, may then be trained on all patches to reproduce the original image patch intensities from the supervoxel-mean-intensities. Specifically, the central intensity of any image patch can be estimated as a regression value from its surrounding image patch values.
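- The following end-to-end sketch of this tessellation variant on a 2D slice uses SLIC superpixels, 7×7 patches, and a random forest regressor; all sizes, parameters, and names are illustrative choices, not prescriptions of this disclosure:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestRegressor

image = np.random.rand(128, 128).astype(np.float32)  # synthetic slice

# 1. Tessellate into superpixels and build the mean-intensity (subspace) image.
labels = slic(image, n_segments=200, compactness=0.1, channel_axis=None)
ids = np.unique(labels)
means = ndimage.mean(image, labels=labels, index=ids)
lookup = np.zeros(labels.max() + 1, dtype=np.float32)
lookup[ids] = means
mean_image = lookup[labels]

# 2. Train: predict each patch's original central intensity from the
#    flattened 7x7 mean-intensity patch at the same location.
half = 3
X, y = [], []
for r in range(half, image.shape[0] - half, 4):      # strided sampling
    for c in range(half, image.shape[1] - half, 4):
        X.append(mean_image[r - half:r + half + 1, c - half:c + half + 1].ravel())
        y.append(image[r, c])
regressor = RandomForestRegressor(n_estimators=50).fit(np.array(X), np.array(y))

# 3. Inference: estimate the central intensity at a new location.
r, c = 10, 10
patch = mean_image[r - half:r + half + 1, c - half:c + half + 1].ravel()
predicted_centre = regressor.predict(patch[np.newaxis, :])[0]
```

In a real pipeline, the training patches would be restricted to locations without tagging, and the regressor would then be evaluated along the artificially created cleansed edges.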
- In the example of
Fig. 2B, the medical image 10 may be tessellated into local regions, such as supervoxels, e.g. using simple linear iterative clustering (SLIC), and all image intensities are replaced by the local region's mean intensity. Then all contrast-tagged regions are replaced by the mean value of air. A multivariate random forest regressor may be trained on all patches from the whole image volume which do not contain tagging, in order to reproduce the original image patch intensities from the supervoxel-mean intensities. In this example, the central intensity of any 7×7 image patch is estimated as a random forest regression value from its surrounding image patch values (e.g. flattened as a one-dimensional feature vector). - Turning to
Fig. 1, in step 250, i.e. step e), the trained data-driven model, also referred to as the edge model, is applied to the cleansed image 22 to generate a restored image 24 of the object of interest. - As the active learning for training the data-driven model is based on the unmodified edges, i.e. the
first image contour 18, no manual annotations are required, thereby eliminating tedious and repetitive manual annotation work. Additionally, the restoration of the edge profiles is not restricted to certain discrete or analytical filters, such as dilations, Gaussians, etc. Rather, the flexibility of machine learning may allow a wide range of edge appearances to be synthesized and explored, which are not limited by analytical functions. Further, the algorithm may adapt automatically to varying edge appearances in various image types, and replaces the tedious manual search for a certain restoration technique. - An exemplary restored image is illustrated in
Fig. 2C. In this example, the data-driven model trained on the first image contour 18 (also referred to as unmodified edges or non-tagging edges) can be applied on both the first image contour 18 and the artificially created cleansed edge 20a (also referred to as modified edges or tagging edges) in the cleansed image 22 to generate the restored image 24 of the object of interest. - Turning to
Fig. 1, in step 260, i.e. step f), the generated restored image of the object of interest is provided, e.g. to a display, or to an image analyser for further processing of the restored image, etc. - Optionally, as shown in
Fig. 1, the computer-implemented method 200 may further comprise the step of compositing the medical image 10 and the restored image 24 of the object of interest. In other words, the edge-restored image 24 is not the final image presented to a user, since only edges towards air have been processed. The restored image 24 may be composited with the original medical image for all locations far from tagging. - It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited. For example, in
Fig. 1, some of the steps may be performed in a different order or concurrently. - The computer-implemented method described above may be extended to spectral CT or spectral MR. For example, in spectral CT, e.g. with the Philips "IQon Spectral CT", it is possible to create synthesized mono-energetic images at different keV values. In other words, the at least one three-dimensional medical image may comprise synthesized mono-energetic images acquired at different energies. In these mono-energetic images, different materials show different contrast. Spectral CT has the potential to better discriminate different materials (e.g. faeces and tissue) than conventional CT. Spectral Virtual Colonoscopy may lead to a higher specificity of the screening even without bowel preparation. The method described above allows the data-driven model to adapt specifically to the appearance in each spectral band of the multi-channel images.
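- As a brief sketch of this per-band adaptation (our own construction), one patch model can be fitted per spectral band so that each band's edge appearance is learned separately:

```python
import numpy as np
from sklearn.decomposition import PCA

# Three synthetic spectral bands, each with 500 flattened 7x7 edge patches.
bands = np.random.rand(3, 500, 49)
per_band_models = [PCA(n_components=16).fit(band) for band in bands]

def reconstruct(band_index: int, flat_patch: np.ndarray) -> np.ndarray:
    """Reconstruct a flattened patch with the model of its spectral band."""
    m = per_band_models[band_index]
    return m.inverse_transform(m.transform(flat_patch[np.newaxis, :]))[0]

example = reconstruct(0, bands[0, 0])
```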
- The computer-implemented method described above is also versatile for different clinical applications, such as virtual colonoscopy as described above and angiography.
-
Fig. 3 schematically shows an example of an apparatus 100 for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed. - The
apparatus 100 comprises an input module 110, a contour classifier module 120, a suppression module 130, a training module 140, an inference module 150, and an output module 160. Each module may be part of, or include, an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logical circuit, and/or other suitable components that provide the described functionality. - The
apparatus 100 may be any computing device suitable for processing image data, such as a mobile device, a laptop or desktop computer, or a wearable computing device. - The
input module 110 is configured for receiving the medical image of the object of interest. The medical image may be a two-dimensional image comprising image pixel data or a three-dimensional image comprising image voxel data. Examples of the imaging modality may include, but are not limited to, CT and MRI. - The
contour classifier module 120 is configured for detecting an image contour of the object of interest and classifying the detected image contour into a first image contour and a second image contour. The first image contour is representative of an image contour of the first image part of the object of interest, and the second image contour is representative of an image contour of the second image part of the object of interest. An exemplary operation of the contour classifier module 120 is described in step 220 of Fig. 1. - The
suppression module 130 is configured for suppressing the image content to be suppressed or substituting the image content to be suppressed with a virtual material to generate a cleansed image. An exemplary operation of the suppression module 130 is described in step 230 of Fig. 1. - The
training module 140 is configured for training a data-driven model using image data of the first image contour to learn an appearance of the image contour of the object of interest. - An exemplary operation of the
training module 140 is described in step 240 of Fig. 1. - In an example, the data-driven model comprises an auto-encoder configured for mapping local image patches directly onto themselves. Each local image patch represents one or a group of pixels or voxels in the medical image. Examples of the auto-encoder may include, but are not limited to, principal component auto-encoders, sparse auto-encoders, deep neural auto-encoders, variational auto-encoders, generative auto-encoders, and random forest auto-encoders.
- In an example, the data-driven model comprises a multivariate regressor configured for explicitly encoding local image patches into a latent subspace and reproducing image data from the latent subspace. Each local image patch represents one or a group of pixels or voxels in the medical image.
- In an example, the
apparatus 100 may further comprise a material classifier module (not shown) configured for transforming the medical image of the object of interest into material images. The training module is configured for training the multivariate regressor for reproducing image data from the material images. - In an example, the
apparatus 100 may further comprise a tessellation module configured for tessellating the medical image of the object of interest into a plurality of local regions and replacing an image intensity of the medical image by a mean intensity of the plurality of local regions. The training module is configured for training the multivariate regressor for reproducing image data from the mean intensity of the plurality of local regions. - Two training schemes may be used for training the data-driven model.
- In a first training scheme, the
training module 140 is configured for training the data-driven model in a training phase and freezing the trained data-driven model. The inference module is configured for applying the frozen trained data-driven model in an inference phase. - In a second training scheme, the
training module 140 is configured for training on the fly on a new instance of a medical image of the object of interest. - The
inference module 150 is configured for applying the trained data-driven model to the cleansed image to generate a restored image of the object of interest. An exemplary operation of the inference module 150 is described in step 250 of Fig. 1. - The
output module 160 is configured for providing the generated restored image of the object of interest, e.g. to a display (for example, a built-in screen, a connected monitor or projector) or to a file storage (for example, a hard drive or a solid state drive). - Optionally, the
apparatus 100 may comprise a tagging module configured for detecting the image content to be suppressed. In an example, the tagging module may apply a pre-trained classifier for detecting tagged locations in the image where suppression is desirable. In an example, the tagging module may use thresholding to segment the medical image to determine the image content to be suppressed. - Optionally, the
apparatus 100 may comprise a compositing module (not shown) configured for compositing the medical image and the restored image of the object of interest. An exemplary operation of the compositing module is shown in step 260 of Fig. 1. -
Fig. 4 schematically shows a medical imaging system 300 according to some embodiments of the present disclosure. - The
medical imaging system 300 comprises a scanner 310 configured to scan an object of interest to acquire at least one three-dimensional image of the object of interest. The scanner 310 may be a CT-scanner or an MRI scanner. - The
medical imaging system 300 further comprises an apparatus 100 for processing a medical image of an object of interest acquired by the scanner.
- The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
- The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified.
- As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of' or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be interpreted as indicating exclusive alternatives (i.e. "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of."
- As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
- In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively.
- In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.
- The computer program element might therefore be stored in a computing unit, which might also be part of an embodiment of the present invention. This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above described apparatus. The computing unit can be adapted to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method of the invention.
- This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
- Furthermore, the computer program element might be able to provide all necessary steps to fulfil the procedure of an exemplary embodiment of the method as described above.
- According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, is presented wherein the computer readable medium has a computer program element stored on it which computer program element is described by the preceding section.
- A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
- However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
- While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
Claims (15)
- An apparatus (100) for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed, the apparatus comprising:
- an input module (110) configured for receiving the medical image of the object of interest;
- a contour classifier module (120) configured for detecting an image contour of the object of interest and classifying the detected image contour into a first image contour and a second image contour, wherein the first image contour is representative of an image contour of the first image part of the object of interest, and the second image contour is representative of an image contour of the second image part of the object of interest;
- a suppression module (130) configured for suppressing the image content to be suppressed or substituting the image content to be suppressed with a virtual material to generate a cleansed image;
- a training module (140) configured for training a data-driven model using image data of the first image contour to learn an appearance of the image contour of the object of interest;
- an inference module (150) configured for applying the trained data-driven model to the cleansed image to generate a restored image of the object of interest; and
- an output module (160) configured for providing the generated restored image of the object of interest.
- Apparatus according to claim 1,
wherein the data-driven model comprises an auto-encoder configured for mapping local image patches directly onto themselves, wherein each local image patch represents one or a group of pixels or voxels in the medical image. - Apparatus according to claim 1,
wherein the data-driven model comprises a multivariate regressor configured for explicitly encoding local image patches into a latent subspace and reproducing image data from the latent subspace, wherein each local image patch represents one or a group of pixels or voxels in the medical image. - Apparatus according to claim 3, further comprising:
- a material classifier module configured for transforming the medical image of the object of interest into material images,
wherein the training module is configured for training the multivariate regressor for reproducing image data from the material images. - Apparatus according to claim 3, further comprising:
- a tessellation module configured for tessellating the medical image of the object of interest into a plurality of local regions and replacing an image intensity of the medical image by a mean intensity of the plurality of local regions,
wherein the training module is configured for training the multivariate regressor for reproducing image data from the mean intensity of the plurality of local regions. - Apparatus according to any one of the preceding claims,
wherein the training module is configured for training the data-driven model in a training phase and freezing the trained data-driven model; and
wherein the inference module is configured for applying the frozen trained data-driven model in an inference phase. - Apparatus according to any one of claims 1 to 5,
wherein the training module is configured for training on the fly on a new instance of a medical image of the object of interest. - Apparatus according to any one of the preceding claims, further comprising:
- a tagging module configured for detecting the image content to be suppressed.
- Apparatus according to any one of the preceding claims, further comprising:
- a compositing module configured for compositing the medical image and the restored image of the object of interest.
- Apparatus according to any one of the preceding claims,
wherein the image content to be suppressed comprises image content at locations which are tagged by a contrast agent. - Apparatus according to claim 10,
wherein the image content tagged by a contrast agent comprises at least one of:
- stool residuals in colonoscopy; or
- blood in angiography.
- A medical imaging system, comprising:
- a scanner configured to scan an object of interest to acquire a medical image of the object of interest; and
- an apparatus according to any one of the preceding claims for processing the medical image of the object of interest.
- A computer-implemented method for processing a medical image of an object of interest that comprises a first image part not comprising image content to be suppressed and a second image part comprising image content to be suppressed, the computer-implemented method comprising:
a) receiving (210) the medical image of the object of interest;
b) detecting (220) an image contour of the object of interest and classifying the detected image contour into a first image contour and a second image contour, wherein the first image contour is representative of an image contour of the first image part of the object of interest, and the second image contour is representative of an image contour of the second image part of the object of interest;
c) suppressing (230) the image content to be suppressed or substituting the image content to be suppressed with a virtual material to generate a cleansed image;
d) training (240) a data-driven model using image data of the first image contour to learn an appearance of the image contour of the object of interest;
e) applying (250) the trained data-driven model to the cleansed image to generate a restored image of the object of interest; and
f) providing (260) the generated restored image of the object of interest.
- A computer program element configured, during execution, to perform the method steps of claim 13.
- A computer readable medium comprising the computer program element of claim 14.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20211551.5A EP4009271A1 (en) | 2020-12-03 | 2020-12-03 | Machine learning of edge restoration following contrast suppression/material substitution |
PCT/EP2021/083264 WO2022117470A1 (en) | 2020-12-03 | 2021-11-28 | Machine learning of edge restoration following contrast suppression/material substitution |
CN202180080680.0A CN116547697A (en) | 2020-12-03 | 2021-11-28 | Machine learning for edge recovery after contrast suppression/material replacement |
US18/038,562 US20240005455A1 (en) | 2020-12-03 | 2021-11-28 | Machine learning of edge restoration following contrast suppression/material substitution |
EP21820565.6A EP4256512B1 (en) | 2020-12-03 | 2021-11-28 | Machine learning of edge restoration following contrast suppression/material substitution |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20211551.5A EP4009271A1 (en) | 2020-12-03 | 2020-12-03 | Machine learning of edge restoration following contrast suppression/material substitution |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4009271A1 true EP4009271A1 (en) | 2022-06-08 |
Family
ID=73698641
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20211551.5A Withdrawn EP4009271A1 (en) | 2020-12-03 | 2020-12-03 | Machine learning of edge restoration following contrast suppression/material substitution |
EP21820565.6A Active EP4256512B1 (en) | 2020-12-03 | 2021-11-28 | Machine learning of edge restoration following contrast suppression/material substitution |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21820565.6A Active EP4256512B1 (en) | 2020-12-03 | 2021-11-28 | Machine learning of edge restoration following contrast suppression/material substitution |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240005455A1 (en) |
EP (2) | EP4009271A1 (en) |
CN (1) | CN116547697A (en) |
WO (1) | WO2022117470A1 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020172188A1 (en) * | 2019-02-19 | 2020-08-27 | Cedars-Sinai Medical Center | Systems and methods for calcium-free computed tomography angiography |
-
2020
- 2020-12-03 EP EP20211551.5A patent/EP4009271A1/en not_active Withdrawn
-
2021
- 2021-11-28 US US18/038,562 patent/US20240005455A1/en active Pending
- 2021-11-28 EP EP21820565.6A patent/EP4256512B1/en active Active
- 2021-11-28 CN CN202180080680.0A patent/CN116547697A/en active Pending
- 2021-11-28 WO PCT/EP2021/083264 patent/WO2022117470A1/en active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020172188A1 (en) * | 2019-02-19 | 2020-08-27 | Cedars-Sinai Medical Center | Systems and methods for calcium-free computed tomography angiography |
Non-Patent Citations (3)
Title |
---|
"LECTURE NOTES IN COMPUTER SCIENCE", vol. 2879, 1 January 2003, SPRINGER BERLIN HEIDELBERG, Berlin, Heidelberg, ISBN: 978-3-54-045234-8, ISSN: 0302-9743, article IWO SERLIE ET AL: "Computed Cleansing for Virtual Colonoscopy Using a Three-Material Transition Model", pages: 175 - 183, XP055096939, DOI: 10.1007/978-3-540-39903-2_22 * |
TACHIBANA RIE ET AL: "Deep Learning Electronic Cleansing for Single- and Dual-Energy CT Colonography", RADIOGRAPHICS, vol. 38, no. 7, 1 November 2018 (2018-11-01), US, pages 2034 - 2050, XP055802252, ISSN: 0271-5333, Retrieved from the Internet <URL:https://pubs.rsna.org/doi/pdf/10.1148/rg.2018170173> DOI: 10.1148/rg.2018170173 * |
ZALIS M E ET AL: "DIGITAL SUBTRACTION BOWEL CLEANSING FOR CT COLONOGRAPHY USING MORPHOLOGICAL AND LINEAR FILTRATION METHODS", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 23, no. 11, 1 November 2004 (2004-11-01), pages 1335 - 1343, XP001237727, ISSN: 0278-0062, DOI: 10.1109/TMI.2004.826050 *
Also Published As
Publication number | Publication date |
---|---|
EP4256512A1 (en) | 2023-10-11 |
EP4256512B1 (en) | 2024-07-10 |
WO2022117470A1 (en) | 2022-06-09 |
CN116547697A (en) | 2023-08-04 |
US20240005455A1 (en) | 2024-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11062449B2 (en) | Method and system for extracting vasculature | |
Yang et al. | Efficient and robust instrument segmentation in 3D ultrasound using patch-of-interest-FuseNet with hybrid loss | |
Serlie et al. | Electronic cleansing for computed tomography (CT) colonography using a scale-invariant three-material model | |
Kubisch et al. | Vessel visualization with volume rendering | |
Hammon et al. | Model-based pancreas segmentation in portal venous phase contrast-enhanced CT images | |
CN111462115A (en) | Medical image display method and device and computer equipment | |
Wang et al. | Low-dose CT denoising using a progressive wasserstein generative adversarial network | |
Kayser et al. | Understanding the effects of artifacts on automated polyp detection and incorporating that knowledge via learning without forgetting | |
US20220138936A1 (en) | Systems and methods for calcium-free computed tomography angiography | |
Lee et al. | No-reference perceptual CT image quality assessment based on a self-supervised learning framework | |
Jafari et al. | LMISA: A lightweight multi-modality image segmentation network via domain adaptation using gradient magnitude and shape constraint | |
EP4256512B1 (en) | Machine learning of edge restoration following contrast suppression/material substitution | |
EP4009227A1 (en) | Local spectral-covariance computation and display | |
CN114882163A (en) | Volume rendering method, system, apparatus and storage medium | |
Preim et al. | Visualization, Visual Analytics and Virtual Reality in Medicine: State-of-the-art Techniques and Applications | |
Zheng et al. | SR-CycleGAN: super-resolution of clinical CT to micro-CT level with multi-modality super-resolution loss | |
Zannah et al. | Semantic Segmentation on Panoramic X-ray Images Using U-Net Architectures | |
Manshadi et al. | Colorectal Polyp Localization: From Image Restoration to Real-time Detection with Deep Learning | |
Kiraly | 3D image analysis and visualization of tubular structures | |
US20240013355A1 (en) | Suppression of tagged elements in medical images | |
EP3889896A1 (en) | Model-based virtual cleansing for spectral virtual colonoscopy | |
Applegate et al. | Self-supervised denoising of Nyquist-sampled volumetric images via deep learning | |
EP4350629A1 (en) | Artifact-driven data synthesis in computed tomography | |
US20070106402A1 (en) | Calcium cleansing for vascular visualization | |
Ferreira | 3D Lung Computed Tomography Synthesis using Generative Adversarial Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20221209 |