US12299876B2 - Deep learning based blob detection systems and methods - Google Patents
- Publication number
- US12299876B2 (application US17/698,750)
- Authority
- US
- United States
- Prior art keywords
- blob
- net
- dog
- probability map
- instructions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30084—Kidney; Renal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
Definitions
- the present disclosure relates to image processing and, in particular, systems and methods for blob detection using deep learning.
- Imaging biomarkers play a significant role in medical diagnostics and in monitoring disease progression and response to therapy.
- Development and validation of imaging biomarkers involves detection, segmentation, and classification of imaging features, and various conventional deep learning tools have been developed to perform these functions.
- however, these deep learning tools are strongly affected by image quality.
- Conventional imaging tools were developed to, e.g., precisely map and measure individual glomeruli in kidneys, which would allow detection of kidney pathology.
- the system may include a non-transitory computer-readable storage medium configured to store a plurality of instructions thereon which, when executed by a processor, cause the system to: train a U-Net and generate a probability map including a plurality of centroids of a plurality of corresponding blobs; derive, from the U-Net, two distance maps with bounded probabilities; apply Difference of Gaussian (DoG) with an adaptive scale constrained by the two distance maps with the bounded probabilities; and apply Hessian analysis and perform a blob segmentation.
- the system may include a multi-threshold, multi-scale small blob detector.
- the two distance maps may include binarized maps of distances between the plurality of centroids of the plurality of corresponding blobs utilized to bound a search space for scales of the DoG.
- the plurality of instructions may be further configured to cause the system to generate a Hessian convexity map using the adaptive scale.
- the plurality of instructions may be further configured to cause the system to eliminate an under-segmentation of the U-Net.
- the system may include a Bi-Threshold Constrained Adaptive Scale (BTCAS) blob detector configured to perform the plurality of instructions.
- the plurality of instructions may include an implementation of a modified fully convolutional network including one or more concatenation paths.
- a method for blob detection using deep learning may include: obtaining an image for detecting a plurality of blobs; pre-training a U-Net to generate a probability map to detect a plurality of corresponding centroids of the plurality of blobs; deriving, from the U-Net, two distance maps including bounded probabilities; deriving, from the two distance maps, a plurality of bounded scales; smoothing each window of each centroid of the plurality of centroids with a Difference of Gaussian (DoG) filter, wherein the DoG filter includes an adaptive optimum scale constrained by the bounded scales; conducting a Hessian analysis on the smoothed window of each centroid; and identifying a plurality of final segmented voxel sets corresponding to the plurality of blobs.
- the image may include an image of kidney glomeruli.
- the deriving the two distance maps may include minimizing a global loss function, as set forth in the Description below.
- the binary cross entropy loss function may be defined as set forth in the Description below.
- the method may further include outputting, to a computer in communicative connection with the U-Net, a count of the plurality of blobs in the image.
- the conducting the Hessian analysis may include eliminating an under-segmentation of the U-Net.
- an apparatus for blob detection using deep learning may include a non-transitory computer-readable storage medium configured to store a plurality of instructions thereon which, when executed by a processor, cause the apparatus to: train a U-Net and generate a probability map including a plurality of centroids of a plurality of corresponding blobs; derive, from the U-Net, two distance maps with bounded probabilities; apply Difference of Gaussian (DoG) with an adaptive scale constrained by the two distance maps with the bounded probabilities; and apply Hessian analysis and perform a blob segmentation.
- the apparatus may include a multi-threshold, multi-scale small blob detector.
- the plurality of instructions may be further configured to cause the apparatus to generate a Hessian convexity map using the adaptive scale.
- the plurality of instructions may be further configured to cause the apparatus to eliminate an under-segmentation of the U-Net.
- the apparatus may include a Bi-Threshold Constrained Adaptive Scale (BTCAS) blob detector configured to perform the plurality of instructions.
- FIGS. 2 A and 2 B illustrate a process of deriving a distance map from a U-Net in accordance with various exemplary embodiments
- FIG. 3 shows images related to the training dataset of U-Net for an experiment conducted in accordance with various exemplary embodiments
- “electronic communication” means communication of at least a portion of the electronic signals with physical coupling (e.g., “electrical communication” or “electrically coupled”) and/or without physical coupling and via an electromagnetic field (e.g., “inductive communication” or “inductively coupled” or “inductive coupling”).
- “transmit” may include sending at least a portion of the electronic data from one system component to another (e.g., over a network connection).
- “data,” “information,” or the like may include encompassing information such as commands, queries, files, messages, data for storage, and/or the like in digital or any other form.
- “satisfy,” “meet,” “match,” “associated with,” or similar phrases may include an identical match, a partial match, meeting certain criteria, matching a subset of data, a correlation, satisfying certain criteria, a correspondence, an association, an algorithmic relationship, and/or the like.
- “authenticate” or similar terms may include an exact authentication, a partial authentication, authenticating a subset of data, a correspondence, satisfying certain criteria, an association, an algorithmic relationship, and/or the like.
- references to “various embodiments,” “one embodiment,” “an embodiment,” “an example embodiment,” etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. After reading the description, it will be apparent to one skilled in the relevant art(s) how to implement the disclosure in alternative embodiments.
- Principles of the present disclosure contemplate use of advanced techniques for detecting and enumerating blobs, for example, kidney glomeruli.
- the systems and methods disclosed herein may be utilized to process and/or evaluate images, such as medical images of kidneys.
- the BTCAS systems and methods were compared against four other methods—HDoG (Hessian-based Difference of Gaussians), U-Net with standard thresholding, U-Net with optimal thresholding, and UH-DoG (U-Net and Hessian-based Difference of Gaussians)—using Precision, Recall, F-score, Dice coefficient, and IoU (Intersection over Union), and the BTCAS systems and methods were found to statistically outperform the compared detectors.
- CFE-MRI: cationic ferritin-enhanced magnetic resonance imaging
- the blob detector disclosed herein includes two steps to detect blobs (e.g., glomeruli) from CFE-MRI of kidneys: (1) training U-Net to generate a probability map (e.g., denoising a raw image) to detect the centroids of the blobs, and then deriving two distance maps with bounded probabilities; and (2) applying the Difference of Gaussian (DoG) with an adaptive scale constrained by the bounded distance maps, followed by Hessian analysis for final blob segmentation.
- U-Net 100 (a modified fully convolutional network) includes an encoding path 102 and a decoding path 104 .
- the encoding path 102 has four blocks. Within each block, there are two 3 ⁇ 3 convolutional layers (Conv 3 ⁇ 3), a rectified linear unit (ReLU) layer, and a 2 ⁇ 2 max-pooling layer (Max pool 2 ⁇ 2). After each max-pooling layer, the resolution of the feature maps is halved, and the channel is doubled as shown in FIG. 1 .
- the input images are compressed by layer through the encoding path 102 .
- a corresponding decoding path 104 performs an inverse operation to generate an output including a reconstructed probability map of the same size as the input images.
- the resolution is increased by layer through the decoding path 104 .
- concatenation paths are added between them, marked by solid black arrows as shown in FIG. 1 .
- the final layer is a 1 ⁇ 1 convolutional layer (Conv 1 ⁇ 1), followed by a sigmoid function as shown. This sigmoid function ensures that the resultant output is a probability map.
- U-Net 100 may be directly used as a model for segmentation. In some embodiments wherein the output labeling is unknown, U-Net 100 may be used to process and denoise the images. Here, since the ground truth is unknown, the denoising capabilities of U-Net 100 are investigated. It is noted that, for CFE-MRI of kidneys, the glomeruli are extremely small—similar to noise that can be potentially removed by autoencoders.
- U-Net may advantageously (e.g., over autoencoder model) remove background noise from the MR images while simultaneously enhancing the glomerular detection.
- the goal of training U-Net 100 is to obtain a function U(X; Θ) mapping X to Y by learning and optimizing the parameters Θ of convolutional and deconvolutional kernels. This may be achieved by minimizing the global loss function: Θ* = argmin_Θ (1/N) Σ_{n=1..N} loss(U(X_n; Θ), Y_n), (2) where loss(·) is the binary cross entropy loss: loss = −(1/(I_1·I_2·I_3)) Σ_k [y_k log U_k(X; Θ) + (1 − y_k) log(1 − U_k(X; Θ))]. (3)
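The binary cross entropy training objective described above can be sketched with NumPy; the function names below (bce_loss, global_loss) are illustrative, not from the patent:

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross entropy averaged over all voxels of one sample."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def global_loss(batch_true, batch_pred):
    """Global loss: mean per-sample BCE over the N samples of a batch."""
    return np.mean([bce_loss(t, p) for t, p in zip(batch_true, batch_pred)])
```

In an actual training loop, `batch_pred` would be the sigmoid output of the network and the loss would be minimized over Θ by gradient descent.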
- glomeruli in CFE-MR images are roughly spherical in shape, with varying image magnitudes. Based on this observation, Proposition 1 may be developed. Proposition 1 as well as Proposition 2 discussed herein are described with more detail in a separate section of the present disclosure.
- a first use of Proposition 1 may be to identify the centroid of any blob. From Proposition 1, the centroid of any bright blob may reach maximum probability. Therefore, a regional maximum function RM may be applied to the probability map U(x, y, z) to find voxels with maximum probability from the connected neighborhood voxels as blob centroids:
- RM(U) = max_{u ∈ U(x, y, z), δu ∈ (−k, k)} U(u + δu), (5) where k is the Euclidean distance between each voxel and its neighborhood voxels.
- Each blob centroid C i ⁇ C may have maximum probability within 6-connected neighborhood voxels.
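The regional-maximum centroid extraction over a 6-connected neighborhood can be sketched with SciPy; the min_prob floor used to suppress background maxima is an assumed parameter:

```python
import numpy as np
from scipy import ndimage

def blob_centroids(prob_map, min_prob=0.5):
    """Return voxels that are regional maxima of a 3D probability map within
    a 6-connected neighborhood and above an assumed probability floor."""
    footprint = ndimage.generate_binary_structure(3, 1)  # 6-connectivity
    local_max = ndimage.maximum_filter(prob_map, footprint=footprint)
    mask = (prob_map == local_max) & (prob_map >= min_prob)
    return np.argwhere(mask)  # one (z, y, x) row per centroid
```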
- a second use of Proposition 1 may be to binarize the probability map with a confidence level. Otsu's thresholding may be used first to remove noise voxels and to extract the probability distribution of blob voxels. Next, instead of using a single threshold, the two-sigma rule may be applied to the distribution to identify the lower probability δL and higher probability δH covering the 95% range of the probabilities. As a result, the probability map may then be binarized to B_L(x, y, z) ∈ {0,1}^(I_1×I_2×I_3) and B_H(x, y, z) ∈ {0,1}^(I_1×I_2×I_3).
- B_L(x, y, z) may approximate a blob with larger size and B_H(x, y, z) may approximate a blob with smaller size.
- Ω = {(x, y, z) | B(x, y, z) = 1} may be defined as the set of blob voxels and ∂Ω as the set of boundary voxels.
- d(·) is the Euclidean distance function of any two voxels. The Euclidean distance between each voxel and the nearest boundary voxel may be: D(x, y, z) = min_{(x′, y′, z′) ∈ ∂Ω} d((x, y, z), (x′, y′, z′)).
- two distance maps may be derived: D_L(x, y, z) ∈ R^(I_1×I_2×I_3) and D_H(x, y, z) ∈ R^(I_1×I_2×I_3), respectively.
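Deriving the two bounded distance maps from the binarized probability map can be sketched as follows, assuming the probability bounds δL and δH have already been obtained (e.g., via Otsu's thresholding and the two-sigma rule):

```python
import numpy as np
from scipy import ndimage

def distance_maps(prob_map, delta_l, delta_h):
    """Binarize the probability map at the low and high probability bounds
    and derive Euclidean distance maps D_L and D_H, where each foreground
    voxel holds its distance to the nearest background (boundary) voxel."""
    b_l = prob_map >= delta_l   # larger blob region (low threshold)
    b_h = prob_map >= delta_h   # smaller blob region (high threshold)
    d_l = ndimage.distance_transform_edt(b_l)
    d_h = ndimage.distance_transform_edt(b_h)
    return d_l, d_h
```

Because the high-threshold region is contained in the low-threshold region, D_H ≤ D_L holds everywhere, which is what makes the pair usable as radius bounds.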
- the plot (a) in FIG. 2 A illustrates a probability distribution of a probability map.
- the image (b) in FIG. 2 A shows a visualization of the probability map.
- the plot (c) in FIG. 2 A shows a probability distribution after applying Otsu's thresholding.
- the image (d) in FIG. 2 A shows a visualization of a blob's probability.
- the image (e) in FIG. 2 B shows a binarized probability map B L under low threshold ⁇ L .
- the image (f) in FIG. 2 B shows a binarized probability map B H under high threshold ⁇ H .
- the image (g) in FIG. 2 B shows a distance map D L derived from B L .
- the image (h) in FIG. 2 B shows a distance map D H derived from B H .
- the radius r_i of blob i may be approximated as: r_i ∈ (D_H(C_i), D_L(C_i)). (9)
- the smoothing scale in DoG is positively correlated with the blob radius.
- the bounded radius information in (9) may be used to constrain the adaptive scales in DoG imaging smoothing, as described further herein.
- the DoG filter smooths the image more efficiently in 3D than the Laplace of Gaussian (LoG) filter does.
- a normalized DoG, DoG_nor(x, y, z; s_i), with multi-scale s_i ∈ (s_i^L, s_i^H) may be applied on a small 3D window of size N×N×N (N > 2·D_L(C_i)) with the window center being the blob centroid C_i ∈ C.
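The constrained DoG smoothing can be sketched as below; the narrow-minus-wide sign convention and the k = √2 scale ratio are assumptions, chosen so that a bright blob yields a positive response at its centre:

```python
import numpy as np
from scipy import ndimage

SQRT3 = np.sqrt(3.0)

def dog_nor(window, s, k=np.sqrt(2.0)):
    """Normalized Difference-of-Gaussian response at scale s. The s**2 factor
    is the usual scale normalization; narrow minus wide makes the response
    positive at the centre of a bright blob."""
    return s ** 2 * (ndimage.gaussian_filter(window, s)
                     - ndimage.gaussian_filter(window, k * s))

def scale_range(d_h_at_centroid, d_l_at_centroid):
    """Bounded DoG scales for one blob, per s = r / sqrt(3) in 3D."""
    return d_h_at_centroid / SQRT3, d_l_at_centroid / SQRT3
```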
- the Hessian matrix for this voxel may be the 3×3 matrix of second-order partial derivatives of the normalized DoG: H(DoG_nor(x, y, z; s_i)) = [∂²DoG_nor/∂p∂q], p, q ∈ {x, y, z}.
- each voxel of a transformed bright or dark blob may have a negative or positive definite Hessian.
- the Hessian convexity window, HW (x, y, z; s i ) may be defined as a binary indicator matrix:
- HW(x, y, z; s_i) = 1 if H(DoG_nor(x, y, z; s_i)) is negative definite, and 0 otherwise. (14)
- the optimum scale s_i* for each blob may be determined if BW_DoG(s_i*) is maximum with s_i ∈ (s_i^L, s_i^H).
- the final segmented voxel set may be: S_blob = {(x, y, z) | (x, y, z) ∈ DoG_nor(x, y, z; s_i), HW(x, y, z; s_i*) = 1}. (16)
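The Hessian convexity window of equation (14) can be sketched by computing a per-voxel Hessian with finite differences and testing for negative definiteness:

```python
import numpy as np

def hessian_convexity_window(smoothed):
    """Binary indicator HW: 1 where the Hessian of the (DoG-smoothed) window
    is negative definite, i.e. all three eigenvalues are negative."""
    g0, g1, g2 = np.gradient(smoothed)
    h = np.empty(smoothed.shape + (3, 3))
    for i, g in enumerate((g0, g1, g2)):
        h[..., i, 0], h[..., i, 1], h[..., i, 2] = np.gradient(g)
    eigvals = np.linalg.eigvalsh(h)  # symmetric 3x3 eigenvalues per voxel
    return (eigvals < 0).all(axis=-1).astype(np.uint8)
```

At the centre of a bright blob all eigenvalues are negative (HW = 1); on the flank the radial curvature turns positive, so the voxel is excluded.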
- U-Net 100 was pre-trained on optical images of cell nuclei. This dataset contained 141 pathology images (2,000×2,000 pixels).
- images (a)-(c) are original images
- images (d)-(f) are ground truth labeled images for image (a)-(c)
- images (g)-(i) are synthetic training images based on images (d)-(f).
- Data were augmented to increase the invariance and robustness of U-Net.
- the augmented data were generated by a combination of rotation shift, width shift, height shift, shear, zoom, and horizontal flip.
- the trained model was validated using 3D synthetic image data and 3D MR image data.
- ⁇ noise 2 ⁇ image 2 10 S ⁇ N ⁇ R 10 . ( 17 )
- FIGS. 4 A and 4 B show the 3D synthetic images dataset utilized in Experiment I (slice 100 (of 256) from simulated 3D blob images with different parameter settings on the number of blobs and signal-to-noise ratio (SNR) (dB)).
- the ratio of overlap (O) of blobs in the 3D image was derived as:
- the threshold for the U-Net probability map in UH-DoG was set to 0.5.
- U-Net was implemented on a NVIDIA TITAN XP GPU with 12 GB of memory.
- a 2D (two-dimensional) U-Net was used, and 2D probability maps were rendered on each slice then stacked together to form a 3D probability map.
- OT U-Net used Otsu's thresholding to find the optimal threshold to reduce under-segmentation. With Hessian analysis, under-segmentation may be eliminated.
- the UH-DoG and BTCAS outperformed both U-Net and OT U-Net.
- the error rate of BTCAS slowly increased when the number of blobs increased from 5,000 to 50,000 with low noise and from 5,000 to 40,000 with high noise.
- the error rate of BTCAS increased when the number of blobs increased from 40,000 to 50,000 under high noise, this error rate was significantly lower than that for UH-DoG.
- the BTCAS system showed much more robustness in the presence of noise compared to the other four methods.
- a blob candidate i was considered a true positive if it belonged to a detection pair (i, j) for which the nearest ground truth center j had not been paired and the Euclidean distance D_ij between ground truth center j and blob candidate i was less than or equal to d.
- the number (#) of true positives TP was calculated by (19).
- Precision, recall, and F-score were calculated by (20), (21), and (22), respectively.
- d was set to the average diameter of the blobs:
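The true-positive pairing and the precision/recall/F-score computation can be sketched as below; the greedy candidate-by-candidate matching order is an assumption:

```python
import numpy as np

def detection_scores(gt_centers, candidates, d):
    """Greedily pair each candidate with its nearest still-unpaired ground
    truth center; the pair is a true positive if the distance is <= d.
    Returns (precision, recall, F-score)."""
    gt = np.asarray(gt_centers, float)
    unpaired = set(range(len(gt)))
    tp = 0
    for c in np.asarray(candidates, float):
        if not unpaired:
            break
        j = min(unpaired, key=lambda j: np.linalg.norm(gt[j] - c))
        if np.linalg.norm(gt[j] - c) <= d:
            unpaired.discard(j)
            tp += 1
    precision = tp / len(candidates) if len(candidates) else 0.0
    recall = tp / len(gt) if len(gt) else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f
```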
- blob segmentation applied to 3D CFE-MR images was investigated to measure number (Nglom) and apparent volume (aVglom) of glomeruli in healthy and diseased human donor kidneys that were not accepted for transplant.
- Three human kidneys were obtained at autopsy through a donor network (The International Institute for the Advancement of Medicine, Edison, NJ) after receiving Institutional Review Board (IRB) approval and informed consent from Arizona State University, and they were imaged by CFE-MRI.
- Each human MR image had pixel dimensions of 896 ⁇ 512 ⁇ 512.
- HDoG, UH-DoG, and BTCAS blob detector were utilized to segment glomeruli.
- First, 14,336 2D patches were generated with each patch being 128 ⁇ 128 in size, and each patch was then fed into U-Net.
- the threshold for the U-Net probability map in UH-DoG was 0.5.
- Quality control was performed by visually checking the identified glomeruli, visible as black spots in the images. For illustration, example results from CF2, which had a more heterogeneous pattern, are shown in FIG. 6.
- image (a) is the original magnitude image
- image (b) shows glomerular segmentation results of HDoG
- image (c) shows glomerular segmentation results of UH-DoG
- image (d) shows glomerular segmentation results of BTCAS blob detector
- images (e)-(h) show magnified regions from images (a)-(d) indicated by boxes shown in images (a)-(d).
- the BTCAS blob detector performed better than HDoG and UH-DoG in segmentation.
- Several example glomeruli are marked with various circles in images (e)-(h) of FIG. 6.
- UH-DoG identified fewer glomeruli due to under-segmentation when using the single thresholding (0.5) on the probability map of U-Net combined with the Hessian convexity map.
- BTCAS provided the most accurate measurements of Nglom and mean aVglom when compared to the other two methods.
- Each MRI image had pixel dimensions of 256×256×256.
- HDoG, HDoG with VBGMM (Variational Bayesian Gaussian Mixture Model), UH-DoG, and BTCAS blob detector were utilized to segment glomeruli.
- To denoise the 3D blob images using the trained U-Net, each slice was first resized to 512×512 and then fed into U-Net.
- the threshold for the U-Net probability map in UH-DoG was 0.5.
- Nglom and mean aVglom are shown in Table VI and Table VII, where HDoG, UH-DoG, and BTCAS blob detector described herein are compared to HDoG with VBGMM. The differences between the results are also listed in Tables VI and VII. Compared to HDoG with VBGMM, HDoG identified more glomeruli, and the difference from HDoG with VBGMM for HDoG was much larger than for the other two methods, indicating over-detection under the single optimal scale of the DoG and lower mean aVglom than HDoG with VBGMM.
- UH-DoG identified fewer glomeruli and larger mean aVglom due to under-segmentation when using the single thresholding (0.5) on the probability map of U-Net combined with the Hessian convexity map.
- BTCAS provided the most accurate measurements of Nglom and mean aVglom compared to the other two methods.
- the approach uses U-Net for pre-processing, followed by DoG, where the scales vary depending on the sizes of the blobs (e.g., glomeruli).
- the computational time of U-Net was satisfactory. For example, it took less than 5 minutes for training and less than 1 second per slice or per patch for testing.
- the computational complexity of HDoG is O(N_1·N_2·N_3·(r_1 + r_2 + r_3)).
- with N_S being the number of scales searched (N_S > 1), the computational complexity of BTCAS is O(N_S·N_1·N_2·N_3·(r_1 + r_2 + r_3)).
- BTCAS may involve more computing effort compared to HDoG since N_S > 1; however, for HDoG, the single-scale approach suffers in performance, as shown in the comparison experiments.
- Table VIII summarizes the computational time for DoG under exhaustive search on scales (noting the scale ranges [0, 1.5] using stereology knowledge) for each glomerulus and that for BTCAS. As shown, BTCAS saves about 30% computing time.
- I_b(x, y, z) = (1/(2πσ²)) · exp(−((x − μ_x)² + (y − μ_y)² + (z − μ_z)²)/(2σ²)). (25)
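The bright-blob model of equation (25) can be generated directly; a minimal NumPy sketch:

```python
import numpy as np

def bright_blob(shape, centre, sigma):
    """Synthetic bright blob per equation (25): an isotropic Gaussian with
    peak magnitude 1/(2*pi*sigma**2) at the centroid."""
    idx = np.indices(shape)  # coordinate grids, shape (ndim, *shape)
    r2 = sum((idx[i] - centre[i]) ** 2 for i in range(len(shape)))
    return np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
```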
- the probability predicted by U-Net increases or decreases monotonically from the centroid to the boundary of the dark or bright blob.
- the probability map from U-Net may be defined as U(x, y, z) ∈ [0,1]^(I_1×I_2×I_3), which indicates the probability of each voxel belonging to any blob.
- a blob may be identified with a radius r.
- from B_L(x, y, z) and B_H(x, y, z), radii r_δL and r_δH may be obtained, respectively.
- B_L(x, y, z) marks a larger blob region extending to the boundary with low probability
- B_H(x, y, z) marks a smaller blob region extending to the boundary with high probability; that is, r_δL > r_δH.
- the distance between the thresholding voxel (x_δ, y_δ, z_δ) and the centroid of the blob may be approximated by the radius of the blob: r(δ) = √((x_δ − μ_x)² + (y_δ − μ_y)² + (z_δ − μ_z)²).
- the exemplary BTCAS systems and methods described herein provide an adaptive and effective tuning-free detector for blob detection and segmentation, which may be utilized for kidney biomarker identification for clinical use.
- a BTCAS blob detector includes two steps to detect blobs (for example, kidney glomeruli) from CFE-MRI.
- Step one may include training a U-Net to generate a probability map to detect the centroids of the blobs (step 702 ), and then deriving two distance maps with bounded probabilities (step 704 ).
- Step two may include applying a DoG filter with an adaptive scale constrained by the bounded distance maps (step 706 ), followed by Hessian analysis for final blob segmentation (step 708 ).
- exemplary systems and methods offer various advantages and improvements over prior approaches.
- an exemplary system including a U-Net reduces over-detection when used in an initial denoising step. This results in a probability map with the identified centroid of blob candidates.
- distance maps may be rendered with lower and upper probability bounds, which may be used as the constraints for local scale search for DoG.
- a local optimum DoG scale may be adapted to the range of blob sizes to better separate touching blobs.
- an adaptive scale based on deep learning greatly decreased under-segmentation by U-Net with over 80% increase in Dice and IoU and decreased over-detection by DoG with over 100% decrease in error rate of blob detection.
- the DoG and the Hessian analysis may be integrated as layers of an overall deep learning network for comprehensive blob (e.g., glomerular) segmentation.
- a 3D U-Net may be utilized instead of a 2D U-Net.
- a semi-supervised learning may be utilized by, e.g., incorporating domain knowledge of glomeruli to further improve glomerular detection and segmentation.
- the BTCAS systems and methods described herein were shown to be an adaptive and effective tuning-free detector for blob detection and segmentation and may be utilized for, e.g., kidney biomarker identification for clinical use.
- a blob detection system may include software operating on a general-purpose processor.
- a blob detection system may include an application-specific integrated circuit (ASIC).
- a blob detection system may include instructions operative on a reconfigurable computing device, for example a field-programmable gate array (FPGA).
- a blob detection system and methods thereof may be implemented as distributed software operative on multiple processors.
- references to “various exemplary embodiments”, “one embodiment”, “an embodiment”, “an exemplary embodiment”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.
- the word “exemplary” is used herein to mean “serving as an example, instance or illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
- the terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
- the terms “coupled,” “coupling,” or any other variation thereof are intended to cover a physical connection, an electrical connection, a magnetic connection, an optical connection, a communicative connection, a functional connection, and/or any other connection.
- "non-transitory computer-readable medium" and "non-transitory computer-readable storage medium" should be construed to exclude only those types of transitory computer-readable media which were found in In re Nuijten to fall outside the scope of patentable subject matter under 35 U.S.C. § 101.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Radiology & Medical Imaging (AREA)
- Software Systems (AREA)
- Quality & Reliability (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computer Graphics (AREA)
- Image Analysis (AREA)
Description
wherein: the X is an input image; the Y is a denoised image; the N is a sample size; the U(X; Θ) is a probability map mapping the X to the Y by learning and optimizing the parameters Θ of convolutional and deconvolutional kernels, followed by a sigmoid activation function; and the loss(·) is a binary cross entropy loss function.
wherein: the I1, I2, and I3 are dimensions of the image; the yk is a true label; and the Uk(X; Θ) is a predicted probability for a voxel k.
X = Y + ε, ε ∼ N(0, σ²I). (1)
where N is a sample size and U(X; Θ) ∈ [0, 1]^(I1 × I2 × I3)
where yk ∈ {0, 1} is the true label and Uk(X; Θ) ∈ [0, 1] is the predicted probability for voxel k. After denoising, the output of U(·) may approximate Y:
U(X; Θ) ≈ Y. (4)
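The binary cross entropy loss above reduces to an average over voxels; a minimal numpy sketch (the function and variable names here are illustrative, not from the patent):

```python
import numpy as np

def bce_loss(y_true, y_prob, eps=1e-7):
    """Voxel-wise binary cross entropy between a binary label volume
    y_true and a predicted probability volume y_prob in [0, 1]."""
    p = np.clip(y_prob, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# a perfect prediction gives a loss near zero
y = np.array([[0.0, 1.0], [1.0, 0.0]])
assert bce_loss(y, y) < 1e-5
```

In practice this loss is averaged over the N training samples and minimized with respect to the network parameters Θ.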
where k is the Euclidean distance between each voxel and its neighborhood voxels. The blob centroid set C = {Ci}, i = 1, . . . , N, may be defined as:
C={(x,y,z)|(x,y,z)∈arg RM(U(x,y,z))}. (6)
ri ∈ (DH(Ci), DL(Ci)). (9)
where s is a scale value, * is the convolution operator, and G(x, y, z; s) is the Gaussian kernel.
The DoG filter smooths a 3D image more efficiently than the Laplacian of Gaussian (LoG) filter does. In various embodiments addressing the challenge of determining the optimum DoG scale in blob detection, the distance maps (DL and DH) from U-Net 100 may be applied to constrain the DoG scale for scale inference. Specifically, for d-dimensional images, the DoG may reach its maximum response under scale s = r/√d. In a 3D image, the range of scale for each blob may be si ∈ (siL, siH), and substituting r with (9) results in:
siL = DH(Ci)/√3 (11)
siH = DL(Ci)/√3 (12)
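Following (11)-(12), the per-blob DoG scale bounds can be read directly off the two distance maps at each centroid. A small sketch (function and parameter names are illustrative):

```python
import math

def dog_scale_range(d_high, d_low, dim=3):
    """Scale bounds for a blob whose radius lies in (d_high, d_low),
    using the maximum-response relation s = r / sqrt(dim)."""
    s_low = d_high / math.sqrt(dim)
    s_high = d_low / math.sqrt(dim)
    return s_low, s_high

# e.g. distance-map values of 2.0 and 5.0 voxels at a blob centroid
s_low, s_high = dog_scale_range(d_high=2.0, d_low=5.0)
assert s_low < s_high
```

Constraining the scale search to (siL, siH) per blob is what avoids sweeping a single global scale grid over the whole image.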
Sblob = {(x, y, z) | (x, y, z) ∈ DoGnor(x, y, z; si), HW(x, y, z; si*) = 1}. (16)
| TABLE I |
| PSEUDOCODE FOR BTCAS |
| 1. Use a pretrained model to generate a probability map of blobs from the original image. |
| 2. Initialize the probability range (δL, δH) and threshold the probability map to get the binarized map B(x, y, z) and distance map D(x, y, z). |
| 3. Calculate the blob centroid set C from probability map U(x, y, z). For each blob with centroid Ci ∈ C, get the scale range (siL, siH). |
| 4. For each blob with centroid Ci ∈ C, transform the raw image window of the blob to multi-scale DoG space with scale si ∈ (siL, siH). |
| 5. Calculate the Hessian matrix based on the normalized DoG smoothed window and generate the Hessian convexity window HW(x, y, z; si). |
| 6. Identify the optimum scale si* for each blob. |
| 7. Get the optimum Hessian convexity window HW(x, y, z; si*) under scale si*. |
| 8. Identify the final segmented blob voxels set Sblob. |
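Steps 3-4 of Table I amount to smoothing each blob window at several scales within its inferred range and taking differences of Gaussians. A simplified 3D sketch using scipy (the scale grid, the ratio k, and the synthetic window are illustrative; the Hessian-convexity step is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_dog(window, s_low, s_high, n_scales=5, k=1.6):
    """Transform a raw 3D image window into multi-scale DoG space.
    Each slice of the result is DoG(x, y, z; s) for one scale s."""
    scales = np.linspace(s_low, s_high, n_scales)
    stack = []
    for s in scales:
        # difference of two Gaussian smoothings acts as a band-pass filter
        dog = gaussian_filter(window, s) - gaussian_filter(window, k * s)
        stack.append(dog)
    return scales, np.stack(stack)

# synthetic bright blob in the centre of a small volume
vol = np.zeros((15, 15, 15))
vol[7, 7, 7] = 1.0
vol = gaussian_filter(vol, 2.0)
scales, dog_space = multiscale_dog(vol, 1.0, 3.0)
assert dog_space.shape == (5, 15, 15, 15)
```

The bright blob produces a positive DoG response at its centre across the scale stack, which is what the subsequent Hessian convexity test would examine.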
where m is the number of true glomeruli, n is the number of blob candidates, and d is a thresholding parameter set to a positive value in (0, +∞). If d is small, fewer blob candidates may be counted, since the distance between a blob candidate centroid and the ground truth must be small. If d is too large, more blob candidates are counted. Here, since local intensity extremes may lie anywhere within a small blob with an irregular shape, d was set to the average diameter of the blobs:
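The matching criterion above can be sketched as follows: a blob candidate counts as a hit if its centroid lies within distance d of an as-yet-unmatched ground-truth centroid (function and variable names are illustrative; this greedy matching is a simplification):

```python
import numpy as np

def count_hits(gt_centroids, candidate_centroids, d):
    """Count candidates whose centroid lies within distance d of some
    ground-truth centroid (each ground truth matched at most once)."""
    unmatched = [np.asarray(g, dtype=float) for g in gt_centroids]
    hits = 0
    for c in candidate_centroids:
        c = np.asarray(c, dtype=float)
        dists = [np.linalg.norm(c - g) for g in unmatched]
        if dists and min(dists) <= d:
            hits += 1
            unmatched.pop(int(np.argmin(dists)))  # consume the matched ground truth
    return hits

gt = [(0, 0, 0), (10, 10, 10)]
cand = [(1, 0, 0), (30, 30, 30)]
assert count_hits(gt, cand, d=2.0) == 1
```

Precision and recall then follow as hits/n and hits/m respectively.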
The Dice coefficient and IoU were calculated by comparing the segmented blob mask and the ground truth mask by (23) and (24):
Dice = 2|B ∩ G| / (|B| + |G|) (23)
IoU = |B ∩ G| / |B ∪ G| (24)
- where B is the binary mask for the segmentation result and G is the binary mask for the ground truth.
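Both Dice and IoU reduce to overlap counts on the two binary masks; a minimal numpy sketch (names illustrative):

```python
import numpy as np

def dice_iou(b, g):
    """Dice coefficient and IoU between binary masks b (segmentation)
    and g (ground truth)."""
    b, g = b.astype(bool), g.astype(bool)
    inter = np.logical_and(b, g).sum()
    union = np.logical_or(b, g).sum()
    dice = 2.0 * inter / (b.sum() + g.sum())
    iou = inter / union
    return dice, iou

b = np.array([1, 1, 0, 0])
g = np.array([1, 0, 1, 0])
dice, iou = dice_iou(b, g)
assert abs(dice - 0.5) < 1e-9 and abs(iou - 1 / 3) < 1e-9
```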
| TABLE II |
| COMPARISON (AVG ± STD) AND ANOVA USING TUKEY’S HSD PAIRWISE TEST |
| OF BTCAS, HDOG, UH-DOG, U-NET, OT U-NET ON 3D SYNTHETIC IMAGES |
| UNDER SNR = 5 DB (LOW NOISE) (*SIGNIFICANCE p < 0.05) |
| METRICS | BTCAS | HDOG | U-NET | OT U-NET | UH-DOG |
| PRECISION | 1.00 ± 0.00 | 0.10 ± 0.07 | 0.98 ± 0.01 | 1.00 ± 0.00 | 1.00 ± 0.00 |
| (*<0.0001) | (*<0.0001) | (*<0.0001) | (0.172) | ||
| RECALL | 0.99 ± 0.00 | 0.99 ± 0.01 | 0.76 ± 0.12 | 0.81 ± 0.09 | 0.93 ± 0.04 |
| (* 0.041) | (*<0.001) | (*<0.0001) | (*<0.001) | ||
| F-SCORE | 1.00 ± 0.00 | 0.18 ± 0.11 | 0.85 ± 0.08 | 0.89 ± 0.06 | 0.96 ± 0.02 |
| (*<0.0001) | (*<0.001) | (*<0.001) | (*<0.001) | ||
| DICE | 0.96 ± 0.03 | 0.26 ± 0.14 | 0.52 ± 0.00 | 0.60 ± 0.04 | 0.97 ± 0.02 |
| (*<0.0001) | (*<0.0001) | (*<0.0001) | (*<0.0001) | ||
| IOU | 0.92 ± 0.05 | 0.16 ± 0.09 | 0.35 ± 0.00 | 0.43 ± 0.04 | 0.94 ± 0.04 |
| (*<0.0001) | (*<0.0001) | (*<0.0001) | (*<0.0001) | ||
| TABLE III |
| COMPARISON (AVG ± STD) AND ANOVA USING TUKEY’S HSD PAIRWISE TEST |
| OF BTCAS, HDOG, UH-DOG, U-NET, OT U-NET ON 3D SYNTHETIC IMAGES |
| UNDER SNR = 1 DB (HIGH NOISE) (*SIGNIFICANCE p < 0.05) |
| METRICS | BTCAS | HDOG | U-NET | OT U-NET | UH-DOG |
| PRECISION | 0.98 ± 0.03 | 0.09 ± 0.06 | 0.98 ± 0.01 | 1.00 ± 0.00 | 1.00 ± 0.00 |
| (*<0.0001) | (0.338) | (0.063) | (* 0.035) | ||
| RECALL | 0.99 ± 0.00 | 0.99 ± 0.01 | 0.76 ± 0.12 | 0.81 ± 0.10 | 0.93 ± 0.04 |
| (* 0.026) | (*<0.001) | (*<0.001) | (*<0.001) | ||
| F-SCORE | 0.99 ± 0.02 | 0.17 ± 0.10 | 0.85 ± 0.08 | 0.89 ± 0.06 | 0.96 ± 0.02 |
| (*<0.0001) | (*<0.001) | (*<0.0001) | (*<0.001) | ||
| DICE | 0.92 ± 0.08 | 0.26 ± 0.13 | 0.51 ± 0.01 | 0.61 ± 0.03 | 0.94 ± 0.04 |
| (*<0.0001) | (*<0.0001) | (*<0.0001) | (0.063) | ||
| IOU | 0.85 ± 0.13 | 0.15 ± 0.09 | 0.34 ± 0.00 | 0.44 ± 0.03 | 0.89 ± 0.07 |
| (*<0.0001) | (*<0.0001) | (*<0.0001) | (0.061) | ||
| TABLE IV |
| HUMAN KIDNEY GLOMERULAR SEGMENTATION (NGLOM) FROM CFE-MRI |
| USING HDoG, UH-DoG AND THE BTCAS BLOB DETECTORS COMPARED TO |
| DISSECTOR-FRACTIONATOR STEREOLOGY |
| Nglom | Nglom | Difference | Nglom | Difference | Nglom | Difference | |
| Human | (×106) | (×106) | Ratio | (×106) | Ratio | (×106) | Ratio |
| Kidney | (Stereology) | (BTCAS) | (%) | (UH-DoG) | (%) | (HDoG) | (%) |
| 1 | 1.13 | 1.16 | 2.65 | 0.66 | 41.60 | 2.95 | >100 |
| 2 | 0.74 | 0.86 | 16.22 | 0.48 | 35.14 | 1.21 | 63.51 |
| 3 | 1.46 | 1.50 | 2.74 | 0.85 | 41.78 | 3.93 | >100 |
| TABLE V |
| HUMAN KIDNEY GLOMERULAR SEGMENTATION FROM CFE-MRI (MEAN AVGLOM) |
| USING HDoG, UH-DoG AND THE BTCAS BLOB DETECTORS COMPARED TO |
| DISSECTOR-FRACTIONATOR STEREOLOGY |
| Mean | Mean | Mean | Mean | ||||
| aVglom | aVglom | Difference | aVglom | Difference | aVglom | Difference | |
| Human | (×10−3 mm3) | (×10−3 mm3) | Ratio | (×10−3 mm3) | Ratio | (×10−3 mm3) | Ratio |
| Kidney | (Stereology) | (BTCAS) | (%) | (UH-DoG) | (%) | (HDoG) | (%) |
| 1 | 5.01 | 5.32 | 6.19 | 7.36 | 46.91 | 4.8 | 4.19 |
| 2 | 4.68 | 4.78 | 2.14 | 5.62 | 20.09 | 3.2 | 31.62 |
| 3 | 2.82 | 2.55 | 9.57 | 3.73 | 32.37 | 3.2 | 13.48 |
| TABLE VI |
| MOUSE KIDNEY GLOMERULAR SEGMENTATION (Nglom) FROM CFE-MRI |
| USING HDoG, UH-DoG AND THE BTCAS COMPARED TO HDoG WITH VBGMM |
| METHOD |
| Nglom | ||||||||
| (HDoG | Difference | Nglom | Difference | Difference |
| Mouse | with | Nglom | Ratio | (UH- | Ratio | Nglom | Ratio |
| kidney | VBGMM) | (BTCAS) | (%) | DoG) | (%) | (HDoG) | (%) |
| CKD | ID | 7,656 | 7,719 | 0.82 | 7,346 | 4.05 | 10,923 | 42.67 |
| 429 | ||||||||
| ID | 8,665 | 8,228 | 5.04 | 8,138 | 6.08 | 9,512 | 9.77 | |
| 466 | ||||||||
| ID | 8,549 | 8,595 | 0.54 | 8,663 | 1.33 | 12,755 | 49.20 | |
| 467 | ||||||||
| Avg | 8,290 | 8,181 | 2.13 | 8,049 | 2.91 | 11,063 | 33.88 | |
| Std | 552 | 440 | 663 | 1626 | ||||
| Control | ID | 12,724 | 12,008 | 5.63 | 12,701 | 0.18 | 15,515 | 21.93 |
| for CKD | 427 | |||||||
| ID | 10,829 | 11,048 | 2.02 | 11,347 | 4.78 | 15,698 | 44.96 | |
| 469 | ||||||||
| ID | 10,704 | 10,969 | 2.48 | 11,309 | 5.65 | 13,559 | 26.67 | |
| 470 | ||||||||
| ID | 11,943 | 12,058 | 0.96 | 12,279 | 2.81 | 16,230 | 35.90 | |
| 471 | ||||||||
| ID | 12,569 | 13,418 | 6.75 | 12,526 | 0.34 | 17,174 | 36.64 | |
| 472 | ||||||||
| ID | 12,245 | 12,318 | 0.60 | 11,853 | 3.20 | 15,350 | 25.36 | |
| 473 | ||||||||
| Avg | 11,836 | 11,970 | 3.07 | 12,003 | 1.41 | 15,588 | 31.91 | |
| Std | 872 | 903 | 595 | 1193 | ||||
| AKI | ID | 11,046 | 10,752 | 2.66 | 11,033 | 0.12 | 12,315 | 11.49 |
| 433 | ||||||||
| ID | 11,292 | 10,646 | 5.72 | 10,779 | 4.54 | 17,634 | 56.16 | |
| 462 | ||||||||
| ID | 11,542 | 11,820 | 2.41 | 10,873 | 5.80 | 20,458 | 77.25 | |
| 463 | ||||||||
| ID | 11,906 | 12,422 | 4.33 | 11,340 | 4.75 | 25,233 | >100 | |
| 464 | ||||||||
| Avg | 11,447 | 11,410 | 3.78 | 11,006 | 3.85 | 18,910 | 64.21 | |
| Std | 367 | 858 | 246 | 5401 | ||||
| Control | ID | 10,336 | 10,393 | 0.55 | 10,115 | 2.14 | 13,473 | 30.35 |
| for AKI | 465 | |||||||
| ID | 10,874 | 11,034 | 1.47 | 11,157 | 2.60 | 16,934 | 55.73 | |
| 474 | ||||||||
| ID | 10,292 | 9,985 | 2.98 | 10,132 | 1.55 | 12,095 | 17.52 | |
| 475 | ||||||||
| ID | 10,954 | 11,567 | 5.60 | 10,892 | 0.57 | 15,846 | 44.66 | |
| 476 | ||||||||
| ID | ||||||||
| 477 | 10,885 | 11,143 | 2.37 | 11,335 | 4.13 | 14,455 | 32.80 | |
| Avg | 10,668 | 10,824 | 2.59 | 10,726 | 0.54 | 14,561 | 36.21 | |
| Std | 325 | 630 | 572 | 1908 | ||||
| TABLE VII |
| MOUSE KIDNEY GLOMERULAR SEGMENTATION (MEAN aVglom) FROM CFE-MRI |
| USING HDoG, UH-DoG AND THE BTCAS COMPARED TO HDoG WITH VBGMM |
| METHOD |
| Mean | ||||||||
| aVglom | Mean | |||||||
| (HDoG | Mean | Difference | aVglom | Difference | Mean | Difference |
| Mouse | with | aVglom | Ratio | (UH- | Ratio | aVglom | Ratio |
| kidney | VBGMM) | (BTCAS) | (%) | DoG) | (%) | (HDoG) | (%) |
| CKD | ID | 2.57 | 2.63 | 2.33 | 2.92 | 11.99 | 2.46 | 4.28 |
| 429 | ||||||||
| ID | 2.01 | 2.01 | 0.00 | 2.06 | 2.43 | 1.75 | 12.94 | |
| 466 | ||||||||
| ID | 2.16 | 2.20 | 1.85 | 2.32 | 6.90 | 1.9 | 12.04 | |
| 467 | ||||||||
| Avg | 2.25 | 2.28 | 1.40 | 2.43 | 7.67 | 2.04 | 9.75 | |
| Std | 0.29 | 0.32 | 0.44 | 0.37 | ||||
| Control | ID | 1.49 | 1.57 | 5.37 | 1.61 | 7.45 | 1.49 | 0.00 |
| for | 427 | |||||||
| CKD | ID | 1.91 | 1.95 | 2.09 | 2.20 | 13.18 | 1.76 | 7.85 |
| 469 | ||||||||
| ID | 1.98 | 2.05 | 3.54 | 2.04 | 2.94 | 1.73 | 12.63 | |
| 470 | ||||||||
| ID | 1.5 | 1.58 | 5.33 | 1.56 | 3.85 | 1.4 | 6.67 | |
| 471 | ||||||||
| ID | 1.35 | 1.36 | 0.74 | 1.49 | 9.40 | 1.35 | 0.00 | |
| 472 | ||||||||
| ID | 1.5 | 1.56 | 4.00 | 1.58 | 5.06 | 1.39 | 7.33 | |
| 473 | ||||||||
| Avg | 1.62 | 1.68 | 3.51 | 1.75 | 7.16 | 1.52 | 5.75 | |
| Std | 0.26 | 0.26 | 0.30 | 0.18 | ||||
| AKI | ID | 1.53 | 1.64 | 7.19 | 1.63 | 6.13 | 1.38 | 9.80 |
| 433 | ||||||||
| ID | 1.34 | 1.41 | 5.22 | 1.48 | 9.46 | 1.3 | 2.99 | |
| 462 | ||||||||
| ID | 2.35 | 2.4 | 2.13 | 2.61 | 9.96 | 1.94 | 17.45 | |
| 463 | ||||||||
| ID | 2.31 | 2.36 | 2.16 | 2.40 | 3.75 | 1.78 | 22.94 | |
| 464 | ||||||||
| Avg | 1.88 | 1.95 | 4.18 | 2.03 | 7.27 | 1.60 | 13.29 | |
| Std | 0.52 | 0.50 | 0.56 | 0.31 | ||||
| Control | ID | 2.3 | 2.46 | 6.96 | 2.40 | 4.17 | 2.11 | 8.26 |
| for | 465 | |||||||
| AKI | ID | 2.44 | 2.34 | 4.10 | 2.52 | 3.17 | 2.14 | 12.30 |
| 474 | ||||||||
| ID | 1.74 | 1.86 | 6.90 | 1.70 | 2.35 | 1.58 | 9.20 | |
| 475 | ||||||||
| ID | 1.53 | 1.57 | 2.61 | 1.62 | 5.56 | 1.49 | 2.61 | |
| 476 | ||||||||
| ID | 1.67 | 1.68 | 0.60 | 1.70 | 1.76 | 1.61 | 3.59 | |
| 477 | ||||||||
| Avg | 1.94 | 1.98 | 4.23 | 1.99 | 2.62 | 1.79 | 7.19 | |
| Std | 0.41 | 0.40 | 0.43 | 0.31 | ||||
Discussion of Computation Time
| TABLE VIII |
| COMPARISON OF COMPUTATION TIME BETWEEN DOG UNDER |
| GLOMERULUS-SPECIFIC OPTIMAL SCALE AND BTCAS METHOD |
| DOG under | |||
| glomerulus-specific | Difference | ||
| Human | optimal scale | BTCAS | Ratio |
| Kidney | (second) | (second) | (%) |
| 1 | 51,238 | 34,792 | 32.10 |
| 2 | 39,616 | 28,156 | 28.93 |
| 3 | 59,703 | 41,425 | 30.61 |
| AVG ± STD | 50,186 ± 10,085 | 34,791 ± 6,635 | 30.55 ± 1.59 |
Discussion of
IN = Ib + ε, ε ∼ N(0, σ²I). (26)
Ub(x, y, z) = Ub(IN; Θ) = Ub(Ib + ε; Θ) ≈ Ib(x, y, z). (27)
r(δ) ≈ √((xδ − μx)² + (yδ − μy)² + (zδ − μz)²). (29)
Ub(xδL, yδL, zδL) = δL (30)
and
Ub(xδH, yδH, zδH) = δH. (31)
Ub(xδL, yδL, zδL) < Ub(xδH, yδH, zδH) < Ub(μx, μy, μz), (32)
and
r(δL) > r(δH) > r(Ub(μx, μy, μz)) = 0. (33)
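The ordering in (33) says that thresholding the blob probability map at the lower probability δL yields a larger level-set radius than at the higher δH, with the radius collapsing to zero at the centroid. A quick numeric check on a synthetic spherical probability profile (all names and the Gaussian profile are illustrative):

```python
import numpy as np

def level_set_radius(prob, delta, center):
    """Largest distance from the centre to any voxel with probability >= delta."""
    idx = np.argwhere(prob >= delta)
    return float(np.max(np.linalg.norm(idx - np.asarray(center), axis=1)))

# spherical blob: probability falls off with distance from the centroid
grid = np.indices((21, 21, 21)).transpose(1, 2, 3, 0)
dist = np.linalg.norm(grid - 10, axis=-1)
prob = np.exp(-(dist ** 2) / 18.0)

r_low = level_set_radius(prob, 0.2, (10, 10, 10))   # r(δL)
r_high = level_set_radius(prob, 0.8, (10, 10, 10))  # r(δH)
assert r_low > r_high > 0.0
```

This is why (δL, δH) can serve as inner and outer bounds on the blob radius, and hence on the DoG scale, in the earlier derivation.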
Description of Methods
Claims (19)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/698,750 US12299876B2 (en) | 2021-03-23 | 2022-03-18 | Deep learning based blob detection systems and methods |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202163164699P | 2021-03-23 | 2021-03-23 | |
| US17/698,750 US12299876B2 (en) | 2021-03-23 | 2022-03-18 | Deep learning based blob detection systems and methods |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220318999A1 (en) | 2022-10-06 |
| US12299876B2 (en) | 2025-05-13 |
Family
ID=83449937
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/698,750 Active 2043-06-20 US12299876B2 (en) | 2021-03-23 | 2022-03-18 | Deep learning based blob detection systems and methods |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US12299876B2 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116503839A (en) * | 2023-04-28 | 2023-07-28 | 智道网联科技(北京)有限公司 | Road Arrow Recognition Method, Device, Electronic Equipment, and Computer-Readable Storage Medium |
| CN116728157B (en) * | 2023-06-12 | 2025-09-23 | 安徽工业大学 | A slab burr location method incorporating deep learning |
Citations (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040151356A1 (en) | 2003-01-31 | 2004-08-05 | University Of Chicago | Method, system, and computer program product for computer-aided detection of nodules with three dimensional shape enhancement filters |
| US20050119829A1 (en) | 2003-11-28 | 2005-06-02 | Bishop Christopher M. | Robust bayesian mixture modeling |
| US20050244042A1 (en) | 2004-04-29 | 2005-11-03 | General Electric Company | Filtering and visualization of a multidimensional volumetric dataset |
| US20070036409A1 (en) | 2005-08-02 | 2007-02-15 | Valadez Gerardo H | System and method for automatic segmentation of vessels in breast MR sequences |
| US20070133894A1 (en) | 2005-12-07 | 2007-06-14 | Siemens Corporate Research, Inc. | Fissure Detection Methods For Lung Lobe Segmentation |
| US20080100612A1 (en) | 2006-10-27 | 2008-05-01 | Dastmalchi Shahram S | User interface for efficiently displaying relevant oct imaging data |
| US20090005693A1 (en) | 2004-12-22 | 2009-01-01 | Biotree Systems, Inc. | Medical Imaging Methods and Apparatus for Diagnosis and Monitoring of Diseases and Uses Therefor |
| US20090122060A1 (en) | 2005-03-17 | 2009-05-14 | Algotec Systems Ltd | Bone Segmentation |
| US20110044524A1 (en) | 2008-04-28 | 2011-02-24 | Cornell University | Tool for accurate quantification in molecular mri |
| US20110293157A1 (en) | 2008-07-03 | 2011-12-01 | Medicsight Plc | Medical Image Segmentation |
| US20120155734A1 (en) | 2009-08-07 | 2012-06-21 | Ucl Business Plc | Apparatus and method for registering two medical images |
| US20120294502A1 (en) | 2010-02-11 | 2012-11-22 | Heang-Ping Chan | Methods for Microcalcification Detection of Breast Cancer on Digital Tomosynthesis Mammograms |
| US20130022548A1 (en) | 2009-12-14 | 2013-01-24 | Bennett Kevin M | Methods and Compositions Relating to Reporter Gels for Use in MRI Techniques |
| US20130230230A1 (en) * | 2010-07-30 | 2013-09-05 | Fundação D. Anna Sommer Champalimaud e Dr. Carlos Montez Champalimaud | Systems and methods for segmentation and processing of tissue images and feature extraction from same for treating, diagnosing, or predicting medical conditions |
| US20130329972A1 (en) | 2012-06-08 | 2013-12-12 | Advanced Micro Devices, Inc. | Biomedical data analysis on heterogeneous platform |
| US20140270436A1 (en) | 2013-03-12 | 2014-09-18 | Lightlab Imaging, Inc. | Vascular Data Processing and Image Registration Systems, Methods, and Apparatuses |
| US20160189373A1 (en) | 2013-08-01 | 2016-06-30 | Seoul National University R&Db Foundation | Method for Extracting Airways and Pulmonary Lobes and Apparatus Therefor |
| US20190139216A1 (en) * | 2017-11-03 | 2019-05-09 | Siemens Healthcare Gmbh | Medical Image Object Detection with Dense Feature Pyramid Network Architecture in Machine Learning |
Non-Patent Citations (105)
| Title |
|---|
| A. Corduneanu, et al., "Variational Bayesian model selection for mixture distributions." Artificial intelligence and Statistics. vol. 2001. Waltham, MA: Morgan Kaufmann, 2001. |
| A. Esteva, et al., "Dermatologist-level classification of skin cancer with deep neural networks.," Nature, vol. 542, No. 7639, pp. 115-118, 2017. |
| A. Janowczyk, et al., "Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases," J. Pathol. Inform., vol. 7, No. 1, pp. 29, 2016. |
| A. Lord, et al., "Brain parcellation choice affects disease-related topology differences increasingly from global to local network levels", Psychiatry Research: Neuroimaging, vol. 249, pp. 12-19, 2016. |
| A.F. Frangi, et al. "Multiscale vessel enhancement filtering." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer Berlin Heidelberg, 1998. |
| C. Couprie, et al., "Power Watersheds: A New Image Segmentation Framework Extending Graph Cuts, Random Walker and Optimal Spanning Forest", in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV), pp. 731-738, 2009. |
| C. F. Koyuncu, et al., "Smart Markers for Watershed-Based Cell Segmentation," PLOS ONE, vol. 7, Issue 11, e48664, Nov. 2012. |
| C. Russell, et al., "Using the Pn Potts Model with Learning Methods to Segment Live Cell Images," in Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV), 2007, pp. 1-8, 2007. |
| C. Xu, et al., "Snakes, Shapes, and Gradient Vector Flow," IEEE Transactions on Image Processing, vol. 7, No. 3, pp. 359-369, Mar. 1998. |
| D. C. Ciresan, et al., "Mitosis detection in breast cancer histology images with deep neural networks," Int. Conf. Med. Image Comput. Comput. Interv. Springer, Berlin, Heidelb., pp. 411-418, 2013. |
| D. Danon, et al., "Use of Cationized Ferritin as a Label of Negative Charges on Cell Surfaces," Journal of Ultrastructure Research, vol. 38, No. 5, pp. 500-510, 1972. |
| D. J. Ho, et al., "Nuclei detection and segmentation of fluorescence microscopy images using three dimensional convolutional neural networks," in Proceedings—International Symposium on Biomedical Imaging, pp. 418-422, 2018. |
| D. Tomar, et al., "Nucleo-Cytophasmic Trafficking of TRIM8, a Novel Oncogene, is Involved in Positive Regulation of TNF Induced NF-kB Pathway," PLOS One, vol. 7, Issue 11, e48662, Nov. 2012. |
| E. Bernardis et al., "Finding Dots: Segmentation as Popping Out Regions from Boundaries", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 199-206, 2010. |
| E. Bernardis et al., "Pop Out Many Small Structures from a Very Large Microscopic Image," Medical Image Analysis, vol. 15, No. 5, pp. 690-707, 2011. |
| E. Bernardis et al., "Segmentation Subject to Stitching Constraints: Finding Many Small Structures in a Large Image," in Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, pp. 119-126, Sep. 2010. |
| E. Eikefjord, et al., "Use of 3D DCE-MRI for the Estimation of Renal Perfusion and Glomerular Filtration Rate: An Intrasubject Comparison of Flash and KWIC With a Comprehensive Framework for Evaluation", American Journal of Roentgenology, vol. 204, No. 3, pp. W273-W281, Mar. 2015. |
| E. J. Baldelomar et al., "Phenotyping by magnetic resonance imaging nondestructively measures glomerular number and volume distribution in mice with and without nephron reduction," Kidney Int., vol. 89, No. 2, pp. 498-505, 2016. |
| E. J. Baldelomar, et al., "In vivo measurements of kidney glomerular number and size in healthy and Os/+ mice using MRI," Am. J. Physiol.—Ren. Physiol., vol. 317, No. 4, pp. F865-F873, 2019. |
| E. J. Baldelomar, et al., "Measuring rat kidney glomerular number and size in vivo with MRI," Am. J. Physiol. Renal Physiol., vol. 314, No. 3, pp. F399-F406, 2018. |
| E. Shelhamer, et al., "Fully Convolutional Networks for Semantic Segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, No. 4, pp. 640-651, Apr. 2017. |
| F. Gao et al., "SD-CNN: A shallow-deep CNN for improved breast cancer diagnosis," Comput. Med. Imaging Graph., vol. 70, pp. 53-62, 2018. |
| F. Gao, et al., "AD-Net: Age-adjust neural network for improved MCI to AD conversion prediction," NeuroImage Clin., pp. 102290, 2020. |
| F. Gao, et al., "Deep Residual Inception Encoder-Decoder Network for Medical Imaging Synthesis," IEEE J. Biomed. Heal. Informatics, vol. 24, No. 1, pp. 39-49, Jan. 2020. |
| F. Gao, et al., "MR efficiency using automated MRI-desktop eProtocol," in Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, vol. 10138, pp. 101380Z, 2017. |
| F. Mahmood, et al., "Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images," IEEE Trans. Med. Imaging, vol. 39, No. 11, pp. 3257-3267, Nov. 2020. |
| F. Meyer, "Topographic Distance and Watershed Lines", Signal processing, vol. 38, No. 1, pp. 113-125, 1994. |
| F. Xing, et al., "An automatic learning-based framework for robust nucleus segmentation," IEEE Trans. Med. Imaging, vol. 35, No. 2, pp. 550-566, Feb. 2016. |
| F. Xing, et al., "Pixel-to-Pixel Learning with Weak Supervision for Single-Stage Nucleus Recognition in Ki67 Images," IEEE Trans. Biomed. Eng., vol. 66, No. 11, pp. 3088-3097, Nov. 2019. |
| F. Yi, et al., "White Blood Cell Image Segmentation Using On-line Trained Neural Network," in Proceedings of the 2005 IEEE, Engineering in Medicine and Biology 27th Annual Conference, pp. 6476-6479, Sep. 2005. |
| G. Bertrand, "On Topological Watersheds," Journal of Mathematical Imaging and Vision, vol. 22, No. 2-3, pp. 217-230, 2005. |
| G. Kindlmann, et al., "Curvature-Based Transfer Functions for Direct Volume Rendering: Methods and Applications", in Proceedings of the 14th IEEE Visualization Conference (VIS'03), pp. 513-520, 2003. |
| G. Lee, et al., "Predicting Alzheimer's disease progression using multi-modal deep learning approach," Sci. Rep., vol. 9, No. 1, pp. 1-12, 2019. |
| G. Litjens, et al., "Computer-aided detection of prostate cancer in MRI," IEEE Trans. Med. Imaging, vol. 33, No. 5, pp. 1083-1092, 2014. |
| H. Kong, et al., "A Generalized Laplacian of Gaussian Filter for Blob Detection and its Applications," IEEE Transactions on Cybernetics, vol. 43, No. 6, pp. 1719-1733, 2013. |
| H. Kong, et al., "Partitioning Histopathological Images: An Integrated Framework for Supervised Color-Texture Segmentation and Cell Splitting," IEEE Transactions on Medical Imaging, vol. 30, No. 9, pp. 1661-1677, Sep. 2011. |
| H. T. Chiang, et al., "Noise Reduction in ECG Signals Using Fully Convolutional Denoising Autoencoders," IEEE Access, vol. 7, pp. 60806-60813, 2019. |
| H. T. H. Phan, et al., "Optimizing contextual feature learning for mitosis detection with convolutional recurrent neural networks," in Proceedings—International Symposium on Biomedical Imaging, pp. 240-243, 2019. |
| H. Wang, et al., "Clump Splitting via Bottleneck Detection and Shape Classification," Pattern Recognition, Elsevier Ltd., vol. 45, No. 7, pp. 2780-2787, 7, 2012. |
| H.-H. Lin, et al., "Cell Segmentation and NC Ratio Analysis of Third Harmonic Generation Virtual Biopsy Images Based on Marker-Controlled Gradient Watershed Algorithm," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), pp. 101-104, 2012. |
| I. Mendichovszky, et al., "How Accurate is Dynamic Contrast-Enhanced MRI in the Assessment of Renal Glomerular Filtration Rate? A Critical Appraisal", Journal of Magnetic Resonance Imaging, vol. 27, No. 4, pp. 925-931, 2008. |
| J. Batson et al., "Noise2Self: Blind denoising by self-supervision," in 36th International Conference on Machine Learning, ICML, pp. 524-533, 2019. |
| J. Cousty, et al., "Watershed Cuts: Minimum Spanning Forests and the Drop of Water Principle," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, No. 8, pp. 1362-1374, Aug. 2009. |
| J. F. Bertram, "Analyzing Renal Glomeruli with the New Stereology", International Review of Cytology, W. J. Kwang and J. Jonathan, eds., pp. 111-172: Academic Press, 1995. |
| J. Kong, et al., "Computer-Aided Evaluation of Neuroblastoma on Whole-Slide Histology Images: Classifying Grade of Neuroblastic Differentiation," Pattern Recognition, vol. 42, No. 6, pp. 1080-1092, 2009. |
| J. M. Sharif, et al., "Red Blood Cell Segmentation Using Masking and Watershed Algorithm: A Preliminary Study," in Proceedings of the 2012 International Conference on Biomedical Engineering (ICoBE), pp. 258-262, Feb. 2012. |
| J. P. Bergeest et al., "Efficient Globally Optimal Segmentation of Cells in Fluorescence Microscopy Images Using Level Sets and Convex Energy Functionals," Medical Image Analysis, Elsevier Ltd., vol. 16, No. 7, pp. 1436-1444, 2012. |
| J. R. Charlton, et al., "Magnetic resonance imaging accurately tracks kidney pathology and heterogeneity in the transition from acute kidney injury to chronic kidney disease," Kidney Int., 2020, doi: 10.1016/j.kint.2020.08.021. |
| J.-P. Bonvalet, et al., "Compensatory Renal Hypertrophy in Young Rats: Increase in the Number of Nephrons," Kidney International, vol. 1, No. 6, pp. 391-396, 1972. |
| K. Bennett et al., "MRI Quantification of Single Glomerular Function", Meeting Abstract, American Journal of Kidney Diseases, vol. 53, No. 4, 25, p. A29, 2009. |
| K. M. Bennett et al., "MRI of the basement membrane using charged nanoparticles as contrast agents," Magn. Reson. Med., vol. 60, No. 3, pp. 564-574, 2008. |
| K. Mikolajczyk, et al. "Scale & Affine Invariant Interest Point Detectors," International Journal of Computer Vision, vol. 60, No. 1, pp. 63-86, 2004. |
| K. Nandy, et al., "Automatic Segmentation and Supervised Learning-based Selection of Nuclei in Cancer Tissue Images," Cytometry Part A, vol. 81A, No. 9, pp. 743-754, 2012. |
| K.M. Bennett et al., "The Emerging Role of MRI in Quantitative Renal Glomerular Morphology" American Journal of Physiology Renal Physiology, vol. 304, No. 10, pp. F1252-F1257, 2013. |
| L. A. Cullen-McEwen, et al., "Estimating Total Nephron Number in the Adult Kidney Using the Physical Disector/Fractionator Combination", Methods in Molecular Biology, vol. 886, pp. 333-350, 2012. |
| L. Annet et al., "Glomerular Filtration Rate: Assessment with Dynamic Contrast-Enhanced MRI and a Cortical-Compartment Model in the Rabbit Kidney", Journal of Magnetic Resonance Imaging, vol. 20, No. 5, pp. 843-849, 2004. |
| L. Hermoye, et al., "Calculation of the Renal Perfusion and Glomerular Filtration Rate From the Renal Impulse Response Obtained With MRI", Magnetic Resonance in Medicine, vol. 51, No. 5, pp. 1017-1025, 2004. |
| L. Xie, et al., "Magnetic Resonance Histology of Age-Related Nephropathy in the Sprague Dawley Rat," Toxicologic Pathology, vol. 40, No. 5, pp. 764-778, 2012. |
| M. G. Uberti, et al., "A Semi-Automatic Image Segmentation Method for Extraction of Brain Volume from In Vivo Mouse Head Magnetic Resonance Imaging Using Constraint Level Sets," Journal of Neuroscience Methods, Elsevier, Ltd., vol. 179, No. 2, pp. 338-344, 2009. |
| M. Heilmann, et al., "Quantification of Glomerular Number and Size Distribution in Normal Rat Kidneys Using Magnetic Resonance Imaging," Nephrology Dialysis Transplantation, vol. 27, No. 1, pp. 100-107, Jan. 1, 2012. |
| M. Khoshdeli, et al., "Feature-Based Representation Improves Color Decomposition and Nuclear Detection Using a Convolutional Neural Network," IEEE Trans. Biomed. Eng., vol. 65, No. 3, pp. 625-634, Mar. 2018. |
| M. Liu, et al., "Classification of alzheimer's disease by combination of convolutional and recurrent neural networks using FDG-PET images," Front. Neuroinform., vol. 12, pp. 35, 2018. |
| M. N. Kashif, et al., "Handcrafted features with convolutional neural networks for detection of tumor cells in histology images," in Proceedings—International Symposium on Biomedical Imaging, pp. 1029-1032, 2016. |
| M. Zeng, et al., "Measurement of Single-Kidney Glomerular Filtration Function from Magnetic Resonance Perfusion Renography", European Journal of Radiology, vol. 84, No. 8, pp. 1419-1423, 2015. |
| M. Zhang, "Small Blob Detection in Medical Images", PhD Thesis, Arizona State University, May 2015. |
| M. Zhang, et al., "Efficient Small Blob Detection Based on Local Convexity, Intensity and Shape Information," IEEE Trans. Med. Imaging, vol. 35, No. 4, pp. 1127-1137, Apr. 2016. |
| M. Zhang, et al., "Small Blob Identification in Medical Images Using Regional Features From Optimum Scale," IEEE Trans. Biomed. Eng., vol. 62, No. 4, pp. 1051-1062, Apr. 2015. |
| N. Badshah, et al., "Multigrid Method for the Chan-Vese Model in Variational Segmentation," Communications in Computational Physics, vol. 4, No. 2, pp. 294-316, 2008. |
| N. Harder, et al., "Automated Recognition of Mitotic Patterns in Fluorescence Microscopy Images of Human Cells," in Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, pp. 1016-1019, 2006. |
| N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern., vol. 9, No. 1, pp. 62-66, 1979. |
| N. Wahab, et al., "Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer histopathological images," Microscopy, vol. 68, No. 3, pp. 216-233, 2019. |
| N. Xu, et al., "Object Segmentation Using Graph Cuts Based Active Contours," Science Direct, Computer Vision and Image Understanding, vol. 107, No. 3, pp. 210-224, 2007. |
| O. Dzyubachyk, et al., "Advanced Level-Set-Based Cell Tracking in Time-Lapse Fluorescence Microscopy," IEEE Transactions on Medical Imaging, vol. 29, No. 3, pp. 852-867, Mar. 2010. |
| P. Esser, et al., "A Variational U-Net for Conditional Appearance and Shape Generation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 8857-8866, 2018. |
| P. Szeptycki, et al., "Conformal Mapping-Based 3D Face Recognition," in Proceedings of the Fifth International Symposium on 3D Data Processing, Visualization and Transmission, pp. 1-8, 2010. |
| P.S. Tofts, et al., "Precise Measurement of Renal Filtration and Vascular Parameters Using a Two-Compartment Model for Dynamic Contrast-Enhanced MRI of the Kidney Gives Realistic Normal Values", European Radiology, vol. 22, No. 6, pp. 1320-1330, 2012. |
| PCT; International Search Report & Written Opinion dated Oct. 7, 2014, in PCT Application No. US2014/059545. |
| Q. Dou, et al., "Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection," IEEE Trans. Biomed. Eng., vol. 64, No. 7, pp. 1558-1567, Jun. 2017. |
| Q. Wen, et al., "A Delaunay Triangulation Approach for Segmenting Clumps of Nuclei," in Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 9-12, 2009. |
| R. Abramson et al., "Methods and Challenges in Quantitative Imaging Biomarker Development", Academic Radiology, vol. 22, No. 1, pp. 25-32, 2015. |
| R. Komatsu, et al., "Effectiveness of U-Net in Denoising RGB Images," Comput. Sci. Inf. Techn, pp. 1-10, 2019. |
| R. Rouhi, et al., "Benign and malignant breast tumors classification based on region growing and CNN segmentation," Expert Syst. Appl., vol. 42, No. 3, pp. 990-1002, 2015. |
| S. C. Beeman et al., "Measuring glomerular number and size in perfused kidneys using MRI," AJP Ren. Physiol., vol. 300, No. 6, pp. F1454-F1457, 2011. |
| S. C. Beeman et al., "MRI-based glomerular morphology and pathology in whole human kidneys," AJP Ren. Physiol., vol. 306, No. 11, pp. F1381-F1390, 2014. |
| S. C. Beeman et al., "Toxicity, Biodistribution, and Ex Vivo MRI Detection of Intravenously Injected Cationized Ferritin," Magnetic Resonance in Medicine, vol. 69, No. 3, pp. 853-861, 2013. |
| S. E. A. Raza, et al., "Deconvolving convolutional neural network for cell detection," in Proceedings—International Symposium on Biomedical Imaging, pp. 891-894, 2019. |
| S. K. Nath, et al., "Cell Segmentation Using Coupled Level Sets and Graph-Vertex Coloring," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 101-108, 2006. |
| S. Liu, et al., "Multimodal Neuroimaging Feature Learning for Multiclass Diagnosis of Alzheimer's Disease," IEEE Trans. Biomed. Eng., vol. 62, No. 4, pp. 1132-1140, Apr. 2015. |
| S.P. Sourbron, et al., "MRI-Measurement of Perfusion and Glomerular Filtration in the Human Kidney with a Separable Compartment Model", Investigative Radiology, vol. 43, No. 1, pp. 40-48, Jan. 2008. |
| T. C. Chiang, et al., "Tumor detection in automated breast ultrasound using 3-D CNN and prioritized candidate aggregation," IEEE Trans. Med. Imaging, vol. 38, No. 1, pp. 240-249, Jan. 2019. |
| T. F. Chan, et al., "Active Contours Without Edges," IEEE Transactions on Image Processing, vol. 10, No. 2, pp. 266-277, Feb. 2001. |
| T. Falk, et al., "U-Net: deep learning for cell counting, detection, and morphometry," Nat. Methods, vol. 16, No. 1, pp. 67-70, 2019. |
| T. Lindeberg, "Feature Detection with Automatic Scale Selection," Int. J. Comput. Vis., vol. 30, No. 2, pp. 79-116, 1998. |
| T. Wu, et al., "Quantitative Imaging System for Cancer Diagnosis and Treatment Planning: An Interdisciplinary Approach," in the Operations Research Revolution, pp. 153-177, 2017. |
| USPTO, Non-Final Rejection, dated Jan. 4, 2018, in U.S. Appl. No. 15/082,095. |
| USPTO, Notice of Allowance, dated Apr. 26, 2018, in U.S. Appl. No. 15/082,095. |
| W. E. Hoy, et al., "Nephron Number, Glomerular Volume, Renal Disease and Hypertension," Current Opinion in Nephrology and Hypertension, vol. 17, No. 3, pp. 258-265, 2008. |
| Y. Al-Kofahi, et al., "Improved automatic detection and segmentation of cell nuclei in histopathology images," IEEE Trans. Biomed. Eng., vol. 57, No. 4, pp. 841-852, Apr. 2010. |
| Y. Song, et al., "Dynamic residual dense network for image denoising," Sensors (Switzerland), vol. 19, No. 17, pp. 3809, 2019. |
| Y. Wang, et al., "Shape Analysis with Conformal Invariants for Multiply Connected Domains and its Application to Analyzing Brain Morphology," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 202-209, 2009. |
| Y. Xu, et al., "Improved small blob detection in 3D images using jointly constrained deep learning and Hessian analysis," Sci. Rep., vol. 10, No. 1, pp. 1-12, 2020. |
| Y. Xu, et al., "Small Blob Detector Using Bi-Threshold Constrained Adaptive Scales," IEEE Trans. Biomed. Eng., vol. 68, No. 9, pp. 2654-2665, 2021. |
| Y. Xu, et al., "U-net with optimal thresholding for small blob detection in medical images," in IEEE International Conference on Automation Science and Engineering, pp. 1761-1767, 2019. |
| Y. Xue, et al., "Training Convolutional Neural Networks and Compressed Sensing End-to-End for Microscopy Cell Detection," IEEE Trans. Med. Imaging, vol. 38, No. 11, pp. 2632-2641, Nov. 2019. |
| Y.-D. Zhang, et al., "Feasibility Study of High-Resolution DCE-MRI for Glomerular Filtration Rate (GFR) Measurement in a Routine Clinical Modal", Magnetic Resonance Imaging, vol. 33, No. 8, pp. 978-983, 2015. |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220318999A1 (en) | 2022-10-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Ramola et al. | | Study of statistical methods for texture analysis and their modern evolutions |
| Kong et al. | | A generalized Laplacian of Gaussian filter for blob detection and its applications |
| CN114600155B (en) | | Weak supervised multitasking learning for cell detection and segmentation |
| US11200667B2 (en) | | Detection of prostate cancer in multi-parametric MRI using random forest with instance weighting and MR prostate segmentation by deep learning with holistically-nested networks |
| Xu et al. | | Automatic nuclei detection based on generalized laplacian of gaussian filters |
| Yuan et al. | | Hybrid-feature-guided lung nodule type classification on CT images |
| Sheba et al. | | An approach for automatic lesion detection in mammograms |
| US7346209B2 (en) | | Three-dimensional pattern recognition method to detect shapes in medical images |
| Seff et al. | | 2D view aggregation for lymph node detection using a shallow hierarchy of linear classifiers |
| US20170231550A1 (en) | | Method and device for analysing an image |
| US20110075920A1 (en) | | Multi-Level Contextual Learning of Data |
| Taha et al. | | Automatic polyp detection in endoscopy videos: A survey |
| Al-Karawi et al. | | An evaluation of the effectiveness of image-based texture features extracted from static B-mode ultrasound images in distinguishing between benign and malignant ovarian masses |
| US12299876B2 (en) | | Deep learning based blob detection systems and methods |
| Matos et al. | | Diagnosis of breast tissue in mammography images based local feature descriptors |
| Hapsari et al. | | Modified Gray-Level Haralick Texture Features for Early Detection of Diabetes Mellitus and High Cholesterol with Iris Image |
| Bhavani et al. | | Image registration for varicose ulcer classification using KNN classifier |
| Djunaidi et al. | | Gray level co-occurrence matrix feature extraction and histogram in breast cancer classification with ultrasonographic imagery |
| Kim et al. | | Detection and weak segmentation of masses in gray-scale breast mammogram images using deep learning |
| Savitha et al. | | A fully-automated system for identification and classification of subsolid nodules in lung computed tomographic scans |
| Wang et al. | | Improved classifier for computer-aided polyp detection in CT Colonography by nonlinear dimensionality reduction |
| Huang et al. | | Learning to segment key clinical anatomical structures in fetal neurosonography informed by a region-based descriptor |
| Karale et al. | | A screening CAD tool for the detection of microcalcification clusters in mammograms |
| Sao Khue et al. | | Improving brain tumor multiclass classification with semantic features |
| Shakoor | | Lung tumour detection by fusing extended local binary patterns and weighted orientation of difference from computed tomography |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY, ARIZONA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XU, YANZHE;WU, TERESA;GAO, FEI;REEL/FRAME:059451/0173; Effective date: 20210323 |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO MICRO (ORIGINAL EVENT CODE: MICR); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | AS | Assignment | Owner name: NATIONAL INSTITUTES OF HEALTH (NIH), U.S. DEPT. OF HEALTH AND HUMAN SERVICES (DHHS), U.S. GOVERNMENT, MARYLAND; Free format text: CONFIRMATORY LICENSE;ASSIGNOR:ARIZONA STATE UNIVERSITY-TEMPE CAMPUS;REEL/FRAME:064657/0300; Effective date: 20220523 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |