WO2014158345A1 - Procédés et systèmes permettant de détecter la bifurcation d'un récipient - Google Patents

Procédés et systèmes permettant de détecter la bifurcation d'un récipient Download PDF

Info

Publication number
WO2014158345A1
WO2014158345A1 PCT/US2014/014298 US2014014298W WO2014158345A1 WO 2014158345 A1 WO2014158345 A1 WO 2014158345A1 US 2014014298 W US2014014298 W US 2014014298W WO 2014158345 A1 WO2014158345 A1 WO 2014158345A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
feature
samples
filters
Prior art date
Application number
PCT/US2014/014298
Other languages
English (en)
Inventor
Michael Abramoff
Gwenole Quellec
Original Assignee
University Of Iowa Research Foundation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University Of Iowa Research Foundation filed Critical University Of Iowa Research Foundation
Priority to US14/764,926 priority Critical patent/US20150379708A1/en
Publication of WO2014158345A1 publication Critical patent/WO2014158345A1/fr

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Definitions

  • a primary step in automated detection of DR is the detection of microaneurysms, which are highly pathognomic and often the first sign of DR, although it has been shown that detecting other lesions can improve DR detection performance.
  • current methods fail to sufficiently differentiate lesions from retinal blood vessels: when a more sensitive setting was used, false positives occurred on the vessels, while when a specific setting was used, lesions connected to or close to the vasculature were missed.
  • Negative lesion confounders are typically structures that are similar looking, but not the target lesion such as vessel portions in the microaneurysm case and exudates, cotton-wool spots and Stargardt flecks in the drusen case.
  • Positive lesion confounders are typically a subset of target lesions that are easily missed because positive lesions have specific properties that are substantially different from the standard case, but should still be detected. In the case of microaneurysm detection, these are microaneurysms connected to the vasculature, for example.
  • negative lesion confounders are the false positives of simple detectors and positive lesion confounders are the false negatives of simple detectors.
  • lesions are hardly detected if the lesions are not specifically modeled. In both cases, lesions that are proximate to other lesions, including being fused with these other lesions, also are positive lesion confounders.
  • retinal vessel bifurcations can be landmarks in retinal images.
  • Bifurcation metrics can aid clinical evaluation and can serve as input for downstream image processing, such as retinal image registration and branching analysis. Accordingly, system and methods for detecting and/or differentiating objects of interest such as lesions, bifurcations, abnormalities, and/or other identifiable objects or collections of pixels, are desirable.
  • a method of identifying an object of interest in digital images can comprise obtaining first samples of an intensity distribution of one or more objects of interest in one or more of the digital images based upon one or more wavelength bands; obtaining second samples of an intensity distribution of confounder objects in one or more of the digital images, at a predetermined frequency; transforming the first and second samples into an appropriate first space; performing dimension reduction on the transformed first and second samples, whereby the dimension reduction of the transformed first and second samples generates an object detector; transforming one or more of the digital images into the first space; performing dimension reduction on the transformed digital images, whereby the dimension reduction of the transformed digital images generates one or more reduced images; classifying one or more pixels of the one or more reduced images based on a comparison with the object detector; and identifying one or more objects of interest in the reduced digital images from the classified pixels.
  • a method of identifying an object of interest in a plurality of digital images configured as an image group can comprise: obtaining first samples of an intensity distribution of one or more objects of interest in the image group based upon one or more wavelength bands; obtaining second samples of the intensity distribution of confounder objects in the image group, at a frequency high enough to affect a pre-defined performance metric; transforming the first and second samples into an appropriate first space; performing dimension reduction on the transformed first and second samples, whereby the dimension reduction of the transformed first and second samples generates an object detector; transforming the image group into the first space; projecting the transformed image group into the reduced dimension space to generate a reduced image group; classifying each pixel in the reduced image group with an appropriate neighborhood based on a comparison with the object detector; and automatically identifying one or more objects of interest in the reduced image group from abnormal pixels based upon the comparison with the object detector.
  • a system can comprise a memory for storing digital images; and a processor in communication with the memory, the processor configured to: obtain first samples of an intensity distribution of one or more objects of interest in one or more of the digital images based upon one or more wavelength bands; obtain second samples of an intensity distribution of confounder objects in one or more of the digital images, at a predetermined frequency; transform the first and second samples into an appropriate first space; perform dimension reduction on the transformed first and second samples, whereby the dimension reduction of the transformed first and second samples generates an object detector; transform one or more of the digital images into the first space; perform dimension reduction on the transformed digital images, whereby the dimension reduction of the transformed digital images generates one or more reduced images; classify one or more pixels of the one or more reduced images based on a comparison with the object detector; and identify one or more objects of interest in the reduced digital images from the classified pixels.
  • exemplary methods can comprise receiving at least one image having one or more annotations indicating a feature.
  • Methods can comprise generating training images from the at least one image. Each training image can be based on a respective section of the at least one image.
  • the training images can comprise positive images having the feature and negative images without the feature.
  • Methods can comprise generating a feature space based on the positive images and the negative images.
  • Methods can further comprise identifying the feature in one or more unclassified images based upon the feature space.
  • Figure 1 is a flow diagram of an exemplary method
  • Figure 2 is an exemplary representation of image samples
  • Figure 3 is a flow diagram of an exemplary method
  • Figure 4 is a flow diagram of an exemplary method
  • Figure 5 is a flow diagram of an exemplary method
  • Figure 6 is a block diagram of an exemplary computing system
  • Figures 7A-7D are exemplary image samples
  • Figure 8 is a representation of an exemplary image determination
  • Figure 9 is a graph of FROC of microaneurysm detection
  • Figures 10A-10D are exemplary graphs of time complexity assessments
  • Figure 1 1 is an exemplary graph of ROC of microaneurysm detection
  • Figure 12 is an exemplary graph of ROC of drusen versus flecks classification
  • Figure 13 A illustrates an overview of an exemplary filter framework comprising two stages
  • Figure 13B illustrates an example fundus image with annotated bifurcations labeled by dark circles
  • Figure 13C illustrates bifurcation samples, and in red, green, and blue channels
  • Figure 14 illustrates example filters based on the example data
  • Figure 15 is an example plot of the cumulative variance of red, green, and blue channels with respect to the number of principal components
  • Figure 16A illustrates an AUC curve of test images with 10 selected features
  • Figure 16B illustrates an example probability map of example data
  • FIG. 16C illustrates vein bifurcations detected by the methods described herein
  • Figure 17 is a flowchart illustrating an exemplary method of analyzing an image.
  • Figure 18 is a flowchart illustrating an exemplary method of analyzing an image.
  • An "object” can be, for example, a regular or irregular region in an image with defined borders contained within a digital image.
  • An "object of interest” can be, for example, an object that most closely resembles the concept that the user(s) are interested in detecting, for example, in a background.
  • objects of interest can be lesions or abnormalities.
  • objects of interest can be, but not limited to, microaneurysms, dot hemorrhages, flame-shaped hemorrhages, sub-intimal hemorrhages, sub-retinal hemorrhages, pre-retinal hemorrhages, micro-infarctions, cotton-wool spots, yellow exudates, and the like.
  • a “feature” can be, for example, a group of one or more descriptive
  • a "set of features" can be, for example, a customized group of one or more descriptive characteristics of objects of interest.
  • a “threshold” can be, for example, a level, point, or value above which
  • Level, points, or values include probabilities, sizes in pixels, and values representing pixel brightness, or similarity between features.
  • Background can be, for example, some or all the regions in all images that are not objects of interest.
  • a "supervised procedure” can be, for example, a computer programming method wherein a program learns a function from examples of inputs and outputs.
  • the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web- implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
  • These computer program instructions may also be stored in a computer- readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer- readable instructions for implementing the function specified in the flowchart block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
  • the methods and systems disclosed can provide a way to detect objects in images, where the detectors for these objects are optimized.
  • the methods can be used in a variety of applications, such as to detect abnormalities and other structures in retinal images, in other medical images, and in any other images, including robotic and computer vision as well as in online search.
  • results can be achieved in images of the retina, of blood cells, of fruit trees, and in satellite images of helicopter landing pads.
  • an optimal filter framework is provided to solve lesion
  • the optimal filter framework can provide detection that is as close to instantaneous as possible.
  • the optimal filter framework can allow instantaneous feedback to the camera operator, as well as immediate feedback of the detection results to the patient.
  • the optimal filter framework can automatically generate features adapted for differentiating target lesions, including atypical target lesions, from negative lesion confounders.
  • the optimal filter framework can rely on the generation of a set of filters matched both to the lesions and to common positive as well as negative lesion confounders.
  • modeling lesions mathematically is convenient in the sense that no training images are required: the human brain is able to generalize out verbal descriptions by clinicians, though a limited number of samples can serve the modeler well. Moreover, no additional annotations are required each time the method is applied to a new dataset. On the other hand, deciding how to model the target lesions can be subjective and biased, and a good description may not (yet) be available. Therefore, fully automating the design is attractive, as it makes the disclosed approach immediately applicable to a wider range of lesion detection problems, and any modeling bias is avoided.
  • FIG. 1 is a block diagram illustrating an exemplary method. This exemplary method is only an example of an operative method and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.
  • the method 100 can comprise generating an optimal set of filters, at 101, to differentiate objects of interest such as lesions and positive and negative lesion confounders, for example.
  • an optimal filter framework for detecting target lesions can comprise obtaining a representative set of samples from one or more of lesions, positive lesion confounders, and negative lesion confounders, at 102.
  • a set of filters, or a basis can be derived from the samples to optimally represent the samples.
  • the representative set of samples can uniformly describe the space of all possible samples (e.g., lesion samples), as well as the space of possible confounders (e.g., negative lesion).
  • the most common lesion confounders can be sampled.
  • the design of an exemplary filter can be defined as one of the following:
  • a mathematical model can be designed by a modeler for both lesions and lesion confounders, and the model parameter space can subsequently be traversed to generate (different) artificial image samples;
  • the method of FIG. 1 can comprise a training stage and a training stage
  • a set of images e.g., digital images with expert-annotated objects
  • object filters can be used to build one or more object filters, and to optimize the classification.
  • the training set of images can be divided into a first subset and a second subset.
  • the first subset can be used to build an optimal set of object filters.
  • the second subset can be used to perform feature selection on the optimal set of object filter, resulting in selected filters.
  • a classifier can be trained based on the selected filters, resulting in a trained vision machine for which the performance can be determined.
  • the objects in a test images set can be detected and localized by the system and can be compared to the annotation of an expert.
  • a test images set e.g., retinal images
  • one or more of the first and second subsets and the filters can be representative samples from the real data that can be kept entirely separated.
  • a set of filters can be obtained to form a feature space and reference points in the feature space.
  • an unknown fundus image under detection can be first transferred into the feature space and each pixel can be assigned the probability of being a bifurcation, thus a soft probability map of bifurcations can be generated.
  • modeling 104) and/or data-driven design (e.g., direct sampling 106) approach can be used in order to obtain samples representative of the objects (e.g., lesion) of interest, as well as samples from positive lesions confounders, negative lesions confounders, or any combination of these.
  • a modeler can be configured to translate verbal descriptions with clinicians, as well as intensity distributions in a limited number of samples selected by the clinicians, into mathematical equations.
  • the mathematical equations can be based upon a continuous model of the intensity distribution of the modeled lesions, and were defined for typical lesions, for positive lesion confounders if any are identified, and for negative lesion confounders, if any are identified.
  • the typical intensity distribution for both microaneurysms and drusen can be modeled as:
  • FIG. 2 a plurality of image samples and samples derived from the models are provided in FIG. 2.
  • Other images in various domains can be processed in a similar manner.
  • image samples first row 202
  • similar samples derived from mathematical models in the spatial domain second row 204
  • after convolution with Haar filters third row 206: vertical Haar filter
  • the first six samples can come from the microaneurysm detection problem: a typical lesion (a), positive lesion confounders (b), (c) and negative lesion confounders (d), (e), (f).
  • the last two samples come from the drusen (g) versus flecks (h) classification problem.
  • a negative microaneurysm confounder can be defined by a linear vessel portion.
  • a negative microaneurysm confounder can be defined by a vessel crossing: two vessels, modeled by iine rVessel(x. //: n . 8 ⁇ )— profile • 1
  • the confounder can be empirically expressed as:
  • positive lesion confounders for microaneurysm detection can be modeled by a translation of one or more of the following functions:
  • negative lesion confounders can be modeled for drusen
  • a pisciform intensity distribution can be used.
  • the pisciform intensity distribution can be used to model Stargardt's disease flecks, as illustrated below: o , 2 ⁇ ⁇ , 0 ⁇ — prof Ue ⁇ rT: S 6 ⁇
  • samples can be generated from each intensity distribution model according to the exemplary method illustrated in FIG. 3.
  • lesion detection is used as an example, other objects of interest can be used in various domains.
  • a scale for the generated samples can be selected based on an estimation of the typical size of the lesions and lesions confounders in images.
  • a wide ratio R can be selected since most negative lesion confounders look similar to the lesions at the center of the patch.
  • lesions can be discriminated from negative lesion confounders by the periphery of the patch.
  • vessel crossings may look similar to a microaneurysm at the center, but, as opposed to microaneurysms, there will be linear structures, oriented toward the center of the patch, at the periphery.
  • step 304 for each parameter 73 ⁇ 4 of a model (the standard deviations ai and a 2 of the lesions, the angle between vessel segments, the distance between two neighboring lesions, etc.), a range [m min; m max] of variation can be estimated from the training set.
  • image patches can be generated from each model by uniformly sampling the product space
  • a color model can be used to improve further the presented graylevel modeling of the lesions.
  • gray levels in the generated samples are mapped to colors, using a look-up table (e.g. intensity 0— the background— can be mapped to pale orange and intensity -1— the center of vessels and microaneurysms— can be mapped to dark red, for example). If the scale range for the target lesion is very important, then different sizes of patches can be generated.
  • an advanced physical model of the lesion transform e.g. the effect the local pathology has on overlying and surrounding tissue
  • a model would be highly complex, and more importantly, the model's design would require substantial domain knowledge, as well as a substantial amount of experiments to perfect the model for each new application.
  • Modeling is even more challenging if color is involved: a direct sampling color model simply requires sampling in the entire color space, instead of complex modeling of the interaction of color channel intensities.
  • a data- driven or direct sampling approach has many advantages.
  • the target lesions e.g., typical lesions and positive lesion confounders
  • the annotation can comprise an indication of the center of the lesions, or segment the lesions.
  • a candidate lesion detector can be used to find a center of the lesion within the segmented region.
  • a set of sample images can represent all images and all local image variations that the disclosed filter framework may encounter to optimize the framework.
  • one or more initial steps can be performed, at 108.
  • the presence of low-frequency noise e.g., slow background intensity variation
  • high-frequency noise e.g., salt-and-pepper noise, compression artifacts, etc.
  • the noise-free samples S and/or noisy samples S + N can be projected into a transformed space where the signal-to-noise ratio is high, such as
  • T(S) ⁇ ' « T(S + ) ' a T(S 4- Af ) ' T(S) ⁇ ' « T(S + ) ' a T(S 4- Af ) '
  • T . is t k he transform operator
  • N and N' are two realizations of a random noise.
  • representative samples characterize the range of variations for the shape, the intensity and the color of the target lesions and lesion confounders.
  • the intensity of the object can easily be normalized across samples, at 108. Normalizing this parameter reduces the dimensionality of the problem, without significantly affecting the quality of the image sample
  • the reference samples for lesions and lesion confounders can be projected in the transformed space of high signal-to-noise ratio and subsequently normalized.
  • the reference samples can be maximally representative of all lesions and lesion confounders, provided that these are also projected into the transformed space and normalized.
  • classifying a new sample can be reduced to finding the most similar reference samples, in a least square sense.
  • the normalized transformed space described above can be used as feature space for classification with a k-NN classifier. Because it can be desirable to achieve close to instantaneous detection, the dimensionality of the feature space can be configured to be as low as possible.
  • the reference samples can be relied upon to find the filters generating the optimal feature space for classifying new samples: a reduced feature space preserving information in the least square sense.
  • the optimal projection for given samples can be provided by the Principal Component Analysis (PCA) of these samples.
  • PCA is an orthogonal linear transformation that transforms the set of samples into a new coordinate system such that the greatest variance, by any projection of the data, comes to lie on the first coordinate, the second greatest variance on the second coordinate, etc.
  • each sample can be considered as a single observation, through applying PCA on the set of samples, principal components can be obtained which can be the same size as the samples.
  • These principal components can be the optimal filters because the linear combination of these filters represents the entire set of samples, and the coefficients of the linear combination can be obtained by the convolution of original samples with corresponding filters.
  • the optimal filters thus found can span the initial, high dimensional feature space into which any image pixel can be projected.
  • bifurcations in a training set can be projected into the feature space in this way to form the positive reference points.
  • Non-bifurcation points can be randomly sampled in the training set and projected into the feature space in the same way to form the negative reference points. Both bifurcation and non-bifurcation points together can form the reference points in the feature space.
  • sampled pixels from the training set can be projected into the initial feature space by convolving them with the filters for later usage.
  • images can be processed based upon the method illustrated in
  • FIG. 4 However, other sequences, steps, and processes, can be used and applied in various domains.
  • step 402 a preprocessing step can be used to identify and normalize
  • step 404 for each pixel pi;j selected in the preprocessing step 402, the normalized neighborhood of the pixel - the sample - can be input to the classifier.
  • the risk of presence of the target lesion can be computed by the classifier for pi;j and a lesion risk map can thus be obtained, at 406.
  • step 408 if the lesions need to be segmented, the risk map defined in step 406 can be thresholded, and the connected foreground pixels can be identified using morphological labeling.
  • Each connected component can be regarded as an automatically detected lesion, and this lesion can be assigned a risk value defined as the maximal pixelwise risk of presence within the connected component.
  • step 410 if a probabilistic diagnosis for the entire image is desired/required, the risks assigned to each automatically detected lesions can be fused. Typically, if a single type of lesions is detected, the probabilistic diagnosis for the image can simply be defined as the maximum risk of presence of a target lesion.
  • a candidate extraction step 116 can be used for both selecting reference negative lesion confounders in the direct sampling and selecting the pixels that can be fed to the classifier when processing unseen images.
  • the first step to extract candidates in an input image is to transform the image, at 502.
  • the techniques discussed at step 108 can be used to transform the image.
  • other transforms can be used.
  • the intensity of the potential lesion within the neighborhood Ni;j of the pixel pi;j can then be estimated, at 504.
  • the techniques discussed at step 108 can be used to estimate intensity of pixels of the image.
  • other estimation processes and methods can be used.
  • step 506 the intensity of the potential legion can be compared to a
  • threshold or pre-defined range value If the intensity of the potential lesion is outside the normal range of intensities for the target lesions— the range observed on the training set plus a margin— , pi;j can be rejected as a potential lesion. As an example, candidates with low intensity can be discarded (the background): by chance, magnified background pixels might look similar to the target lesions;
  • Attenuated pigment clumps or attenuated small nevi might look like lesions, so candidates with an unusually high intensity can be discarded. If the lesion intensity is within that range, the neighborhood Ni;j can be normalized (i.e. divided by the estimated intensity), at 508, before being further classified.
  • optimal feature space at 118, can be derived from a representative set of lesions and lesion confounders, a straightforward approach to classify new image samples within the optimal feature space can be to find the most similar representative samples within this space.
  • a lesion risk can be computed (e.g., at 112) for a new sample from the distance to these representative samples and their labels.
  • the distance between two samples can simply be defined as the Euclidian distance between their projections on the optimal set of filters. Approximate nearest neighbor search, based on k-d trees, can be applied to find the most similar reference samples.
  • a target lesion e.g., typical lesion or common positive lesion confounder
  • ⁇ ( ⁇ ) - ⁇ if n is a common negative lesion confounder.
  • This risk ranges from -1 (e.g., high risk of being negative) to 1 (e.g., high risk of being positive).
  • the distance to the most similar reference samples is likely to be high.
  • a risk close to zero can be measured. In particular, if background pixels are not discarded by the candidate selection step, it is likely that a risk close to zero is measured.
  • the optimal filters and reference points can be used to classify whether a pixel in a fundus image is a bifurcation.
  • An image under detection can be transferred into the feature space by being convolved with the optimal filters so each pixel corresponds to a query point.
  • the k-Nearest Neighbor algorithm k-NN
  • ANN Approximate Nearest Neighbor
  • p n/k
  • k means the number of neighbors considered
  • n is the number of positive points among the k nearest neighbors.
  • probability maps can be generated with different optimal filters chosen by Sequential Forward Selection (SFS) for each iteration.
  • the metric can evaluate the probability maps and give the AUC for the ROC curve for the whole set of probability maps.
  • SFS first selects out one filter with the highest AUC, then adds a new filter from the remaining filters such that the two filters have the highest AUC.
  • SFS can add a new filter from the remaining filters to give the highest AUC until the stop criteria of the feature selection is met.
  • the feature selection stops when the number of selected filters reaches the maximal number of filters, or the AUC starts to decline.
  • a metric to give the AUC can be pixel-based. Since the
  • the map can be thresholded to obtain a binary image and evaluate the binary image pixel by pixel.
  • every pixel P is decided as true positive (TP), false positive (FP), true negative (TN) or false negative (FN) according to the pixel's distance to a bifurcation, which is shown by ⁇ TP If P Is lifefei and
  • factor F introduction of factor F can enhance the influence of positive points so the ROC curve is more sensitive to changes of sensitivity and specificity.
  • the metric giving the AUC for the probability maps is outlined as follows:
  • a unit can be software, hardware, or a combination of software and hardware.
  • the units can comprise the processing Software 606 as illustrated in FIG. 6 and described below.
  • the units can comprise a computer 601 as illustrated in FIG. 6 and described below.
  • FIG. 6 is a block diagram illustrating an exemplary operating environment for performing the disclosed methods.
  • This exemplary operating environment is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.
  • Examples of well known computing systems, environments, and/or configurations that can be suitable for use with the systems and methods comprise, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples comprise set top boxes,
  • the processing of the disclosed methods and systems can be performed by software components.
  • the disclosed systems and methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices.
  • program modules comprise computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules can be located in both local and remote computer storage media including memory storage devices.
  • the components of the computer 601 can comprise, but are not limited to, one or more processors or processing units 603, a system memory 612, and a system bus 613 that couples various system components including the processor 603 to the system memory 612. In the case of multiple processing units 603, the system can utilize parallel computing.
  • the system bus 613 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • such architectures can comprise an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, and a Peripheral Component Interconnects (PCI), a PCI-Express bus, a Personal Computer Memory Card Industry Association (PCMCIA), Universal Serial Bus (USB) and the like.
  • ISA Industry Standard Architecture
  • MCA Micro Channel Architecture
  • EISA Enhanced ISA
  • VESA Video Electronics Standards Association
  • AGP Accelerated Graphics Port
  • PCI Peripheral Component Interconnects
  • PCMCIA Personal Computer Memory Card Industry Association
  • USB Universal Serial Bus
  • the bus 613, and all buses specified in this description can also be implemented over a wired or wireless network connection and each of the subsystems, including the processor 603, a mass storage device 604, an operating system 605, processing software 606, processing data 607, a network adapter 608, system memory 612, an Input/Output Interface 610, a display adapter 609, a display device 611, and a human machine interface 602, can be contained within one or more remote computing devices
  • the computer 601 typically comprises a variety of computer readable media.
  • Exemplary readable media can be any available media that is accessible by the computer 601 and comprises, for example and not meant to be limiting, both volatile and non-volatile media, removable and non-removable media.
  • the system memory 612 comprises computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM).
  • RAM random access memory
  • ROM read only memory
  • the system memory 612 typically contains data such as processing data 607 and/or program modules such as operating system 605 and processing software 606 that are immediately accessible to and/or are presently operated on by the processing unit 603.
  • the computer 601 can also comprise other removable/nonremovable, volatile/non-volatile computer storage media.
  • FIG. 6 illustrates a mass storage device 604 which can provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computer 601.
  • a mass storage device 604 can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like.
  • any number of program modules can be stored on the mass storage device 604, including by way of example, an operating system 605 and processing software 606.
  • Each of the operating system 605 and processing software 606 (or some combination thereof) can comprise elements of the programming and the processing software 606.
  • Processing data 607 can also be stored on the mass storage device 604.
  • Processing data 607 can be stored in any of one or more databases known in the art. Examples of such databases comprise, DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, mySQL, PostgreSQL, and the like.
  • the databases can be centralized or distributed across multiple systems.
  • the user can enter commands and information into the computer 601 via an input device (not shown).
  • input devices comprise, but are not limited to, a keyboard, pointing device (e.g., a "mouse"), a microphone, a joystick, a scanner, tactile input devices such as gloves, and other body coverings, and the like
  • a human machine interface 602 that is coupled to the system bus 613, but can be connected by other interface and bus structures, such as a parallel port, game port, an IEEE 1394 Port (also known as a Firewire port), a serial port, or a universal serial bus (USB).
  • a display device 611 can also be connected to the system bus 613 via an interface, such as a display adapter 609. It is contemplated that the computer 601 can have more than one display adapter 609 and the computer 601 can have more than one display device 611.
  • a display device can be a monitor, an LCD (Liquid Crystal Display), or a projector.
  • other output peripheral devices can comprise components such as speakers (not shown) and a printer (not shown) which can be connected to the computer 601 via Input/Output Interface 610. Any step and/or result of the methods can be output in any form to an output device.
  • the computer 601 can operate in a networked environment using logical connections to one or more remote computing devices 614a,b,c.
  • a remote computing device can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and so on.
  • Logical connections between the computer 601 and a remote computing device 614a,b,c can be made via a local area network (LAN) and a general wide area network (WAN).
  • LAN local area network
  • WAN general wide area network
  • a network adapter 608 can be implemented in both wired and wireless environments. Such networking environments are conventional and commonplace in offices, enterprise-wide computer networks, intranets, and the Internet 615.
  • processing software 606 can be stored on or transmitted across some form of computer readable media. Any of the disclosed methods can be performed by computer readable instructions embodied on computer readable media.
  • Computer readable media can be any available media that can be accessed by a computer.
  • Computer readable media can comprise “computer storage media” and “communications media.”
  • “Computer storage media” comprise volatile and non-volatile, removable and non-removable media implemented in any methods or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
  • Exemplary computer storage media comprises, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • the methods and systems can employ Artificial Intelligence techniques such as machine learning and iterative learning.
  • Artificial Intelligence techniques such as machine learning and iterative learning. Examples of such techniques include, but are not limited to, expert systems, case based reasoning, Bayesian networks, behavior based AI, neural networks, fuzzy systems, evolutionary computation (e.g. genetic algorithms), swarm intelligence (e.g. ant algorithms), and hybrid intelligent systems (e.g. Expert inference rules generated through a neural network or production rules from statistical learning).
  • microaneurysm detection for DR screening and drusen detection for AMD detection were performed in the green channel of fundus images.
  • the performance of the disclosed framework at detecting microaneurysms was first evaluated at a lesion level, on a set of 100 deidentified images manually segmented by a clinician expert (MDA). Twenty-six of these images were taken from a DR screening program in the Netherlands; these images were originally captured using a Canon CR5 nonmydriatic 3CCD camera at 45 degree field of view; and 74 were obtained from a retinal referral clinic; these images were captured using a Topcon TRC-50 at 45 degree field of view.
  • MDA clinician expert
  • the size of the images is 768X576 pixels and the field of view is
  • FIGS. 7A-7B illustrate that similar sets of filters can be obtained regardless of the method that is used to collect the set of reference samples.
  • FIG. 8 illustrates an exemplary microaneurysm detection (e.g., detection image 804) by the disclosed mathematical model based method in the macular region of a JPEG-compressed image 802 used in the EyeCheck screening program.
  • each vertex in FIG. 10A from left to right corresponds to one curve in FIG. 10B from "299 samples” to "61373 samples.”
  • each vertex in FIG. IOC from left to right corresponds to one curve in FIG. 10D from "5 PCSs" to "200 PCs.”
  • a set of 15 images of patients with AMD, all containing drusen, as confirmed by a clinician expert (MDA), and 15 images containing flecks as confirmed to have Stargardt's disease by positive genetic testing for the ABCA4 haplotype were selected. Images were acquired with a Zeiss FF3 30-degree camera and recorded on Kodachrome film (ASA 100). Film-based images were scanned at 3456 x 2304 pixels, 12 bit depth, using a 3 color channel CCD scanner.
  • an optimal filter framework for detecting objects in retinal images can comprise a design of a set of filters optimally representing the typical target lesions, as well as the negative lesion confounders, e.g. lesions that are similar looking but not the target lesions, and the positive lesion confounders, e.g. lesions that are easily missed because the lesions have specific properties that are substantially different from the standard case.
  • image samples representative of the three types of lesions are collected and the filters are defined as the principal components of these samples.
  • the disclosed methods and systems can ensure the
  • the optimal filter set is used as a reference to classify new image samples. Because the feature space spanned by these filters is compact, the classification procedure, based on an approximate nearest neighbor search, is fast. This property can be relevant in medical applications such as the instantaneous assessment of retinal diseases.
  • the optimal filter sets obtained varied little with the method through which the optimal filter sets were obtained.
  • the representative samples from which the optimal filters are derived
  • the representative samples are obtained from a mathematical model designed by a non-clinician domain expert, i.e. expert driven, or by directly sampling a representative set of images at locations annotated by one or more expert clinicians, i.e. data-driven, following the sampling procedure described herein.
  • an optimal filter framework with the set of domain optimal filters obtained directly from image samples, and only relying on an expert clinician input for annotating, performs close (for DR detection) or equivalent (for drusen versus flecks differentiation) to a system in which the filters require the design of a model by a non-clinician expert.
  • the data-driven approach is expected to generalize well to other retinal image analysis problems, and possibly, to other medical image analysis problems, because only limited clinician expert annotation effort is required, and no further domain knowledge is required on the designer of the framework.
  • microaneurysm detection shows that, if expert knowledge is available, it is possible to push performance further through mathematical modeling. Moreover, should the method be applied to a new dataset, the value of a few parameters could be changed, while in the data-driven approach additional annotation might have been required.
  • PCA - may be superior, and at least equal, to the set of filters obtained through feature selection on a Gaussian derivative or Gabor wavelet or filterbank.
  • a small set Gabor or Gaussian derivative filters may be optimal, but that for specific object detection problems, a locally optimal, better performing filter set exists, and can be found in this space through the framework described herein.
  • a total of 80 de-identified retinal color images from 80 subjects from a diabetic retinopathy screening program in the Netherlands were created by registration of 2 separate images per patients.
  • the size of images was approximately 800X650 pixels in JPEG format.
  • the number of images in a first training set was 50, and the number of images in a second training set was 10.
  • the number of test images was 20.
  • an expert annotated a set of centers of bifurcations in the training sets and test images and a retinal specialist reviewed these annotations for correctness. After annotation, all bifurcation were samples at a scale ⁇ of 25X25 pixels.
  • the information of bifurcation samples are concentrated in the first few number of PCs.
  • the first 30 PCs of red and green channels can be selected respectively for feature selection.
  • 2000 non- bifurcations are selected from each training image so there will be 100000 negative points in the feature space. This number is chosen in the consideration that the ratio of true bifurcation and true non-bifurcation is very large, accordingly, a large amount of negative points can be selected to simulate this situation.
  • the maximal number of optimal filters is set to be 10.
  • the distance D that defines the region of true bifurcation is the square root of 41.
  • the 10 most discriminative filters are 6th PC in red, and 2nd, 3rd, 4th, 5th, 6th, 7th, 18th and 19th PCs in green channels. It is worth noticing that the selected filters depends on the distribution of the reference points and since the negative points are randomly sampled for different reference points feature selection might give different results.
  • results show that a vision machine based retinal bifurcation detection method can be developed based solely on a limited number of expert annotated retinal images with exemplar branching points.
  • the results also show that detection performance of the system, compared to a human expert, on a different dataset, as measured by AUC, is satisfactory.
  • similar approaches can be used to create detection systems for various objects of interest.
  • the systems and methods described herein can be used to detect apples in images of orchards, helicopter landing pads in high altitude reconnaissance images, and cell nuclei in phase contrast images of mitotic fibroblasts.
  • Other images and objects of interest can be processed.
  • the methods described herein can be highly general but can be trained to be highly specific in a specific domain. Any domain can be used and the systems and methods can be trained for any particular object of interest.
  • each of these images comprises a plurality of wavelength bands
  • each ImageORG comprises a plurality of raw images that were combined into a single ImageORG.
  • the methods can comprise obtaining first samples of the intensity distribution of the object of interest in one or more wavelength bands, obtaining second samples of the intensity distribution of confounder objects, at a frequency high enough to affect a performance metric, transforming the first and second samples into an appropriate space, thereby creating Imagetrans, performing dimension reduction on the transformed first and second samples altogether, whereby the dimension reduction of the transformed first and second samples creates an object detector, transforming ImageORG into the appropriate space to obtain Imagetrans, projecting Imagetrans into the space of reduced dimension to obtain Imagereduc, classifying each pixel p in Imagereduc with an appropriate neighborhood p, based on a comparison with the samples obtained, and automatically identifying an object of interest from the abnormal pixels.
  • a method for direct bifurcation detection based on the optimal filter framework can be performed. For example, a set of filters can be generated representing all cases of bifurcations. These filters can be used to generate a feature space for a classifier to distinguish bifurcations and non-bifurcations. This approach can use a minimal number of assumptions, and thus can be based upon training images and expert annotations of bifurcations.
  • the method can comprise training on 60 fundus images and testing on 20 fundus images.
  • Example results can comprise an AUC of 0.883, demonstrating that the method can compare well to a human expert.
  • retinal vessel bifurcations can help monitor and diagnose retinopathies such as vessel occlusion, hypertension and/or diabetes.
  • Retinal vessel bifurcations are also important structures for vessel segmentation and analysis of the retinal vessel trees and usually used as landmarks for image registration because of the geometrical stability of the vessel bifurcations. Much work has been done for detection of bifurcations so far. However, most of these approaches use the geometrical and topological information of the vessel tree to detect bifurcations and the successful detection depends on the quality of the underlying vessel
  • the methods of the present disclosure contemplate an optimal filter framework comprising a formalized image analysis approach that allows an unbiased, almost parameterless automated system for object-background separation.
  • the filter framework can build an object (e.g., vessel bifurcation) detector based on a limited number of expert-marked exemplar objects. This approach is optimal in comparison to model-based and filterbank-based approaches for the separation of object and background.
  • FIG. 13 A illustrates an overview of an exemplary filter framework
  • FIG. 13B illustrates an example fundus image with annotated bifurcations labeled by dark circles.
  • FIG. 13C illustrates bifurcation samples (1st column), and in red (2nd column), green (3rd column) and blue (4th column) channels.
  • the optimal filter framework can consist of a training and a test stage as shown in FIG. 13A.
  • a set of fundus images S with expert-annotated bifurcations indicated as in Fig. 13B, can be used to build a set of optimal filters and to optimize the classification.
  • a training set S can be divided into two sets, S I and S2.
  • SI can be used to build the filter set F, while S2 can be used to perform feature selection on F, resulting in selected filters F', from which the feature space can be formed and reference points can be generated. Then, a classifier can be trained to detect bifurcations for a fundus image in terms of generating a soft-labeled map indicating the probability of every pixel to be a bifurcation. During the test stage, a probability map can be generated for every test image in T, and the performance can be evaluated and compared to the ground truth. To maintain generality and robustness, SI, S2 and T can comprise representative images from the real data that are kept entirely separated. The following parts discuss the details of the optimal filter framework, including the sample generation, filter generation, bifurcation detection, feature selection, and the evaluation metric.
  • methods can comprise generating samples.
  • methods can comprise annotating the center of bifurcations in SI and S2 to obtain bifurcation samples as subimages at a specific scale ⁇ (measured in pixels).
  • measured in pixels
  • each sample can be rotated, for example, by 0 degrees, 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees, 270 degrees, and 315 degrees.
  • a circular mask can be superimposed on the rotated sample image to generate circular samples.
  • the sample images can be decomposed into red, green and blue channels (FIG. 13C), and samples in these channels can be considered as independent sample sets.
  • methods can comprise generating filters. For example,
  • PCA principal component analysis
  • PCs can comprise an orthogonal transformation converting the original data into a new coordinate system so that the features are represented by a number of linearly uncorrelated variables called principal components (PCs).
  • every sample of subimage can be a single observation.
  • PCs can be obtained representing any instance of that sample set.
  • the first few PCs can account for the majority variance of the original data. If the first M PCs are retained such that the cumulative variance (CV) of the PCs are the majority of the total variance, the first M PCs can be used to
  • the PCs can be selected as the optimal filters.
  • the first component extracted in a principal component analysis can account for a maximal amount of total variance in the observed variables. Under typical conditions, this means that the first component can be correlated with at least some of the observed variables. It may be correlated with many.
  • the second component extracted can have two important characteristics. First, this component can account for a maximal amount of variance in the data set that was not accounted for by the first component. Again under typical conditions, this means that the second component can be correlated with some of the observed variables that did not display strong correlations with component 1. The second characteristic of the second component is that it can be uncorrected with the first component. Literally, if one were to compute the correlation between components 1 and 2, that correlation would be zero.
  • each component accounts for a maximal amount of variance in the observed variables that was not accounted for by the preceding components, and is uncorrelated with all of the preceding components.
  • a principal component analysis proceeds in this fashion, with each new component accounting for progressively smaller and smaller amounts of variance (this is why only the first few components are usually retained and interpreted).
  • the resulting components can display varying degrees of correlation with the observed variables, but can be completely uncorrelated with one another.
  • Samples from SI can be projected into the initial feature space by
  • bifurcation samples in training set S I can be projected into the feature space to form the positive reference points.
  • non-bifurcations can be randomly sampled in SI and projected into the feature space to form the negative reference points. While all features can be computed here, in the trained algorithm, this computation can be performed with those filters that span the final, low-dimension feature space, which results from the feature selection described below.
  • the methods can comprise performing bifurcation detection.
  • a classifier can be trained for the detection of bifurcations. For example, an image under detection can be transferred into the feature space by being convolved with filters so each pixel corresponds to a query point.
  • the K-Nearest Neighbor algorithm k- NN or variation thereof can be used to classify these query points.
  • classification can be applied so the probability of the query point P to be a bifurcation is n/k, where k is the number of neighbors considered and n is the number of positive points among these neighbors.
  • the methods can comprise performing feature selection.
  • the dimension of the feature space can comprise the number of filters (e.g., PCs).
  • the feature space is a high-dimension space.
  • the feature space can be defined by the set of optimal filters, and thus, the features and filters can be interchangeable.
  • feature selection can be applied to lower the dimension.
  • feature selection can comprise a Sequential Forward Selection (SFS) algorithm or variation thereof.
  • SFS Sequential Forward Selection
  • the SFS can start with a candidate feature pool and, for each iteration, select out the best remaining feature according to a criterion, until a maximum number of features is reached or the criterion value starts to decline.
  • Training set S2 can be used to generate the probability maps using algorithms of feature selection described above.
  • probability maps can be generated with the currently selected filters at every iteration.
  • the criterion guiding the selection can comprise the AUC of the ROC curve computed over the whole set of probability maps, as explained below.
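The selection loop itself could be sketched as below, with the criterion passed in as a callable that, for instance, builds the S2 probability maps for a candidate filter set and returns their AUC. All names and the default of 10 features are illustrative assumptions.

```python
import numpy as np

def sequential_forward_selection(candidates, evaluate, max_features=10):
    """Greedy SFS: at each iteration, add the remaining filter that
    most improves the criterion; stop at max_features or when the
    criterion value starts to decline.

    candidates: list of candidate filters (the PC pool); consumed
    evaluate:   callable mapping a filter list to a criterion value
    """
    selected, best_so_far = [], -np.inf
    while candidates and len(selected) < max_features:
        # Score each remaining candidate when added to the current set
        scores = [evaluate(selected + [c]) for c in candidates]
        i = int(np.argmax(scores))
        if scores[i] <= best_so_far:  # criterion declined: stop early
            break
        best_so_far = scores[i]
        selected.append(candidates.pop(i))
    return selected
```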
  • the methods can be evaluated according to an evaluation metric.
  • An example metric to give the AUC can be pixel-based.
  • a bifurcation region can be defined with radius R centered at every annotated bifurcation pixel. Every pixel of a binary image obtained by thresholding a probability map can be classified as true positive (TP), false positive (FP), true negative (TN) or false negative (FN) depending on whether the pixel falls within a bifurcation region or outside.
  • n probability maps can be generated with the selected filters.
  • the n probability maps can be thresholded with different values to generate different pairs of specificities and sensitivities. Then, the overall ROC can be plotted, and the AUC can be calculated.
  • the data can be highly skewed because the number of bifurcation pixels is small compared to the whole image.
  • as a result, the ROC can be insensitive to changes in specificity.
  • positive points can be weighted with a self-defined factor F larger than 1, thereby enhancing the influence of positive points so the ROC curve is more sensitive to changes in sensitivity and specificity.
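A pixel-based, positively weighted AUC along these lines could be sketched as follows; the weight value, threshold grid, and all names are illustrative assumptions rather than values from the disclosure.

```python
import numpy as np

def weighted_pixel_auc(prob_maps, bif_masks, weight_f=100.0, n_thresh=100):
    """Pixel-based ROC/AUC over a set of probability maps.

    prob_maps: list of 2-D probability maps
    bif_masks: matching boolean masks, True inside the radius-R
               bifurcation regions around annotated pixels
    weight_f:  factor F > 1 up-weighting positive pixels so the ROC
               stays sensitive despite the heavy class skew
    """
    scores = np.concatenate([p.ravel() for p in prob_maps])
    labels = np.concatenate([m.ravel() for m in bif_masks])
    weights = np.where(labels, weight_f, 1.0)
    tpr, fpr = [], []
    for t in np.linspace(0.0, 1.0, n_thresh):
        pred = scores >= t
        tp = np.sum(weights * (pred & labels))
        fn = np.sum(weights * (~pred & labels))
        fp = np.sum(weights * (pred & ~labels))
        tn = np.sum(weights * (~pred & ~labels))
        tpr.append(tp / (tp + fn))
        fpr.append(fp / (fp + tn))
    # Integrate TPR over FPR with the trapezoid rule to get the AUC
    order = np.argsort(fpr)
    fpr, tpr = np.array(fpr)[order], np.array(tpr)[order]
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```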
  • the data can comprise 80 fundus images, each created by registration of one image pair as described herein.
  • the size of images can be about 800 x 650 pixels in JPEG format.
  • the numbers of images in S1, S2 and T can comprise 50, 10 and 20, respectively.
  • the centers of bifurcations can be manually annotated and a retinal specialist can review the annotations for correctness.
  • Bifurcations can be sampled at a scale of 25 x 25 pixels. In the training stage, 10 bifurcations can be randomly sampled per image so that 500 samples are created. Rotating these 500 samples can result in 4000 samples in total.
  • FIG. 14 illustrates example filters based on the example data. Specifically, FIG. 14 shows the first 12 filters in three channels (e.g., first row is red channel, second row is green channel, and third row is blue channel) based on the example data.
  • FIG. 15 is an example plot of the cumulative variance (CV) of three channels (e.g., red, green, and blue) with respect to the number of PCs. As illustrated in FIG. 15, the lowest CV represents the blue channel, while the two higher CVs represent the red channel (the highest) and the green channel (the second highest).
  • the slower increase of the blue channel can be explained by the fact that the blue channel comprises little information other than noise.
  • the first 30 PCs of red and green channels can be selected for feature selection.
  • 2000 non-bifurcations can be randomly selected per training image, generating a total of 100,000 negative points.
  • Ten filters can be selected after the feature selection to build a 10-D feature space.
  • the radius R of the bifurcation region can be the square root of 41 (about 6.4) pixels, according to the vessel width in the images.
  • the AUC can be 0.883.
  • FIG. 16A illustrates an ROC curve of test images with 10 selected features.
  • FIG. 16B illustrates an example probability map of example data, and FIG. 16C illustrates vein bifurcations detected by the methods described herein. Specifically, FIG. 16C illustrates a binary map overlapped with the original image used as example data. Bifurcations are labeled with circles; white pixels are TPs and black pixels are FPs. These figures indicate that this method finds most of the bifurcations. Most of the false positives occurred within and near the optic disc.
  • the methods can comprise an optimal filter-based retinal bifurcation detection method.
  • the optimal filter-based retinal bifurcation detection method can be based on a limited number of annotated retinal images with exemplar branching samples. Compared to other methods, this approach requires little morphological information about the target and has few parameters: the scale of the sample, the numbers of positive and negative samples, and a classifier. Furthermore, the scale parameter might be eliminated in a multiscale approach.
  • the methods can be implemented in an automated retinal vessel bifurcation detection system based on very limited expert contribution. This method was tested on 20 fundus images, resulting in an overall AUC of the ROC curve of 0.883, which is satisfactory compared to a human expert.
  • FIG. 17 is a flowchart illustrating an exemplary method of analyzing an image.
  • the feature can comprise a bifurcation.
  • the feature can comprise a blood vessel bifurcation.
  • the at least one image can comprise one or more fundus images.
  • training images can be generated from the at least one image.
  • each training image can be based on a respective section of the at least one image.
  • the training images can comprise positive images having the feature and negative images without the feature.
  • one or more of the respective sections of the at least one image can be reproduced as a plurality of images rotated at different angles.
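A plausible sketch of this rotation step follows; eight 45-degree orientations are an assumption consistent with the 500-to-4000 sample counts reported above, since the disclosure does not state the angles used.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_by_rotation(patches, angles=(0, 45, 90, 135, 180, 225, 270, 315)):
    """Reproduce each sample patch at several orientations.

    patches: iterable of 2-D patch arrays
    Returns a stacked array of len(patches) * len(angles) patches;
    reshape=False keeps each rotated patch at its original size.
    """
    out = []
    for p in patches:
        for a in angles:
            out.append(rotate(p, angle=a, reshape=False, mode='nearest'))
    return np.stack(out)
```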
  • a feature space can be generated based on the positive images and the negative images.
  • one or more orthogonal filters can be generated from the positive and the negative images.
  • each of the one or more orthogonal filters can comprise an image based on one or more aspects indicative of at least one of the positive images or the negative images.
  • optimal filters can be generated from the positive images and the negative images. The optimal filters can comprise the highest ranked filters according to each filter's relevance in identifying the one or more features.
  • the optimal filters can be generated by performing a Sequential Forward Selection algorithm.
  • the feature can be identified in one or more unclassified images based upon the feature space. For example, each pixel of the one or more unclassified images can be classified based on a K-Nearest Neighbor algorithm.
  • a probability image can be generated.
  • the probability image can comprise probability pixels.
  • each of the probability pixels can indicate a respective probability that a corresponding classified pixel indicates the feature.
  • As illustrated in FIG. 18, provided are methods of analyzing an image, comprising receiving an image at 1802, transferring the image into a feature space at 1804, and classifying one or more objects of interest in the transferred image as a bifurcation at 1806.
  • the image can comprise one or more fundus images.
  • the methods can further comprise generating a filter set for transferring the image into the feature space.
  • the methods can still further comprise generating the feature space with one or more positive reference points and one or more negative reference points.
  • the filter set can comprise a plurality of principal components, wherein the plurality of principal components comprise a cumulative variance that exceeds a threshold.
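A minimal sketch of choosing the number of PCs this way, using the per-component variance fractions from a PCA (the 0.95 threshold is an illustrative assumption):

```python
import numpy as np

def n_pcs_for_cumulative_variance(explained, threshold=0.95):
    """Smallest number of leading PCs whose cumulative variance
    exceeds `threshold`; `explained` holds per-PC variance fractions."""
    cv = np.cumsum(explained)
    idx = int(np.searchsorted(cv, threshold))  # first index with cv >= threshold
    return min(idx + 1, len(cv))
```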
  • Transferring the image into a feature space can comprise convolving the image with filters so each pixel of the image corresponds to a query point in the feature space.
  • Classifying one or more objects of interest in the transferred image as a bifurcation can comprise applying a classifier trained for the detection of bifurcations.
  • the methods can further comprise generating a probability image comprising probability pixels, wherein each of the probability pixels indicates a respective probability that a corresponding classified pixel indicates the one or more objects of interest.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to systems and methods for image analysis. An exemplary method of the present invention can include receiving one or more images comprising one or more annotations indicating a feature. The method can include generating training images from the one or more images. Each training image is based on a corresponding section of the one or more images. The training images can include positive images that include the feature and negative images that do not include the feature. The method can include generating a feature space based on the positive images and the negative images. The method can further include identifying the feature in one or more unclassified images based on the feature space.
PCT/US2014/014298 2010-12-07 2014-01-31 Methods and systems for vessel bifurcation detection WO2014158345A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/764,926 US20150379708A1 (en) 2010-12-07 2014-01-31 Methods and systems for vessel bifurcation detection

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361759201P 2013-01-31 2013-01-31
US61/759,201 2013-01-31
US201313992552A 2013-06-07 2013-06-07
US13/992,552 2013-06-07

Publications (1)

Publication Number Publication Date
WO2014158345A1 true WO2014158345A1 (fr) 2014-10-02

Family

ID=51624996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/014298 WO2014158345A1 (fr) 2010-12-07 2014-01-31 Methods and systems for vessel bifurcation detection

Country Status (1)

Country Link
WO (1) WO2014158345A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262737A1 (en) * 2016-03-11 2017-09-14 Magic Leap, Inc. Structure learning in convolutional neural networks
US10275902B2 (en) 2015-05-11 2019-04-30 Magic Leap, Inc. Devices, methods and systems for biometric user recognition utilizing neural networks
CN111861999A (zh) * 2020-06-24 2020-10-30 北京百度网讯科技有限公司 Method, apparatus, electronic device, and readable storage medium for detecting arteriovenous crossing compression sign
CN113826140A (zh) * 2019-06-12 2021-12-21 布莱恩欧米克斯有限公司 Angiography data analysis
US11775836B2 (en) 2019-05-21 2023-10-03 Magic Leap, Inc. Hand pose estimation
US11790523B2 (en) 2015-04-06 2023-10-17 Digital Diagnostics Inc. Autonomous diagnosis of a disorder in a patient from image analysis

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090148024A1 (en) * 2007-11-27 2009-06-11 Siemens Medical Solutions Usa, Inc. System and Method for Blood Vessel Bifurcation Detection in Thoracic CT Scans
WO2012078636A1 (fr) * 2010-12-07 2012-06-14 University Of Iowa Research Foundation Optimal, user-friendly, object background separation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090148024A1 (en) * 2007-11-27 2009-06-11 Siemens Medical Solutions Usa, Inc. System and Method for Blood Vessel Bifurcation Detection in Thoracic CT Scans
WO2012078636A1 (fr) * 2010-12-07 2012-06-14 University Of Iowa Research Foundation Optimal, user-friendly, object background separation

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11790523B2 (en) 2015-04-06 2023-10-17 Digital Diagnostics Inc. Autonomous diagnosis of a disorder in a patient from image analysis
US10636159B2 (en) 2015-05-11 2020-04-28 Magic Leap, Inc. Devices, methods and systems for biometric user recognition utilizing neural networks
US11216965B2 (en) 2015-05-11 2022-01-04 Magic Leap, Inc. Devices, methods and systems for biometric user recognition utilizing neural networks
US10275902B2 (en) 2015-05-11 2019-04-30 Magic Leap, Inc. Devices, methods and systems for biometric user recognition utilizing neural networks
US10963758B2 (en) 2016-03-11 2021-03-30 Magic Leap, Inc. Structure learning in convolutional neural networks
US20170262737A1 (en) * 2016-03-11 2017-09-14 Magic Leap, Inc. Structure learning in convolutional neural networks
US10255529B2 (en) 2016-03-11 2019-04-09 Magic Leap, Inc. Structure learning in convolutional neural networks
US11657286B2 (en) 2016-03-11 2023-05-23 Magic Leap, Inc. Structure learning in convolutional neural networks
WO2017156547A1 (fr) * 2016-03-11 2017-09-14 Magic Leap, Inc. Apprentissage de structure dans des réseaux neuronaux à convolution
US11775836B2 (en) 2019-05-21 2023-10-03 Magic Leap, Inc. Hand pose estimation
CN113826140A (zh) * 2019-06-12 2021-12-21 布莱恩欧米克斯有限公司 Angiography data analysis
CN113826140B (zh) * 2019-06-12 2024-02-02 布莱恩欧米克斯有限公司 Angiography data analysis
CN111861999A (zh) * 2020-06-24 2020-10-30 北京百度网讯科技有限公司 Method, apparatus, electronic device, and readable storage medium for detecting arteriovenous crossing compression sign

Similar Documents

Publication Publication Date Title
US11935235B2 (en) Diagnosis of a disease condition using an automated diagnostic model
US20150379708A1 (en) Methods and systems for vessel bifurcation detection
Wang et al. Blood vessel segmentation from fundus image by a cascade classification framework
dos Santos Ferreira et al. Convolutional neural network and texture descriptor-based automatic detection and diagnosis of glaucoma
Zhang et al. Retinal vessel delineation using a brain-inspired wavelet transform and random forest
Akbar et al. Automated techniques for blood vessels segmentation through fundus retinal images: A review
Morales et al. Retinal disease screening through local binary patterns
Tang et al. Splat feature classification with application to retinal hemorrhage detection in fundus images
Pathan et al. Automated segmentation and classifcation of retinal features for glaucoma diagnosis
Phan et al. Automatic Screening and Grading of Age‐Related Macular Degeneration from Texture Analysis of Fundus Images
Melo et al. Microaneurysm detection in color eye fundus images for diabetic retinopathy screening
Garcia et al. Detection of hard exudates in retinal images using a radial basis function classifier
WO2014158345A1 (fr) Methods and systems for vessel bifurcation detection
Gopalakrishnan et al. Itl-cnn: Integrated transfer learning-based convolution neural network for ultrasound pcos image classification
Pendekal et al. An ensemble classifier based on individual features for detecting microaneurysms in diabetic retinopathy
Bouacheria et al. Automatic glaucoma screening using optic nerve head measurements and random forest classifier on fundus images
Sindhusaranya et al. Retinal blood vessel segmentation using root Guided decision tree assisted enhanced Fuzzy C-mean clustering for disease identification
Gupta et al. Comparative study of different machine learning models for automatic diabetic retinopathy detection using fundus image
US20210209755A1 (en) Automatic lesion border selection based on morphology and color features
Badeka et al. Evaluation of LBP variants in retinal blood vessels segmentation using machine learning
Verma et al. Machine learning classifiers for detection of glaucoma
Khalid et al. FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning
Narhari et al. Automated diagnosis of diabetic retinopathy enabled by optimized thresholding-based blood vessel segmentation and hybrid classifier
Escorcia-Gutierrez et al. A feature selection strategy to optimize retinal vasculature segmentation
US20240177305A1 (en) Diagnosis of a disease condition using an automated diagnostic model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14773763

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14764926

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14773763

Country of ref document: EP

Kind code of ref document: A1