US20150363672A1 - Method and system of classifying medical images - Google Patents


Info

Publication number
US20150363672A1
US20150363672A1 (application US14/833,182)
Authority
US
United States
Prior art keywords
image
computerized method
visual
category model
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/833,182
Inventor
Hayit Greenspan
Jacob Goldberger
Uri AVNI
Eli KONEN
Michal Sharon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ramot at Tel Aviv University Ltd
Bar Ilan University
Bar Ilan Research and Development Co Ltd
Tel HaShomer Medical Research Infrastructure and Services Ltd
Original Assignee
Ramot at Tel Aviv University Ltd
Bar Ilan Research and Development Co Ltd
Tel HaShomer Medical Research Infrastructure and Services Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ramot at Tel Aviv University Ltd, Bar Ilan Research and Development Co Ltd, Tel HaShomer Medical Research Infrastructure and Services Ltd filed Critical Ramot at Tel Aviv University Ltd
Priority to US14/833,182
Assigned to RAMOT AT TEL-AVIV UNIVERSITY LTD. reassignment RAMOT AT TEL-AVIV UNIVERSITY LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AVNI, URI, GREENSPAN, HAYIT
Assigned to TEL HASHOMER MEDICAL RESEARCH INFRASTRUCTURE AND SERVICES LTD. reassignment TEL HASHOMER MEDICAL RESEARCH INFRASTRUCTURE AND SERVICES LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KONEN, ELI, SHARON, MICHAL
Assigned to BAR-ILAN UNIVERSITY reassignment BAR-ILAN UNIVERSITY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GOLDBERGER, JACOB
Publication of US20150363672A1

Classifications

    • G06K9/6267
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/503 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of the heart
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06K9/46
    • G06K9/4661
    • G06K9/6218
    • G06K9/6256
    • G06T3/0056
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/10 Selection of transformation methods according to the characteristics of the input images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06K2009/4666
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30048 Heart; Cardiac
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Definitions

  • the present invention relates to analysis of medical images and, more particularly, but not exclusively to automatic analysis and classification of medical images depicting an organ or a human body system.
  • Systems and devices for visualizing the inside of living organisms are among the most important medical developments in the last thirty years.
  • Systems like X-ray scanners, computerized tomography (CT) scanners and magnetic resonance imaging (MRI) scanners allow physicians to examine internal organs or areas of the body that require a thorough examination.
  • the visualizing scanner outputs a medical image, such as a cross-sectional image, or a sequence of computerized cross-sectional images of a certain body organ, which is then diagnosed by radiologists and/or other physicians.
  • the medical images are transferred to a picture archiving communication system (PACS) before being accessed by the radiologists.
  • the PACS is installed on one or more computers, which are dedicated to storing, retrieving, distributing and presenting the stored 3D medical images.
  • the 3D medical images are stored in an independent format.
  • the most common format for image storage is digital imaging and communications in medicine (DICOM).
  • a method of generating a category model for classifying medical images comprises providing a plurality of medical images each categorized as one of a plurality of categorized groups, generating an index of a plurality of visual words according to a distribution of a plurality of local descriptors in each image, modeling a category model mapping a relation between each visual word and at least one of the plurality of categorized groups according to the index, and outputting the category model for facilitating the categorization of an image based on local descriptors thereof.
  • the method further comprises dividing the plurality of medical images among the plurality of categorized groups.
  • the index comprises less than 700 visual words.
  • the plurality of medical images are part of a training set having more than 10,000 medical images.
  • the generating comprises clustering the plurality of local descriptors in a plurality of clusters, the plurality of visual words being defined according to the plurality of clusters.
  • the clustering is performed according to a principal component analysis (PCA).
  • the modeling is performed using a support vector machine (SVM) training procedure.
  • the SVM training procedure is a multi-class SVM with a radial basis function (RBF) kernel.
  • the plurality of medical images are provided from a picture archiving communication system (PACS).
  • the plurality of categorized groups define a plurality of pathologies.
  • the method further comprises automatically categorizing the plurality of medical images.
  • a method of classifying a medical image using a category model comprises providing a category model which maps a plurality of visual-words in a space, each visual-word being associated with at least one of a plurality of image categories, receiving an examined medical image, identifying a group of the plurality of visual-words in the examined medical image, using the category model to match the group with an image category of the plurality of image categories, and outputting the image category.
  • the outputting comprises presenting the image category in a client terminal used to provide the examined medical image.
  • identifying is performed without segmenting the examined medical image.
  • identifying is performed without registering the examined medical image.
  • the method further comprises updating the category model according to the matching.
  • a medical image analysis system of classifying a medical image using a category model comprises a repository which stores a category model mapping a plurality of visual-words in a space, each visual-word being associated with at least one of a plurality of image categories, an input unit which receives an examined medical image, a categorization module which identifies a group of the plurality of visual-words in the examined medical image and uses the category model to match the group with an image category of the plurality of image categories, and a presentation unit which presents the image category in response to the receiving of the examined medical image.
  • Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • a network connection is provided as well.
  • a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • FIG. 1 is a flowchart of a method of generating a category model for classifying medical images, according to some embodiments of the present invention
  • FIG. 2 is a method of classifying a medical image using a category model, for example as generated according to FIG. 1 , according to some embodiments of the present invention
  • FIG. 3 is a schematic illustration of a medical image analysis system of classifying a medical image using a category model, for example as generated according to FIG. 1 , according to some embodiments of the present invention
  • FIG. 4A is a distribution of images across categories
  • FIG. 4B depicts a graph which illustrates the effect of dictionary size on the accuracy of categorization using a category model generated as depicted in FIG. 1 , according to some embodiments of the present invention
  • FIG. 4C depicts a graph which illustrates the effect of dictionary size on the accuracy of categorization when the image patches have between 5 and 8 feature components, according to some embodiments of the present invention
  • FIG. 5 is a graph mapping the relation between the weight of spatial features in x-axis and the classification accuracy in y-axis where the bars show mean and standard deviation of 20 experiments;
  • FIG. 6 is a set of images where the first two images are the query images and the following images (left to right, top to bottom) are the retrieval results;
  • FIG. 7 is a graph depicting the relation between the precision shown for first 5, 10, 15, 20 and 30 returned images and the number of images.
  • the present invention relates to analysis of medical images and, more particularly, but not exclusively to automatic analysis and classification of medical images depicting an organ or a human body system.
  • a category model which is used for classifying medical images.
  • the method is based on an analysis of a plurality of medical images, such as X-ray scans and volumetric scan images.
  • Each medical image is categorized, manually and/or automatically, as one of a plurality of categorized groups, for example according to visual characteristic of one or more pathologies.
  • This allows generating an index, a dictionary, of visual words, which are patterns of salient local image patches.
  • the dictionary is generated according to a distribution of a plurality of local descriptors in each image.
  • a category model mapping a relation between each visual word and one or more of the plurality of categorized groups is modeled according to the index.
  • the category model may be provided, for example sent, for facilitating the categorization of an image based on local descriptors thereof.
  • a category model which is outlined above and described below.
  • This method is based on a category model which maps a plurality of visual-words in a space where each visual-word is associated with one or more image categories.
  • the category model may be locally stored in a computing unit that implements the method or in a remote and/or external database.
  • an examined medical image is received and a group of visual-words which are documented in the category model are extracted from the examined medical image, optionally using an index of visual words, such as the aforementioned dictionary. This allows using the category model to match the group with an image category of the plurality of image categories and outputting the image category.
  • FIG. 1 is a flowchart of a method of generating a category model for classifying medical images, according to some embodiments of the present invention.
  • a training set having a plurality of medical images is received.
  • a medical image means, for example, an X-ray scan image, a computerized tomography (CT) scan image, a magnetic resonance imaging (MRI) scan image, or a positron emission tomography (PET)-CT scan image.
  • the images are taken from a medical database, such as PACS or radiology information system (RIS).
  • the number of medical images in the training set ranges from a few hundred to a few hundred thousand, or even more.
  • the training set includes about 1200 medical images or about 65,000 medical images as exemplified below.
  • the number of images changes according to the number of possible pathologies which are categorized in the category model.
  • a ratio of about 2000 images per category is maintained.
  • local descriptors which may be referred to herein as image patches, are identified in each one of the provided medical images.
  • the local descriptors are repeatable multidimensional features so that if there is a transformation between two instances of an object, corresponding points are detected and substantially identical descriptor values are obtained around each.
  • each image patch is represented by a multidimensional record.
  • the descriptors are resistant to geometric and illumination variations, for example as described in any of the following: T. Lindeberg, Scale-space theory in computer vision, Kluwer Academic Publishers, 1994; D. G. Lowe, Object recognition from local scale-invariant features, ICCV (International Conference on Computer Vision), 1999; J. Matas, J. Burianek, and J. Kittler, Object recognition using the invariant pixel-set signature, BMVC (British Machine Vision Conference), 2000; and F. Schaffalitzky and A. Zisserman, Viewpoint invariant texture matching and wide baseline stereo, ICCV, 2001, which are incorporated herein by reference.
  • the image patches are acquired using one or more patch sampling strategies such as random sampling and/or grid sampling, optionally with spacings.
  • the size of a patch is 9×9 pixels.
  • image patches along the border of the image are ignored.
  • the intensity values within an image patch are normalized to have zero mean and unit variance. This provides local contrast enhancement and augments the information within the image patches.
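The patch sampling and normalization steps just described can be sketched as follows. This is a minimal illustration in Python with NumPy; the 9×9 patch size and the filtering of border and single-intensity patches follow the description above, while the function name and the grid stride are illustrative assumptions:

```python
import numpy as np

def extract_patches(image, size=9, stride=9):
    """Grid-sample square patches from a 2-D grayscale image.

    Patches that would cross the image border are skipped, uniform
    (e.g. all-black) patches are discarded, and every kept patch is
    normalized to zero mean and unit variance, as described above.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            p = image[y:y + size, x:x + size].astype(float)
            if p.std() == 0:                 # single intensity: no texture
                continue
            p = (p - p.mean()) / p.std()     # local contrast enhancement
            patches.append(p.ravel())
    return np.array(patches)
```

Each returned row is one flattened, normalized 81-dimensional patch, ready for the dimensionality-reduction step described next.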
  • image patches that have a single intensity value of black are ignored.
  • the data dimensionality, and with it the computational complexity and the level of noise, may be reduced using a procedure such as principal component analysis (PCA), principal component regression (PCR) and/or partial least squares (PLS) regression.
  • a resultant PCA component does not contain information regarding the average intensity of the respective image patch.
  • This average value contains information that discriminates between the dark background and the bright tissue and may be used to distinguish between tissue types.
  • the mean gray level of the image patch may be taken as an additional feature.
  • for each image patch, the coordinates (x, y) are added to the respective image patch multidimensional record as two additional features, yielding, for example, an overall ten-dimensional image patch representation.
  • the addition of the spatial coordinates to the image patch multidimensional record introduces spatial information into the image representation.
  • the relative feature weights in the proposed system are tuned experimentally on a test/cross-validation set, for example as described in the example below.
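The ten-dimensional descriptor construction above (7 PCA coefficients, plus the mean gray level, plus the weighted (x, y) coordinates) might be sketched as below. The function name and the `spatial_weight` parameter are illustrative stand-ins for the experimentally tuned feature weights:

```python
import numpy as np

def build_descriptors(patches, coords, mean_grays,
                      n_components=7, spatial_weight=1.0):
    # Project the normalized patches onto their top principal components.
    X = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt: PCA axes
    pca_coeffs = X @ Vt[:n_components].T              # (n_patches, 7)
    # Append the mean gray level (lost by normalization/PCA) and the
    # spatial coordinates, weighted as tuned on a validation set.
    mg = np.asarray(mean_grays, dtype=float)[:, None]      # (n_patches, 1)
    xy = spatial_weight * np.asarray(coords, dtype=float)  # (n_patches, 2)
    return np.hstack([pca_coeffs, mg, xy])                 # (n_patches, 10)
```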
  • a dataset which documents the image patches is generated for each image in the training set.
  • the dataset is optionally a multidimensional record.
  • a dictionary is generated according to the image patches.
  • some or all of the images are selected.
  • the image patches of the selected images are clustered in a plurality of clusters distributed in a feature space, which may be referred to herein as an image patch space.
  • Each cluster is defined in a different subspace which may be referred to herein as visual word, for example using iterative square error partitioning and/or hierarchical technique.
  • the visual words form an index or a codebook, referred to herein as a dictionary, of the image patches in a feature space.
  • the number of visual words is limited to a predefined amount.
  • the predefined amount is 700 or less, for example as shown in FIGS. 4B and 4C and described below.
  • each visual word includes 7 PCA coefficients, for example as described above.
  • a k-means algorithm is used to cluster the image patches.
  • This algorithm proceeds by iterated assignments of image patches to their closest cluster centers (visual words) and re-computation of the cluster centers (updated visual words), see R. O. Duda, P. E. Hart, D. G. Stork, Pattern classification, John Wiley & Sons, 2000, which is incorporated herein by reference. Note that this dictionary development step is done in an unsupervised mode without any reference to the image categories, such as pathologies.
  • each image is represented as a bag of visual words, namely a dataset of visual words which appears in the image, such as a visual word vector.
  • the visual words are selected according to the image patches which have been identified in each image.
  • the bag of visual words which may be referred to herein as a visual-word vector, contains the presence and/or absence information of each visual word from the dictionary in the image, the count of each visual word (i.e., the number of image patches in the corresponding visual word cluster), or the count weighted by other factors.
  • the visual-word vector is represented as a histogram wherein each bin in the histogram is a visual word index number selected out of the dictionary and generated automatically from the data.
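The dictionary construction and bag-of-visual-words representation described above might be sketched as follows. The use of scikit-learn's k-means and the normalization of the counts to sum to one are implementation assumptions, not part of the original description:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(descriptors, n_words=700):
    """k-means clustering of the image-patch descriptors; each cluster
    center is one visual word. This step is unsupervised: it makes no
    use of the image categories."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)

def bag_of_words(dictionary, descriptors):
    """Visual-word histogram for one image: count how many of its patches
    fall into each word's cluster, then normalize so that images with
    different patch counts remain comparable (an assumed weighting)."""
    words = dictionary.predict(descriptors)
    counts = np.bincount(words, minlength=dictionary.n_clusters)
    return counts / counts.sum()
```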
  • the plurality of medical images are categorized according to one or more pathologies which have been identified as depicted therein.
  • the categorization is optionally performed manually, for example based on a diagnosis by one or more practitioners, such as orthopedic physicians and radiologists.
  • the categorization may be performed automatically, for example using known image classification methods, and/or by an analysis of a diagnosis and/or a textual description that is attached to the image.
  • the categorization may be semi-automatic, for example a combination of automatic textual and/or image classification methods and manual verification by one or more practitioners.
  • Each visual-word vector is categorized according to the image which is related thereto.
  • the categorized visual-word vectors of the categorized images are combined to create a category model.
  • a support vector machine (SVM) training algorithm builds a category model that allows estimating to which one of the categories, if any, a certain medical image which is not from the training set is related.
  • the category model is an SVM model in which the visual-word vectors are represented as points in space, mapped so that the categorized visual-word vectors of the separate categories are divided by a clear gap that is as wide as possible.
  • the SVM training algorithm is a multi-class SVM that is optionally implemented as a series of one-vs-one binary SVMs with a radial basis function (RBF) kernel, for example based on the LIBSVM library, available at http://www.csie.ntu.edu.tw/~cjlin/libsvm/, which is incorporated herein by reference.
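A toy sketch of this training step using scikit-learn, whose `SVC` wraps the LIBSVM library cited above and likewise decomposes the multi-class problem into one-vs-one binary RBF SVMs. The synthetic visual-word vectors and the parameter values are illustrative only:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic 700-bin visual-word vectors for two categories
# (standing in for, e.g., two pathologies).
X = np.vstack([rng.normal(0.0, 1.0, (30, 700)),
               rng.normal(3.0, 1.0, (30, 700))])
y = np.array([0] * 30 + [1] * 30)

# Multi-class RBF SVM; SVC trains one-vs-one binary SVMs internally,
# mirroring the LIBSVM-based procedure described above.
model = SVC(kernel='rbf', C=1.0, gamma='scale',
            decision_function_shape='ovo').fit(X, y)
```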
  • SIFT image features are extracted from each image and used to reduce the visual word extraction time.
  • the category model is outputted, as shown at 107, facilitating the categorization of a new medical image, which is mapped into the space of the category model and predicted to belong to a category based on which side of the gap it falls on.
  • FIG. 2 is a method 200 of classifying a medical image using a category model, for example as generated according to FIG. 1 , according to some embodiments of the present invention.
  • a category model which maps a plurality of categorized visual-words and/or visual-word vectors in space is received.
  • the category model is optionally generated based on a training set of a plurality of exemplary medical images, for example as depicted in FIG. 1 .
  • an examined medical image is received.
  • the examined medical image is uploaded from a PACS and/or a non-transitory storage medium, such as a CD, a DVD, and/or a memory card, to a client terminal which implements the method 200 and/or a client terminal connected to a computing unit which implements the method 200.
  • the client terminal may be a laptop, a smartphone, a cellular phone, a tablet, a personal computer, a personal digital assistant (PDA), and the like.
  • a visual word vector and/or a histogram are generated according to an analysis of the image.
  • the visual word vector represents image patches of the image which correspond with visual words at the space of the category model.
  • the conversion is optionally similar to that described in relation to blocks 102, 103, and 105, where image patches are identified and matched with visual words in the dictionary to generate the respective bag of visual words.
  • the visual words of the examined image are matched with the category model.
  • the match maps the visual words of the vector in the space of the category model.
  • the mapping is to a subspace, or to the proximity of a subspace, which is associated with a certain category mapped in the category model.
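Once a trained category model is available, the matching step above reduces to a single prediction call. This sketch reuses scikit-learn's `SVC` as a stand-in for the category model, with a hypothetical two-word dictionary and two categories:

```python
import numpy as np
from sklearn.svm import SVC

def categorize(model, word_vector):
    """Map one examined image's visual-word vector into the category
    model's space and return the predicted category index."""
    v = np.asarray(word_vector, dtype=float).reshape(1, -1)
    return int(model.predict(v)[0])

# Toy model: two visual words, two categories.
toy_model = SVC(kernel='rbf').fit(
    [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]],
    [0, 0, 1, 1])
```

Because only a histogram lookup and a prediction are needed per image, this matches the observation below that no segmentation or registration is required at classification time.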
  • the categorization of the examined image is outputted, for example presented to the operator of the client terminal, forwarded to a database which hosts the examined image for an association therewith, and/or sent, for example via an email service, to a practitioner which is related to the examined image and/or to the imaged patient.
  • each examined image and/or the related visual word vector and the categorization thereof is used to update the category model.
  • the category model is improved each time it is being used for categorizing a medical image.
  • the update may be performed by rerunning the dictionary generation process and respectively the category model generation process depicted in blocks 103 , 104 , and 106 of FIG. 1 .
  • the method depicted in FIG. 2 allows categorizing medical images, such as two-dimensional (2D) X-ray images and 3D CT or MRI images, without segmentation and/or registration. In such a manner, the computational complexity involved in categorizing each examined image is minimal. Such a method may be implemented on a thin client with limited computational power.
  • As shown in FIG. 3, the medical image analysis system 301 comprises an input module 302 for obtaining or receiving a medical image, a repository 303 for storing the category model, and a categorization module 304 for categorizing the received medical image according to the category model.
  • the input module 302 is designed to receive the medical image either directly from a medical imaging system or indirectly via a content source such as a PACS server, a PACS workstation, a computer network, or a portable memory device such as a DVD, a CD, a memory card, etc.
  • Each received medical image is preferably associated with medical information.
  • Such medical information may comprise the patient age, gender, medical condition, ID, and the like.
  • the medical image may be found in a digital imaging and communications in medicine (DICOM) object.
  • the input module 302 forwards the received medical image to the categorization module 304.
  • the categorization module 304 optionally implements the method depicted in FIG. 2 so as to categorize the received image.
  • the system 301 further includes a presentation unit 305 , such as a display for presenting the categorization performed by the categorization module 304 .
  • the categorization may be displayed in a window or any other graphical user interface (GUI).
  • the medical image analysis system 301 can alert the user in real time whenever a critical pathological categorization has been identified in one of the received medical images.
  • the medical image analysis system 301 includes a model generation module which is set to generate and optionally to update the category model, for example as described above in relation to FIG. 1 and block 207 of FIG. 2.
  • as used herein, the term “consisting essentially of” means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
  • as used herein, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise; for example, “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • the description of embodiments in a range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range.
  • the phrases “ranging/ranges between” a first indicated number and a second indicated number and “ranging/ranges from” a first indicated number “to” a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
  • the distribution of the images across the categories is non-uniform; the most frequent category contains over 19% of the images in the database, while many categories are represented by less than 0.1% of the images.
  • the system parameters have been optimized using the training portion of this set, by running 20 cross-validation experiments trained on 10,000 images and verified on 1000 randomly drawn test images. Each parameter was optimized independently.
  • As FIG. 4B shows, increasing the number of dictionary words proved useful up to 700 words. Beyond this value the running time increased significantly, with no evident improvement in the classification rate.
  • FIG. 4C shows similar classification results in the range of 5 to 8 components, with an average classification rate of approximately 90% using the SVM classifying algorithm. Based on the above experiments, a dictionary size of 700 visual words was selected, where each word contains 7 PCA coefficients.
  • FIG. 5 is a graph mapping the relation between the weight of spatial features in x-axis and the classification accuracy in y-axis where the bars show mean and standard deviation of 20 experiments.
  • the optimal range for the (x, y) coordinates is [−3, 3].
  • the patch variance normalization step improves the classification rate as well: with no normalization, the average classification rate is 88.19%, while with normalization it climbs to 90.9%.
  • Using SIFT features with the SVM classification significantly increased the feature extraction time, and achieved an average of 85.4% classification accuracy, well below the classification rate of raw-patch-based classification.
  • classification of 1,000 previously unseen test images was conducted.
  • the overall classification rate achieved is 89.1%.
  • the total running time of the whole system, training and classification included, was approximately 40 minutes on the full-resolution images and 3 minutes on the ¼-scale images, as measured on a dual quad-core Intel Xeon 2.33 GHz.
  • FIG. 6 depicts a set of images where the first two images are the query images and the following images (left to right, top to bottom) are the retrieval results. The retrieved results were manually judged for relevance by medical experts.
  • FIG. 7 is a graph depicting the relation between the precision, shown for the first 5, 10, 15, 20 and 30 returned images, and the number of images. The precision achieved using the method described above is marked with (*). The other outcomes were achieved using the visual retrieval algorithms described in Muller et al., Overview of the ImageCLEFmed 2008 medical image retrieval task, CLEF working notes (http://www.clef-campaign.org/2008/working_notes/CLEF2008WN-Contents.html), which is incorporated herein by reference.
  • the line labeled ‘Proposed System’ depicts the outcomes achieved when using image patch normalization and the line labeled ‘Not Normalized’ depicts the outcomes achieved when using the patch original gray levels.
  • the normalized patch approach in the proposed system is shown to rank first among the automatic purely visual retrieval systems.
  • the retrieval system is computationally efficient, with an average retrieval time of less than 400 ms per query.
  • Image similarity-based categorization and retrieval becomes of clinical value once the task involves a diagnostic-level categorization, such as healthy vs. pathology.
  • the category models generated as described in the examples above were examined on chest x-rays obtained for various clinical indications in the emergency room of Sheba Medical Center. 102 frontal chest images were used, of which 26 were diagnosed as normal and 76 as having one or more pathologies, such as lung infiltrates, left or right pleural effusion, or an enlarged heart shadow. X-ray interpretations, made by two radiologists, served as the referral gold standard. Inconclusive results were not included in this set. Four sample images from this data are presented in FIG. 7.
  • patch-based classification was implemented using a two-class SVM classifying algorithm; the classification was conducted for each pathology type, and for healthy vs. any pathology.
  • system parameters were tuned using the general ImageClef 2007 database and were not specifically tuned to the lung pathology task.
  • a leave one out classification was performed (results averaged over 102 trials). Table 1 summarizes the classification results:
  • the performance depends on the pathology type: it is highly accurate in detecting enlarged hearts, with a sensitivity of 95.24% and specificity of 93.48%. It is less accurate in detecting lung infiltrates and effusions.
  • a patch-based classification system was applied to a variety of medical image archives, in categorization and retrieval tasks.
  • the exemplary system was tuned to achieve high accuracy, with an average of over 90% correct classification on a publicly available database of 12,000 medical radiographs.
  • in the ImageClef 2008 medical annotation challenge, the system ranked second. It is highly efficient, with less than 200 milliseconds of training and classification time per image. Using the same methods, an image retrieval utility was developed, which ranked first among the visual retrieval systems in ImageClef 2008. Extending the system to pathology-level discrimination showed initial results for lung disease categorization.


Abstract

A method of generating a category model for classifying medical images. The method comprises providing a plurality of medical images each categorized as one of a plurality of categorized groups, generating an index of a plurality of visual words according to a distribution of a plurality of local descriptors in each the image, modeling a category model mapping a relation between each visual word and at least one of the categorized groups according to the index, and outputting the category model for facilitating the categorization of an image based on local descriptors thereof.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/170,200 filed Jun. 28, 2011, which claims the benefit of priority under 35 USC 119(e) of U.S. Provisional Patent Application No. 61/358,979 filed Jun. 28, 2010. The contents of the above applications are all incorporated by reference as if fully set forth herein in their entirety.
  • FIELD AND BACKGROUND OF THE INVENTION
  • The present invention relates to analysis of medical images and, more particularly, but not exclusively to automatic analysis and classification of medical images depicting an organ or a human body system.
  • Systems and devices for visualizing the inside of living organisms are among the most important medical developments in the last thirty years. Systems like X-ray scanners, computerized tomography (CT) scanners and magnetic resonance imaging (MRI) scanners allow physicians to examine internal organs or areas of the body that require a thorough examination. In use, the visualizing scanner outputs a medical image, such as a cross-sectional image, or a sequence of computerized cross-sectional images of a certain body organ, which is then diagnosed by radiologists and/or other physicians.
  • In most hospitals and radiology centers, the medical images are transferred to a picture archiving communication system (PACS) before being accessed by the radiologists. The PACS is installed on one or more computers, which are dedicated to storing, retrieving, distributing and presenting the stored 3D medical images. The 3D medical images are stored in an independent format. The most common format for image storage is digital imaging and communications in medicine (DICOM).
  • The rapid growth of computerized medical imagery using PACS in hospitals throughout the world led to the development of systems for classifying visual medical data. For example, International Patent Application Publication No. WO/2007/099525, filed on Feb. 18, 2007, describes a system for analyzing a source medical image of a body organ. The system comprises an input unit for obtaining the source medical image having three dimensions or more, a feature extraction unit that is designed for obtaining a number of features of the body organ from the source medical image, and a classification unit that is designed for estimating a priority level according to the features.
  • Another example is described in U.S. Pat. No. 6,754,675, filed on Jul. 16, 2001, which describes an image retrieval system containing a database with a large number of images. The system retrieves images from the database that are similar to a query image entered by the user. The images in the database are grouped in clusters according to a similarity criterion so that mutually similar images reside in the same cluster. Each cluster has a cluster center which is representative of the images in it. A first step of the search for similar images selects the clusters that may contain images similar to the query image, by comparing the query image with the cluster centers of all clusters. A second step of the search compares the images in the selected clusters with the query image in order to determine their similarity to the query image.
  • SUMMARY OF THE INVENTION
  • According to some embodiments of the present invention there is provided a method of generating a category model for classifying medical images. The method comprises providing a plurality of medical images each categorized as one of a plurality of categorized groups, generating an index of a plurality of visual words according to a distribution of a plurality of local descriptors in each the image, modeling a category model mapping a relation between each the visual word and at least one of the plurality of categorized groups according to the index, and outputting the category model for facilitating the categorization of an image based on local descriptors thereof.
  • Optionally, the method further comprises dividing the plurality of medical images among the plurality of categorized groups.
  • Optionally, the index comprises less than 700 visual words.
  • Optionally, the plurality of medical images are part of a training set having more than 10,000 medical images.
  • Optionally, the generating comprises clustering the plurality of local descriptors in a plurality of clusters, the plurality of visual words being defined according to the plurality of clusters.
  • More optionally, the clustering is performed according to a principal component analysis (PCA).
  • Optionally, the modeling is performed using a support vector machine (SVM) training procedure.
  • Optionally, the SVM training procedure is a multi-class SVM with a radial basis function (RBF) kernel.
  • Optionally, the plurality of medical images are provided from a picture archiving communication system (PACS).
  • Optionally, the plurality of categorized groups define a plurality of pathologies.
  • Optionally, the method further comprises automatically categorizing the plurality of medical images.
  • According to some embodiments of the present invention there is provided a method of classifying a medical image using a category model. The method comprises providing a category model which maps a plurality of visual-words in a space, each the visual-word being associated with at least one of a plurality of image categories, receiving an examined medical image, identifying a group of the plurality of visual-words in the examined medical image, using the category model to match the group with an image category of the plurality of image categories, and outputting the image category.
  • Optionally, the outputting comprises presenting the image category in a client terminal used to provide the examined medical image.
  • Optionally, identifying is performed without segmenting the examined medical image.
  • Optionally, identifying is performed without registering the examined medical image.
  • More optionally, the method further comprises updating the category model according to the matching.
  • According to some embodiments of the present invention there is provided a medical image analysis system of classifying a medical image using a category model. The system comprises a repository which stores a category model mapping a plurality of visual-words in a space, each the visual-word being associated with at least one of a plurality of image categories, an input unit which receives an examined medical image, a categorization module which identifies a group of the plurality of visual-words in the examined medical image and uses the category model to match the group with an image category of the plurality of image categories, and a presentation unit which present the image category in response to the receiving of the examined medical image.
  • Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
  • Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
  • For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
  • The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
  • Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
  • In the drawings:
  • FIG. 1 is a flowchart of a method of generating a category model for classifying medical images, according to some embodiments of the present invention;
  • FIG. 2 is a method of classifying a medical image using a category model, for example as generated according to FIG. 1, according to some embodiments of the present invention;
  • FIG. 3 is a schematic illustration of a medical image analysis system of classifying a medical image using a category model, for example as generated according to FIG. 1, according to some embodiments of the present invention;
  • FIG. 4A is a distribution of images across categories;
  • FIG. 4B depicts a graph which illustrates the effect of dictionary size on the accuracy of categorization using a category model generated as depicted in FIG. 2, according to some embodiments of the present invention;
  • FIG. 4C depicts a graph which illustrates the effect of dictionary size on the accuracy of categorization when the image patches have between 5 and 8 feature components, according to some embodiments of the present invention;
  • FIG. 5 is a graph mapping the relation between the weight of spatial features in x-axis and the classification accuracy in y-axis where the bars show mean and standard deviation of 20 experiments;
  • FIG. 6 is a set of images where the first two images are the query images and the following images (left to right, top to bottom) are the retrieval results; and
  • FIG. 7 is a graph depicting the relation between the precision shown for first 5, 10, 15, 20 and 30 returned images and the number of images.
  • DESCRIPTION OF EMBODIMENTS OF THE INVENTION
  • The present invention relates to analysis of medical images and, more particularly, but not exclusively to automatic analysis and classification of medical images depicting an organ or a human body system.
  • According to some embodiments of the present invention there are provided systems and methods of modeling a category model which is used for classifying medical images. The method is based on an analysis of a plurality of medical images, such as X-ray scans and volumetric scan images. Each medical image is categorized, manually and/or automatically, as one of a plurality of categorized groups, for example according to visual characteristics of one or more pathologies. This allows generating an index, a dictionary, of visual words, which are patterns of salient local image patches. The dictionary is generated according to a distribution of a plurality of local descriptors in each image. Now, a category model mapping a relation between each visual word and one or more of the plurality of categorized groups is modeled according to the index. In such a manner, the category model may be provided, for example sent, for facilitating the categorization of an image based on local descriptors thereof.
  • According to some embodiments of the present invention there are provided systems and methods of classifying a medical image using a category model, such as the category model which is outlined above and described below. This method is based on a category model which maps a plurality of visual-words in a space where each visual-word is associated with one or more image categories. The category model may be locally stored in a computing unit that implements the method or in a remote and/or external database. Now, an examined medical image is received and a group of visual-words which are documented in the category model are extracted from the examined medical image, optionally using an index of visual words, such as the aforementioned dictionary. This allows using the category model to match the group with an image category of the plurality of image categories and outputting the image category.
  • Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
  • Reference is now made to FIG. 1, which is a flowchart of a method of generating a category model for classifying medical images, according to some embodiments of the present invention.
  • First, as shown at 101, a training set having a plurality of medical images is received. As used herein, a medical image means an X-Ray scan image, a computerized tomography (CT) scan image, a magnetic resonance imaging (MRI) scan image, or a positron emission tomography (PET)-CT scan image. For example, the images are taken from a medical database, such as a PACS or a radiology information system (RIS). Optionally, the number of medical images in the training set is between a few hundred and a few hundred thousand, or even more. For example, the training set includes about 1,200 medical images or about 65,000 medical images, as exemplified below. Optionally, the number of images changes according to the number of possible pathologies which are categorized in the category model. Optionally, a ratio of about 2,000 images per category is maintained.
  • Now, as shown at 102, local descriptors, which may be referred to herein as image patches, are identified in each one of the provided medical images. The local descriptors are repeatable multidimensional features so that if there is a transformation between two instances of an object, corresponding points are detected and substantially identical descriptor values are obtained around each. Optionally each image patch is represented by a multidimensional record.
  • Optionally, the descriptors are resistant to geometric and illumination variations, for example as described in any of the following T. Lindenberg, Scale-space theory in computer vision, Kluwer Academic Publishers, 1994, D. G. Lowe, Object Recognition from local scale-invariant features, ICCV (International Conference on Computer Vision), 1999; J. Matas, J. Burianek, and J. Kittler. Object recognition using the invariant pixel-set signature, BMVC (British Machine Vision Conference), 2000; and F. Schaffalitzky and A. Zisserman. Viewpoint invariant texture matching and wide baseline stereo, ICCV, 2001, which are incorporated herein by reference.
  • Optionally, the image patches are acquired using one or more patch sampling strategies, such as random sampling and/or grid sampling, optionally with spacings. Optionally, the size of a patch is 9×9 pixels. Optionally, image patches along the border of the image are ignored. Optionally, the intensity values within an image patch are normalized to have zero mean and unit variance. This provides local contrast enhancement and augments the information within the image patches. Optionally, image patches that have a single intensity value of black are ignored.
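The grid sampling and variance normalization described above can be sketched as follows. This is an illustrative NumPy sketch, not the patented implementation; the function name `extract_patches` and the non-overlapping stride are assumptions made for the example.

```python
import numpy as np

def extract_patches(image, patch_size=9, stride=9):
    """Sample patch_size x patch_size patches on a regular grid,
    skip uniform patches (e.g. all-black background), and normalize
    the rest to zero mean and unit variance."""
    patches = []
    h, w = image.shape
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            p = image[y:y + patch_size, x:x + patch_size].astype(float).ravel()
            std = p.std()
            if std == 0:  # single intensity value -> ignored
                continue
            patches.append((p - p.mean()) / std)
    return np.array(patches)
```

Normalizing each patch independently provides the local contrast enhancement mentioned in the text, at the cost of discarding the patch's mean gray level, which is why the mean may be re-added as a separate feature later.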
  • According to some embodiments of the present invention, the data dimensionality, and optionally the computational complexity and the level of noise, may be reduced using a procedure such as principal component analysis (PCA), principal component regression (PCR) and/or partial least squares (PLS) regression. For example, the dimensionality of each 9×9 image patch is reduced from 81 to 7.
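The 81-to-7 PCA reduction can be sketched with an SVD, as below. This is a minimal NumPy sketch under the assumption that patches arrive as rows of a matrix; the function names `fit_pca` and `apply_pca` are illustrative, not from the original system.

```python
import numpy as np

def fit_pca(patches, n_components=7):
    """Learn a PCA projection from flattened patches.
    Returns the mean vector and the top principal directions."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Rows of vt are the principal directions, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def apply_pca(patches, mean, components):
    """Project patches onto the learned components (81 -> 7 here)."""
    return (patches - mean) @ components.T
```

Each 81-dimensional normalized patch then becomes a 7-coefficient descriptor, to which the mean gray level and (x, y) coordinates may be appended as in the following paragraphs.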
  • For example, when PCA is used, a resultant PCA component does not contain information regarding the average intensity of the respective image patch. This average value contains information that discriminates between the dark background and the bright tissue and may be used to distinguish between tissue types. In such embodiments, the mean gray level of the image patch may be taken as an additional feature.
  • Optionally, the center of each image patch, coordinates (x, y) is added to a respective image patch multidimensional record as two additional features, for example as an overall ten-dimensional image patch representation. The addition of the spatial coordinates to the image patch multidimensional record introduces spatial information into the image representation. Optionally, the relative feature weights in the proposed system are tuned experimentally on a test/cross-validation set, for example as described in the example below.
  • Optionally a dataset which documents the image patches is generated for each image in the training set. The dataset is optionally a multidimensional record.
  • Now, as shown at 103, a dictionary is generated according to the image patches. First, some or all of the images are selected. Now, the image patches of the selected images are clustered in a plurality of clusters distributed in a feature space, which may be referred to herein as an image patch space. Each cluster is defined in a different subspace which may be referred to herein as visual word, for example using iterative square error partitioning and/or hierarchical technique. The visual words form an index or a codebook, referred to herein as a dictionary, of the image patches in a feature space. Optionally, the number of visual words is limited to a predefined amount. Optionally, the predefined amount is 700 or less, for example as shown in FIGS. 4B and 4C and described below. Optionally, each visual word includes 7 PCA coefficients, for example as described above.
  • Optionally, a k-means algorithm is used to cluster the image patches. This algorithm proceeds by iterated assignments of image patches to their closest cluster centers (the visual words) and re-computation of the cluster centers, see O. Duda, P. E. Hart, D. G. Stork, Pattern Classification, John Wiley & Sons, 2000, which is incorporated herein by reference. Note that this dictionary development step is done in an unsupervised mode, without any reference to the image categories, such as pathologies.
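The iterated assign/re-compute loop of k-means can be sketched as follows. This is a deliberately plain NumPy sketch for illustration (a production system would use an optimized k-means library); the function name `build_dictionary` and the fixed iteration count are assumptions.

```python
import numpy as np

def build_dictionary(descriptors, n_words=700, n_iter=20, seed=0):
    """Cluster patch descriptors with plain k-means; the resulting
    cluster centers serve as the visual-word dictionary."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(descriptors), n_words, replace=False)
    centers = descriptors[idx].astype(float)
    for _ in range(n_iter):
        # Assignment step: each descriptor goes to its nearest center.
        d = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Update step: each center becomes the mean of its members.
        for k in range(n_words):
            members = descriptors[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers
```

With ~700 words and 7-coefficient descriptors, as selected in the experiments above, the dictionary is a 700×7 matrix of cluster centers.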
  • As shown at 104, each image is represented as a bag of visual words, namely a dataset of visual words which appears in the image, such as a visual word vector. The visual words are selected according to the image patches which have been identified in each image. The bag of visual words, which may be referred to herein as a visual-word vector, contains the presence and/or absence information of each visual word from the dictionary in the image, the count of each visual word (i.e., the number of image patches in the corresponding visual word cluster), or the count weighted by other factors. Optionally, the visual-word vector is represented as a histogram wherein each bin in the histogram is a visual word index number selected out of the dictionary and generated automatically from the data.
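The bag-of-visual-words step above reduces, in its simplest count-based form, to a nearest-word lookup followed by a histogram. A minimal NumPy sketch, with the illustrative function name `bag_of_words`:

```python
import numpy as np

def bag_of_words(descriptors, dictionary):
    """Map each patch descriptor to its nearest visual word (a row of
    the dictionary) and count occurrences, producing the image's
    visual-word histogram."""
    d = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(dictionary))
```

Each bin of the returned histogram corresponds to one visual word index from the dictionary, as described in the text; weighted variants simply replace the raw counts.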
  • As shown at 105, the plurality of medical images are categorized according to one or more pathologies which have been identified as depicted therein. The categorization is optionally performed manually, for example by a diagnosis of one or more physicians, such as orthopedic physicians and radiologists. Alternatively, the categorization may be performed automatically, for example using known image classification methods and/or by an analysis of a diagnosis and/or a textual description that is attached to the image. Alternatively, the categorization may be semi-automatic, for example by a combination of automatic textual and/or image classification methods and a manual verification by one or more practitioners. Each visual-word vector is categorized according to the image which is related thereto.
  • Now, as shown at 106, the categorized visual-word vectors of the categorized images are combined to create a category model.
  • Optionally, given the categorized visual-word vectors, which may be divided into categories, a support vector machine (SVM) training algorithm builds a category model that allows estimating to which one of the categories, if any, a certain medical image which is not from the training set is related. Optionally, the category model is an SVM model in which the visual-word vectors are represented as points in space, mapped so that the categorized visual-word vectors of the separate categories are divided by a clear gap that is as wide as possible. Optionally, the SVM training algorithm is a multi-class SVM that is optionally implemented as a series of one-vs-one binary SVMs with a radial basis function (RBF) kernel, for example based on the LIBSVM library, available at http://www.csie.ntu.edu.tw/˜cjlin/libsvm/, which is incorporated herein by reference. Optionally, SIFT image features are extracted from each image and used to reduce the visual word extraction time.
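Training the multi-class RBF SVM on visual-word histograms can be sketched with scikit-learn's `SVC`, which wraps the LIBSVM library cited above and uses one-vs-one multi-class handling by default. The toy histograms below are invented for the example (real vectors would have roughly 700 bins, one per dictionary word), and the parameter values are assumptions, not the tuned settings of the original system.

```python
import numpy as np
from sklearn.svm import SVC

# Invented toy visual-word histograms for two hypothetical categories.
rng = np.random.default_rng(0)
chest = rng.poisson([8, 1, 1], size=(20, 3))  # word 0 dominant
hand = rng.poisson([1, 8, 1], size=(20, 3))   # word 1 dominant
X = np.vstack([chest, hand]).astype(float)
y = np.array([0] * 20 + [1] * 20)

# SVC wraps LIBSVM; kernel="rbf" gives the RBF kernel, and
# multi-class problems are decomposed one-vs-one automatically.
model = SVC(kernel="rbf", C=10, gamma="scale").fit(X, y)
pred = model.predict([[9.0, 0.0, 1.0], [0.0, 9.0, 1.0]])
```

A new image's histogram is then categorized simply by calling `model.predict` on it, which corresponds to block 107 below.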
  • Now, the category model is outputted, as shown at 107, facilitating the categorization of a new medical image, which is mapped into the space of the category model and predicted to belong to a category based on which side of the gap it falls on.
  • Reference is now made to FIG. 2, which is a method 200 of classifying a medical image using a category model, for example as generated according to FIG. 1, according to some embodiments of the present invention.
  • First, as shown at 201, a category model which maps a plurality of categorized visual-words and/or visual-word vectors in space is received. The category model is optionally generated based on a training set of a plurality of exemplary medical images, for example as depicted in FIG. 1.
  • As shown at 202, an examined medical image is received. Optionally, the examined medical image is uploaded from a PACS and/or a non-transitory storage medium, such as a CD, a DVD, and/or a memory card, to a client terminal which implements the method 200 and/or a client terminal connected to a computing unit which implements the method 200. The client terminal may be a laptop, a Smartphone, a cellular phone, a tablet, a personal computer, a personal digital assistant (PDA), and the like.
  • Now, as shown at 203, a visual word vector and/or a histogram are generated according to an analysis of the image. The visual word vector represents image patches of the image which correspond with visual words at the space of the category model. The conversion is optionally similar to that described in relation to blocks 102, 103, and 105, where image patches are identified and matched with visual words in the dictionary to generate the respective bag of visual words.
  • Now, as shown at 204, the visual words of the examined image are matched with the category model. The match maps the visual words of the vector in the space of the category model. The mapping is to a subspace, or to the proximity of a subspace, which is associated with a certain category mapped in the category model. This allows, as shown at 205, the categorization of the examined image. As shown at 206, the categorization is outputted, for example presented to the operator of the client terminal, forwarded to a database which hosts the examined image for an association therewith, and/or sent, for example via an email service, to a practitioner who is related to the examined image and/or to the imaged patient.
  • Optionally, as shown at 207, each examined image and/or the related visual word vector and the categorization thereof is used to update the category model. In such a manner, the category model is improved each time it is used for categorizing a medical image. The update may be performed by rerunning the dictionary generation process and, respectively, the category model generation process depicted in blocks 103, 104, and 106 of FIG. 1.
  • It should be noted that the method depicted in FIG. 2 allows categorizing medical images, such as 2-dimensional (2D) X-Ray images and 3D CT or MRI images, without segmentation and/or registration. In such a manner, the computational complexity involved in categorizing each examined image is minimal. Such a method may be implemented on a thin client with limited computational power.
  • Reference is now made to FIG. 3, which is a schematic illustration of a medical image analysis system for classifying a medical image using a category model, for example as generated according to FIG. 1, according to some embodiments of the present invention. The medical image analysis system 301 comprises an input module 302 for obtaining or receiving a medical image, a repository 303 for storing the category model, and a categorization module 304 for categorizing the received medical image according to the category model. The input module 302 is designed to receive the medical image either directly from a medical imaging system or indirectly via a content source such as a PACS server, a PACS workstation, a computer network, or a portable memory device such as a DVD, a CD, a memory card, etc. Each received medical image is preferably associated with medical information. Such medical information may comprise the patient age, gender, medical condition, ID, and the like. Optionally, the medical image is contained in a digital imaging and communications in medicine (DICOM) object.
  • Optionally, the input module 302 is set to forward the received medical image to the categorization module 304. The categorization module 304 optionally implements the method depicted in FIG. 2 so as to categorize the received image. The system 301 further includes a presentation unit 305, such as a display, for presenting the categorization performed by the categorization module 304. The categorization may be displayed in a window or any other graphical user interface (GUI). When such an embodiment is used, the medical image analysis system 301 can alert the user in real time whenever a critical pathological categorization has been identified in one of the received medical images. Such an embodiment increases the effectiveness of a therapy given to patients, as it alerts the system user regarding a pathological indication immediately after the medical image has been acquired. Optionally, the medical image analysis system 301 includes a model generation module which is set to generate and optionally to update the category model, for example as described above in relation to FIG. 1 and block 207 of FIG. 2.
  • It is expected that during the life of a patent maturing from this application many relevant systems and methods will be developed, and the scope of the terms client terminal, computing unit, and image processing is intended to include all such new technologies a priori.
  • As used herein the term “about” refers to ±10%.
  • The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. These terms encompass the terms “consisting of” and “consisting essentially of”.
  • The phrase “consisting essentially of” means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
  • As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • The word “exemplary” is used herein to mean “serving as an example, instance or illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
  • The word “optionally” is used herein to mean “is provided in some embodiments and not provided in other embodiments”. Any particular embodiment of the invention may include a plurality of “optional” features unless such features conflict.
  • Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases “ranging/ranges between” a first indicated number and a second indicated number and “ranging/ranges from” a first indicated number “to” a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
  • It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
  • Various embodiments and aspects of the present invention as delineated hereinabove and as claimed in the claims section below find experimental support in the following examples.
  • Reference is now made to the following example, which together with the above descriptions, illustrates some embodiments of the invention in a non-limiting fashion.
  • In this example, system and method validation was conducted using a database of 12,000 categorized medical images (radiographs). This dataset is the basis for the ImageClef 2007 medical image classification competition; see T. Deselaers et al., Overview of the ImageCLEF 2007 object retrieval task, in Workshop of the Cross Language Evaluation Forum 2007, volume 5152, 2008, which is incorporated herein by reference. A set of 11,000 medical images was used for training, and 1,000 served for testing. There are 116 different categories within the archive, differing in the examined region, the image orientation with respect to the body, or the biological system under evaluation. Several of these images are presented in FIG. 4A. The distribution of the images across the categories is non-uniform; the most frequent category contains over 19% of the images in the database, while many categories are represented by less than 0.1% of the images. The system parameters were optimized using the training portion of this set, by running 20 cross-validation experiments trained on 10,000 images and verified on 1,000 randomly drawn test images. Each parameter was optimized independently. As FIG. 4B shows, increasing the number of dictionary words proved useful up to 700 words. Beyond this value the running time increased significantly, with no evident improvement in the classification rate. FIG. 4B also demonstrates that using an SVM classifying algorithm provides results that are more than 3% higher than the best K-NN classifier (k=3). The effect of the number of PCA components was examined next. FIG. 4C shows similar classification results in the range of 5 to 8 components, with an average classification rate of approximately 90% using the SVM classifying algorithm. Based on the above experiments, a dictionary size of 700 visual words was selected, where each word contains 7 PCA coefficients.
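The selected configuration (700 visual words, 7 PCA coefficients per patch) implies a clustering step over dimensionality-reduced patch features. The following is a minimal sketch under the assumption that plain k-means forms the visual-word dictionary; the patent's actual clustering procedure is not restated here, and all sizes in the demonstration are illustrative.

```python
import numpy as np

def build_dictionary(features, n_words, n_iter=10, seed=0):
    """Plain k-means over patch feature vectors; centroids become visual words."""
    rng = np.random.default_rng(seed)
    # initialize centroids from randomly chosen feature vectors
    centroids = features[rng.choice(len(features), n_words, replace=False)].copy()
    for _ in range(n_iter):
        # assign every feature vector to its nearest centroid
        d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned features
        for k in range(n_words):
            members = features[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids

# toy run: 2,000 random 7-dimensional patch features, small dictionary
rng = np.random.default_rng(1)
feats = rng.standard_normal((2000, 7))
dictionary = build_dictionary(feats, n_words=50)
```

In a full pipeline the dictionary would be built once from the training archive (e.g., 700 words over millions of PCA-reduced patches) and then reused to quantize every examined image.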
  • Incorporating the spatial coordinates of the patch as additional features improves the classification performance noticeably, as seen in FIG. 5, which is a graph mapping the weight of the spatial features (x-axis) against the classification accuracy (y-axis), where the bars show the mean and standard deviation of 20 experiments.
  • The optimal range for the (x, y) coordinates is [−3, 3]. The patch variance normalization step improves the classification rate as well: with no normalization, the average classification rate is 88.19%, while with normalization it climbs to 90.9%. Using SIFT features with the SVM classification significantly increased the feature extraction time, and achieved an average of 85.4% classification accuracy, well below the classification rate of a raw-patch-based classification.
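The patch variance normalization step reported above can be sketched as follows, assuming it means the usual zero-mean, unit-variance transform of each patch's gray levels (an assumption consistent with the goals of brightness invariance and local contrast enhancement stated in claim 14):

```python
import numpy as np

def normalize_patch(patch, eps=1e-8):
    """Zero-mean, unit-variance normalization of a gray-level patch.

    Subtracting the mean removes the local brightness offset; dividing by
    the standard deviation equalizes local contrast, so the descriptor is
    less sensitive to exposure differences between radiographs.
    """
    patch = np.asarray(patch, dtype=float)
    return (patch - patch.mean()) / (patch.std() + eps)
```

Under this transform, a patch and a brightness/contrast-shifted copy of it (e.g., `2 * patch + 5`) map to essentially the same descriptor, which is what lets one visual word cover the same anatomy across differently exposed images.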
  • Using the parameter set defined above, classification of the 1,000 previously unseen test images was conducted. The overall classification rate achieved is 89.1%. The total running time for the whole system, training and classification, was approximately 40 minutes on the full-resolution images, and 3 minutes on the ¼ scaled-down images, as measured on a dual quad-core Intel Xeon 2.33 GHz machine.
  • Reference is now also made to another example, in which system and method validation was conducted using a database of 66,000 categorized medical images (radiographs). This dataset is, optionally, the ImageClef 2008 database; see http://www.imageclef.org/ImageCLEF2008, which is incorporated herein by reference. In ImageClef 2008 a large-scale medical image retrieval competition was conducted. A database of over 66,000 images was used with 30 query topics. Each topic is composed of one or more example images and a short textual description in several languages. The objective is to return a ranked set of 1,000 images from the complete database, sorted by their relevance to the presented queries. Sample queries from this challenge and the first few returned images are seen in FIG. 6, which depicts a set of images where the first two images are the query images and the following images (left to right, top to bottom) are the retrieval results. The retrieved results were manually judged for relevance by medical experts. FIG. 7 is a graph depicting the precision for the first 5, 10, 15, 20, and 30 returned images. The precision achieved using the method described above is marked with (*). The other outcomes are achieved using visual retrieval algorithms described in Muller et al., Overview of the ImageCLEFmed 2008 medical image retrieval task, in CLEF working notes (http://www.clef-campaign.org/2008/working_notes/CLEF2008WN-Contents.html), which is incorporated herein by reference.
  • In this Figure, the line labeled ‘Proposed System’ depicts the outcomes achieved when using image patch normalization and the line labeled ‘Not Normalized’ depicts the outcomes achieved when using the patch original gray levels. The normalized patch approach in the proposed system is shown to rank first among the automatic purely visual retrieval systems.
  • The retrieval system is computationally efficient, with an average retrieval time of less than 400 ms per query.
  • Categorization on the Pathology Level
  • Image similarity-based categorization and retrieval becomes of clinical value once the task involves a diagnostic-level categorization, such as healthy vs. pathology. Optionally, the category models generated as described in the examples above were examined on chest x-rays obtained for various clinical indications in the emergency room of the Sheba medical center. 102 frontal chest images were used, of which 26 were diagnosed as normal and 76 as having one or more pathologies, such as lung infiltrates, left or right pleural effusion, or an enlarged heart shadow. X-ray interpretations, made by two radiologists, served as the referral gold standard. Inconclusive results were not included in this set. Four sample images from this data are presented in FIG. 7. A patch-based classifier was implemented using an SVM classifying algorithm with two classes; the classification was conducted for each pathology type, and for healthy vs. any pathology. In order to preserve the generalization ability of the classifiers, system parameters were tuned using the general ImageClef 2007 database and were not specifically tuned to the lung pathology task. A leave-one-out classification was performed (results averaged over 102 trials). Table 1 summarizes the classification results:
  •     Category                  Normal images   Abnormal images   Sensitivity   Specificity
        Any Pathology             22/26           74/76             94.8          91.7
        Enlarged heart            20/23           43/44             95.3          93.5
        Lung Infiltrates          23/33           27/34             76.7          73.0
        Right pleural effusion    12/23           42/51             57.1          79.2
        Left pleural effusion     15/27           38/47             62.5          76.0
  • The software correctly identified 74 out of 76 abnormal and 22 out of 26 normal x-rays, with 4 false positive and 2 false negative cases, resulting in a sensitivity of 94.87% and a specificity of 91.67%. In the task of between-pathology discrimination, the performance depends on the pathology type: the system is highly accurate in detecting enlarged hearts, with a sensitivity of 95.24% and a specificity of 93.48%. It is less accurate in detecting lung infiltrates and effusions. Briefly stated, in this study a patch-based classification system was applied to a variety of medical image archives, in categorization and retrieval tasks. The exemplary system was tuned to achieve high accuracy, with an average of over 90% correct classification on a publicly available database of 12,000 medical radiographs. In the ImageClef 2008 medical annotation challenge it ranked second. It is highly efficient, with less than 200 milliseconds of training and classification time per image. Using the same methods, an image retrieval utility was developed, which was ranked first in ImageClef 2008 among the visual retrieval systems. Extending the system to pathology-level discrimination showed initial results for lung disease categorization.
  • Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
  • All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.

Claims (19)

What is claimed is:
1. A computerized method of classifying a medical image using a category model, comprising:
using at least one processing unit for executing code instructions for:
receiving an examined medical image;
identifying image coordinates of a plurality of image patches in said examined medical image, each image patch is represented by a plurality of repeatable multidimensional features in a pixel area of said examined medical image;
providing a category model which maps a plurality of visual-words in a space, each said visual-word is represented by a plurality of reference repeatable multidimensional features in a reference pixel area and image coordinates indicative of a location of said reference pixel area in said space and is associated with at least one of a plurality of image categories;
using said plurality of image patches and said image coordinates of a plurality of image patches to identify a group of said plurality of visual-words in said examined medical image; and
categorizing a pathology in said examined medical image according to said group.
2. The computerized method of claim 1, further comprising presenting said pathology in a client terminal used to provide said examined medical image.
3. The computerized method of claim 1, wherein said group is identified without segmenting said examined medical image.
4. The computerized method of claim 1, wherein said group is identified without registering said examined medical image.
5. The computerized method of claim 1, further comprising updating said category model according to said pathology.
6. The computerized method of claim 1, wherein said category model comprises less than 700 visual words.
7. The computerized method of claim 1, wherein said category model is generated by an analysis of a training set having more than 10,000 medical images.
8. The computerized method of claim 1, wherein said category model is generated by clustering a plurality of image patches from a plurality of medical images in a plurality of clusters, said plurality of visual words being defined according to said plurality of clusters.
9. The computerized method of claim 8, wherein said clustering is performed according to a principal component analysis (PCA).
10. The computerized method of claim 8, wherein said plurality of medical images are provided from a picture archiving communication system (PACS).
11. The computerized method of claim 1, wherein said category model is modeled using a support vector machine (SVM) training procedure.
12. The computerized method of claim 11, wherein said SVM training procedure is a multi-class SVM with a radial basis function (RBF) kernel.
13. The computerized method of claim 1, wherein the category model is updated upon each usage of the category model.
14. The computerized method of claim 1, further comprising normalizing each image patch, wherein each normalized image patch is formed from a transformation of intensity values from a corresponding image patch, to render the image patch less variant to brightness, and to provide local contrast enhancement.
15. The computerized method of claim 14, wherein said intensity values from the image patch are obtained from pixels of the image patch.
16. The computerized method of claim 1, wherein said repeatable multidimensional features in each said image are from three dimensional images.
17. The computerized method of claim 1, further comprising outputting said category model for facilitating the categorization of an image based on local descriptors thereof including said image from three dimensional images.
18. The computerized method of claim 1, wherein the pathology is selected from the group consisting of enlarged heart, lung infiltrates, right pleural effusion and left pleural effusion.
19. A system of classifying a medical image using a category model, comprising:
an interface adapted for receiving an examined medical image;
a memory adapted to store a category model which maps a plurality of visual-words in a space, each said visual-word is represented by a plurality of reference repeatable multidimensional features in a reference pixel area and image coordinates indicative of a location of said reference pixel area in said space and is associated with at least one of a plurality of image categories;
a code store adapted for a code;
at least one processing unit for executing said code;
wherein said code comprising:
code instructions for identifying image coordinates of a plurality of image patches in said examined medical image, each image patch is represented by a plurality of repeatable multidimensional features in a pixel area of said examined medical image;
code instructions for using said plurality of image patches and said image coordinates of a plurality of image patches to identify a group of said plurality of visual-words in said examined medical image; and
code instructions for categorizing a pathology in said examined medical image according to said group.
US14/833,182 2010-06-28 2015-08-24 Method and system of classifying medical images Abandoned US20150363672A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/833,182 US20150363672A1 (en) 2010-06-28 2015-08-24 Method and system of classifying medical images

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US35897910P 2010-06-28 2010-06-28
US13/170,200 US9122955B2 (en) 2010-06-28 2011-06-28 Method and system of classifying medical images
US14/833,182 US20150363672A1 (en) 2010-06-28 2015-08-24 Method and system of classifying medical images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/170,200 Continuation US9122955B2 (en) 2010-06-28 2011-06-28 Method and system of classifying medical images

Publications (1)

Publication Number Publication Date
US20150363672A1 true US20150363672A1 (en) 2015-12-17

Family

ID=45352602

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/170,200 Expired - Fee Related US9122955B2 (en) 2010-06-28 2011-06-28 Method and system of classifying medical images
US14/833,182 Abandoned US20150363672A1 (en) 2010-06-28 2015-08-24 Method and system of classifying medical images

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/170,200 Expired - Fee Related US9122955B2 (en) 2010-06-28 2011-06-28 Method and system of classifying medical images

Country Status (1)

Country Link
US (2) US9122955B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU178086U1 (en) * 2017-07-10 2018-03-22 Светлана Георгиевна Горохова Automated device for the diagnosis of heart disease
JP2018050671A (en) * 2016-09-26 2018-04-05 カシオ計算機株式会社 Diagnosis support apparatus, image processing method in diagnosis support apparatus, and program
CN109711464A (en) * 2018-12-25 2019-05-03 中山大学 Image Description Methods based on the building of stratification Attributed Relational Graps
CN110287352A (en) * 2019-06-26 2019-09-27 维沃移动通信有限公司 Image display method and terminal device

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8463053B1 (en) 2008-08-08 2013-06-11 The Research Foundation Of State University Of New York Enhanced max margin learning on multimodal data mining in a multimedia database
US20170103537A1 (en) * 2011-11-22 2017-04-13 Mayo Foundation For Medical Education And Research Determining image features for analytical models using s-transform
CN103164713B (en) 2011-12-12 2016-04-06 阿里巴巴集团控股有限公司 Image classification method and device
CN103473569A (en) * 2013-09-22 2013-12-25 江苏美伦影像系统有限公司 Medical image classification method based on SVM
CN103488977A (en) * 2013-09-22 2014-01-01 江苏美伦影像系统有限公司 Medical image management system based on SVM
US10628736B2 (en) 2015-09-24 2020-04-21 Huron Technologies International Inc. Systems and methods for barcode annotations for digital images
US10606982B2 (en) 2017-09-06 2020-03-31 International Business Machines Corporation Iterative semi-automatic annotation for workload reduction in medical image labeling
US11042772B2 (en) 2018-03-29 2021-06-22 Huron Technologies International Inc. Methods of generating an encoded representation of an image and systems of operating thereof
WO2020093152A1 (en) * 2018-11-05 2020-05-14 Hamid Reza Tizhoosh Systems and methods of managing medical images
CN109657731A (en) * 2018-12-28 2019-04-19 长沙理工大学 A kind of anti-interference classification method of droplet digital pcr instrument
US10916342B2 (en) 2019-05-16 2021-02-09 Cynerio Israel Ltd. Systems and methods for analyzing network packets
CN111091881B (en) * 2019-12-28 2023-12-19 北京颐圣智能科技有限公司 Medical information classification method, medical classified information storage method and computing device
EP4252190A4 (en) 2020-11-24 2024-09-11 Huron Tech International Inc Systems and methods for generating encoded representations for multiple magnifications of image data
US11816909B2 (en) 2021-08-04 2023-11-14 Abbyy Development Inc. Document clusterization using neural networks

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6571227B1 (en) * 1996-11-04 2003-05-27 3-Dimensional Pharmaceuticals, Inc. Method, system and computer program product for non-linear mapping of multi-dimensional data
US7346209B2 (en) * 2002-09-30 2008-03-18 The Board Of Trustees Of The Leland Stanford Junior University Three-dimensional pattern recognition method to detect shapes in medical images
US7458936B2 (en) * 2003-03-12 2008-12-02 Siemens Medical Solutions Usa, Inc. System and method for performing probabilistic classification and decision support using multidimensional medical image databases
JP2005044330A (en) * 2003-07-24 2005-02-17 Univ Of California San Diego Weak hypothesis generation device and method, learning device and method, detection device and method, expression learning device and method, expression recognition device and method, and robot device
WO2006002320A2 (en) * 2004-06-23 2006-01-05 Strider Labs, Inc. System and method for 3d object recognition using range and intensity
US7751602B2 (en) * 2004-11-18 2010-07-06 Mcgill University Systems and methods of classification utilizing intensity and spatial data
JP4496943B2 (en) * 2004-11-30 2010-07-07 日本電気株式会社 Pathological diagnosis support apparatus, pathological diagnosis support program, operation method of pathological diagnosis support apparatus, and pathological diagnosis support system
WO2006062958A2 (en) * 2004-12-10 2006-06-15 Worcester Polytechnic Institute Image-based computational mechanical analysis and indexing for cardiovascular diseases
EP1828961A2 (en) * 2004-12-17 2007-09-05 Koninklijke Philips Electronics N.V. Method and apparatus for automatically developing a high performance classifier for producing medically meaningful descriptors in medical diagnosis imaging
US7653264B2 (en) * 2005-03-04 2010-01-26 The Regents Of The University Of Michigan Method of determining alignment of images in high dimensional feature space
US7738683B2 (en) * 2005-07-22 2010-06-15 Carestream Health, Inc. Abnormality detection in medical images
US7756309B2 (en) * 2005-07-27 2010-07-13 Bioimagene, Inc. Method and system for storing, indexing and searching medical images using anatomical structures of interest
EP1922999B1 (en) * 2005-09-05 2011-08-03 Konica Minolta Medical & Graphic, Inc. Image processing method and image processing device
EP2412300B1 (en) * 2005-12-28 2014-03-26 Olympus Medical Systems Corp. Image processing device and image processing method in image processing device
US7986827B2 (en) * 2006-02-07 2011-07-26 Siemens Medical Solutions Usa, Inc. System and method for multiple instance learning for computer aided detection
US7949186B2 (en) * 2006-03-15 2011-05-24 Massachusetts Institute Of Technology Pyramid match kernel and related techniques
US7864989B2 (en) * 2006-03-31 2011-01-04 Fujifilm Corporation Method and apparatus for adaptive context-aided human classification
US8467570B2 (en) * 2006-06-14 2013-06-18 Honeywell International Inc. Tracking system with fused motion and object detection
WO2008017991A2 (en) * 2006-08-11 2008-02-14 Koninklijke Philips Electronics, N.V. Methods and apparatus to integrate systematic data scaling into genetic algorithm-based feature subset selection
US8098889B2 (en) * 2007-01-18 2012-01-17 Siemens Corporation System and method for vehicle detection and tracking
US8340437B2 (en) * 2007-05-29 2012-12-25 University Of Iowa Research Foundation Methods and systems for determining optimal features for classifying patterns or objects in images
US8126274B2 (en) * 2007-08-30 2012-02-28 Microsoft Corporation Visual language modeling for image classification
US8160323B2 (en) * 2007-09-06 2012-04-17 Siemens Medical Solutions Usa, Inc. Learning a coarse-to-fine matching pursuit for fast point search in images or volumetric data using multi-class classification
US8131039B2 (en) * 2007-09-26 2012-03-06 Siemens Medical Solutions Usa, Inc. System and method for multiple-instance learning for computer aided diagnosis
US7929804B2 (en) * 2007-10-03 2011-04-19 Mitsubishi Electric Research Laboratories, Inc. System and method for tracking objects with a synthetic aperture
US8295575B2 (en) * 2007-10-29 2012-10-23 The Trustees of the University of PA. Computer assisted diagnosis (CAD) of cancer using multi-functional, multi-modal in-vivo magnetic resonance spectroscopy (MRS) and imaging (MRI)
US8487991B2 (en) * 2008-04-24 2013-07-16 GM Global Technology Operations LLC Clear path detection using a vanishing point
WO2009142758A1 (en) * 2008-05-23 2009-11-26 Spectral Image, Inc. Systems and methods for hyperspectral medical imaging
US7949167B2 (en) * 2008-06-12 2011-05-24 Siemens Medical Solutions Usa, Inc. Automatic learning of image features to predict disease
EP2332087B1 (en) * 2008-07-25 2020-04-22 Fundação D. Anna Sommer Champalimaud E Dr. Carlos Montez Champalimaud Systems and methods of treating, diagnosing and predicting the occurrence of a medical condition
US8407267B2 (en) * 2009-02-06 2013-03-26 Siemens Aktiengesellschaft Apparatus, method, system and computer-readable medium for storing and managing image data
US8330819B2 (en) * 2009-04-13 2012-12-11 Sri International Method for pose invariant vessel fingerprinting
WO2011052826A1 (en) * 2009-10-30 2011-05-05 주식회사 유진로봇 Map generating and updating method for mobile robot position recognition
EP2507743A2 (en) * 2009-12-02 2012-10-10 QUALCOMM Incorporated Fast subspace projection of descriptor patches for image recognition
US8681222B2 (en) * 2010-12-08 2014-03-25 GM Global Technology Operations LLC Adaptation for clear path detection with additional classifiers
US8565482B2 (en) * 2011-02-28 2013-10-22 Seiko Epson Corporation Local difference pattern based local background modeling for object detection


Also Published As

Publication number Publication date
US9122955B2 (en) 2015-09-01
US20110317892A1 (en) 2011-12-29

Similar Documents

Publication Publication Date Title
US9122955B2 (en) Method and system of classifying medical images
Li et al. Large-scale retrieval for medical image analytics: A comprehensive review
Yousef et al. A holistic overview of deep learning approach in medical imaging
Avni et al. X-ray categorization and retrieval on the organ and pathology level, using patch-based visual words
Chan et al. Effective pneumothorax detection for chest X‐ray images using local binary pattern and support vector machine
Ma et al. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
Ben Ahmed et al. Classification of Alzheimer’s disease subjects from MRI using hippocampal visual features
US8885898B2 (en) Matching of regions of interest across multiple views
Anavi et al. Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval
Xu et al. Texture-specific bag of visual words model and spatial cone matching-based method for the retrieval of focal liver lesions using multiphase contrast-enhanced CT images
Zhang et al. Dictionary pruning with visual word significance for medical image retrieval
Wei et al. A content-based approach to medical image database retrieval
JP2010079398A (en) Similar image providing device and program
Wang et al. An interactive system for computer-aided diagnosis of breast masses
Depeursinge et al. Comparative performance analysis of state-of-the-art classification algorithms applied to lung tissue categorization
de Nazaré Silva et al. Automatic detection of masses in mammograms using quality threshold clustering, correlogram function, and SVM
Pino Peña et al. Automatic emphysema detection using weakly labeled HRCT lung images
Aggarwal et al. Semantic and content-based medical image retrieval for lung cancer diagnosis with the inclusion of expert knowledge and proven pathology
Chowdhury et al. An efficient radiographic image retrieval system using convolutional neural network
Singh et al. Content-based mammogram retrieval using wavelet based complete-LBP and K-means clustering for the diagnosis of breast cancer
Saraswat et al. Bypassing confines of feature extraction in brain tumor retrieval via MR images by CBIR
Avni et al. X-ray categorization and spatial localization of chest pathologies
Tang et al. Medical image retrieval using multi-texton assignment
Yang et al. Learning distance metrics for interactive search-assisted diagnosis of mammograms
Li et al. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

Legal Events

Date Code Title Description
AS Assignment

Owner name: TEL HASHOMER MEDICAL RESEARCH INFRASTRUCTURE AND SERVICES LTD.

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KONEN, ELI;SHARON, MICHAL;SIGNING DATES FROM 20110810 TO 20110912;REEL/FRAME:036826/0593

Owner name: RAMOT AT TEL-AVIV UNIVERSITY LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GREENSPAN, HAYIT;AVNI, URI;SIGNING DATES FROM 20110523 TO 20110726;REEL/FRAME:036826/0582

Owner name: BAR-ILAN UNIVERSITY, ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOLDBERGER, JACOB;REEL/FRAME:036826/0596

Effective date: 20110626

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION