WO2016195683A1 - Medical pattern classification using non-linear and nonnegative sparse representations - Google Patents
- Publication number
- WO2016195683A1 (PCT/US2015/034097)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- linear
- dictionaries
- dictionary
- sparse coding
- training
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2136—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- while SR holds great promise for CAD applications, it has certain disadvantages that can be improved upon.
- the linear model traditionally employed by SR-based systems is often inadequate to represent nonlinear information associated with the complex underlying physics of medical imaging. For instance, contrast agents and variation of the dose in computed tomography nonlinearly change the appearance of the resulting image.
- Medical images are also subjected to other common sources of nonlinear variations such as rotation and shape deformation.
- the traditional sparse representation framework would need a much larger number of dictionary atoms to accurately represent these nonlinear effects. This, in turn, requires a larger number of training samples, which might be expensive to collect, especially in medical settings.
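A standard remedy for such nonlinear variation is to compare signals through a kernel function instead of raw inner products, so that nonlinear structure is captured implicitly rather than with extra dictionary atoms. A minimal sketch (the polynomial kernel and the toy data here are illustrative choices of this sketch, not prescribed by the disclosure):

```python
import numpy as np

def polynomial_kernel(A, B, degree=2, c=1.0):
    """kappa(a, b) = (a.b + c)^degree for every pair of columns of A and B."""
    return (A.T @ B + c) ** degree

rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 3))   # three toy training signals of dimension 16
K = polynomial_kernel(Y, Y)        # 3x3 induced kernel (Gram) matrix
```

The Gram matrix K contains every pairwise similarity in the implicit feature space, so later computations never need the nonlinear map itself.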
- Embodiments of the present invention address and overcome one or more of the above shortcomings and drawbacks by providing methods, systems, and apparatuses that perform medical pattern classification using non-linear and non-negative sparse representations.
- This technology is particularly well-suited for, but by no means limited to, imaging modalities such as Magnetic Resonance Imaging (MRI).
- imaging modalities such as Magnetic Resonance Imaging (MRI).
- a method of classifying signals using non-linear sparse representations includes learning non-linear dictionaries based on training signals, each respective non-linear dictionary corresponding to one of a plurality of class labels.
- a non-linear sparse coding process is performed on a test signal for each of the non-linear dictionaries, thereby associating each of the non-linear dictionaries with a distinct sparse coding of the test signal.
- for each respective non-linear dictionary, a reconstruction error is measured using the test signal and the distinct sparse coding corresponding to that dictionary.
- the non-linear dictionary corresponding to the smallest value of the reconstruction error among the non-linear dictionaries is identified, and the class label corresponding to that dictionary is assigned to the test signal.
- the method further includes displaying an image corresponding to the test signal with an indication of the class label.
- the method further comprises cropping a subset of the training signals prior to building the non-linear dictionaries. This cropping may be performed, for example, by identifying a region of interest in the respective training signal and discarding portions of the respective training signal outside the region of interest.
- the training signals comprise anatomical images and the method further comprises identifying the region of interest in the respective training signal based on a user-supplied indication of an anatomical area of interest.
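Once a region of interest is identified, the cropping step reduces to array slicing. A minimal sketch (the coordinates below are hypothetical, standing in for a user-supplied anatomical region):

```python
import numpy as np

def crop_to_roi(image, row_range, col_range):
    """Keep only the indicated region of interest; discard everything outside it."""
    (r0, r1), (c0, c1) = row_range, col_range
    return image[r0:r1, c0:c1]

image = np.arange(64 * 64, dtype=float).reshape(64, 64)   # toy 64x64 image
roi = crop_to_roi(image, (16, 48), (16, 48))              # hypothetical 32x32 region
```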
- a non-negative constraint is applied to the non-linear dictionaries during learning.
- the non-negative constraint may be applied to the distinct sparse coding of the test signal associated with each of the non-linear dictionaries during the non-linear sparse coding process.
- the training signals and the test signal each comprise a k-space dataset acquired using magnetic resonance imaging.
- the class labels described in the aforementioned method may include an indication of disease present in an anatomical area of interest depicted in the test signal.
- a second method of classifying signals using non-linear sparse representations includes receiving non-linear dictionaries (each respective non-linear dictionary corresponding to one of a plurality of class labels) and acquiring a test image dataset of a subject using a magnetic resonance imaging device.
- a non-linear sparse coding process is performed on the test image dataset for each of the non-linear dictionaries, thereby associating each of the non-linear dictionaries with a distinct sparse coding of the test image dataset.
- a reconstruction error is measured using the test image dataset and the distinct sparse coding corresponding to the respective non-linear dictionary.
- a particular non-linear dictionary corresponding to the smallest value of the reconstruction error among the non-linear dictionaries is identified. Then, a clinical diagnosis is provided for the subject based on the particular class label, corresponding to that dictionary, assigned to the test image dataset. In some embodiments, the method further includes displaying the test image dataset simultaneously with the clinical diagnosis.
- the aforementioned second method of classifying signals using non-linear sparse representations may include additional features in different embodiments.
- the method includes a step wherein an optimization process is used to learn the non-linear dictionaries based on training images.
- the method may also include cropping a subset of the training images prior to using the optimization process (e.g., by identifying a region of interest in the image and discarding portions outside that region).
- the optimization process applies a non-negative constraint to the non-linear dictionaries during learning.
- the non-negative constraint may be applied to the distinct sparse coding of the test image dataset associated with each of the non-linear dictionaries during the non-linear sparse coding process.
- a system for classifying image data for clinical diagnosis comprises an imaging device configured to acquire a test image dataset of a subject and an image processing computer.
- the image processing computer is configured to receive non-linear dictionaries (each respective non-linear dictionary corresponding to one of a plurality of class labels) and perform a non-linear sparse coding process on the test image dataset for each of the non-linear dictionaries, thereby associating each of the non-linear dictionaries with a distinct sparse coding of the test image dataset.
- the image processing computer is further configured to measure a reconstruction error for each respective non-linear dictionary included in the nonlinear dictionaries using the test image dataset and the distinct sparse coding corresponding to the respective non-linear dictionary.
- the image processing computer identifies a particular non-linear dictionary corresponding to the smallest value of the reconstruction error among the non-linear dictionaries, and generates a clinical diagnosis for the subject based on the particular class label, corresponding to that dictionary, assigned to the test image dataset.
- the aforementioned system further comprises a display configured to present the clinical diagnosis for the subject.
- the image processing computer is further configured to perform an optimization process to learn the non-linear dictionaries based on training images. Additionally, in some embodiments, the image processing computer is further configured to apply a non-negative constraint to the non-linear sparse coding process and the optimization process.
- FIG. 1 provides an overview of a system for performing medical pattern classification using non-linear and non-negative sparse representations, according to some embodiments;
- FIG. 2 provides a non-linear sparse coding and dictionary learning process, according to some embodiments
- FIG. 3 illustrates a process for classification using a non-linear dictionary, as may be implemented in some embodiments;
- FIG. 4 provides additional detail on a method of medical pattern classification using non-linear sparse representations as it may be applied to the classification of MRI images, according to some embodiments;
- FIG. 5 shows a table illustrating a comparison of classification accuracy with respect to different methods described herein;
- FIG. 6 provides an illustration of some of the dictionary atoms learned from the data, according to one example application described herein;
- FIG. 7 provides a comparison of classification accuracy between different approaches described herein; and
- FIG. 8 illustrates an exemplary computing environment within which embodiments of the invention may be implemented.
- FIG. 1 provides an overview of a system 100 for performing medical pattern classification using non-linear and non-negative sparse representations, according to some embodiments.
- the system 100 illustrated in FIG. 1 includes an Image Processing Computer 115 which is operably coupled to an Imaging Device 105 which acquires signals of a subject.
- the Imaging Device 105 is a Magnetic Resonance Imaging (MRI) device, however it should be understood that other imaging devices may alternatively be used in different embodiments.
- the Image Processing Computer 115 creates dictionaries for class labels based on training signals retrieved from the Training Database 120. Based on these dictionaries, the Image Processing Computer 115 can classify the signal acquired from the Imaging Device 105 according to one or more class labels.
- a User Interface Computer 110 provides parameters to the Image Processing Computer 115 which help guide the dictionary learning and classification processes. Additionally, the Image Processing Computer 115 may output the acquired signal, along with any generated class labels, on the User Interface Computer 110.
- the Image Processing Computer 115 is configured to execute a framework that makes use of sparse representation for both feature extraction and classification. To formulate the problem, let Φ : R^n → F be a non-linear transformation from R^n into a dot product space F.
- the Image Processing Computer 115 retrieves training signals from the Training Database 120 and forms a matrix Y whose columns are the retrieved training signals. It should be noted that database retrieval is only one way that the training signals may be received by the Image Processing Computer 115. In other embodiments, for example, a user may directly upload them to the Image Processing Computer 115.
- each column of Y may be transformed onto another Hilbert space where nonlinear structures can be expressed using simple Euclidean geometry.
- the problem of learning a sparse representation in the transformed space may then be posed as follows:
- min over D, X of || Φ(Y) − D X ||_F^2 + λ || X ||_1 (Equation 1)
- The ℓ1-norm regularization on the coefficients X promotes sparsity in the optimization.
- the optimal dictionary corresponding to Equation 1 can be shown to have the form D = Φ(Y) A for some coefficient matrix A (Equation 2). Substituting this form into Equation 1 yields: min over A, X of || Φ(Y) − Φ(Y) A X ||_F^2 + λ || X ||_1 (Equation 3)
- Equation 3 does not explicitly depend on the transformation Φ(·), but only on the induced kernel matrix K(Y, Y), whose entries are [K(Y, Y)]_ij = ⟨Φ(y_i), Φ(y_j)⟩ = κ(y_i, y_j) (Equation 4)
- the Image Processing Computer 115 may optimize the objective function in (3) in an iterative fashion. Note that the induced kernel matrix shown in Equation 4 could be efficiently approximated using the Nyström method. When the sparse coefficients X are fixed, the dictionary can be updated via A as follows: A = X^T (X X^T)^{-1} (Equation 5)
- the objective function in (3) becomes convex with respect to coefficient matrix X and can be solved efficiently using specialized sparse coding methods such as, for example, proximal optimization and iterative shrinkage.
- a non-negativity constraint is added, i.e., X ≥ 0.
- the objective function is still convex and the sparse coding could also be done using these same techniques.
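As one concrete illustration of this sparse coding step, iterative shrinkage (ISTA) can be applied to the kernelized objective: the reconstruction term || Φ(z) − Φ(Y) A x ||^2 expands into kernel evaluations only, so Φ is never formed. This is a non-authoritative sketch; the function name, shapes, and solver choice are assumptions of the sketch, not of the disclosure.

```python
import numpy as np

def kernel_sparse_code(k_zz, k_z, A, K, lam, n_iter=200, nonneg=False):
    """ISTA for: min_x  k_zz - 2 k_z^T A x + x^T (A^T K A) x + lam ||x||_1.

    This is the kernel expansion of ||phi(z) - Phi(Y) A x||^2, so only kernel
    evaluations are needed: k_zz = kappa(z, z) and k_z[j] = kappa(y_j, z).
    With nonneg=True, x is kept on the non-negative orthant at every step.
    """
    Q = A.T @ K @ A                    # quadratic form of the smooth term
    b = A.T @ k_z
    t = 1.0 / (2.0 * np.linalg.eigvalsh(Q).max() + 1e-12)   # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - t * 2.0 * (Q @ x - b)  # gradient step on the smooth part
        if nonneg:
            x = np.maximum(x - t * lam, 0.0)                      # projected prox
        else:
            x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)  # soft-threshold
    return x
```

The non-negative branch replaces soft-thresholding with a shifted projection onto the non-negative orthant, which is exactly the proximal step for λ·sum(x) under x ≥ 0.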
- FIG. 2 provides a non-linear sparse coding and dictionary learning process 200 that may be performed by the Dictionary Learning Component 115A, according to some embodiments.
- a set of training signals Y, a constant coefficient λ, and a non-linear kernel function K are received, for example, from the Training Database 120 and the User Interface Computer 110.
- a kernel dictionary represented by coefficient matrix A is learned by optimizing the cost function in (3).
- each column of A is initialized. In some embodiments, each column of A is randomly initialized to have only one non-zero coefficient at a random position.
- Equation 3 is solved for X given that A is fixed.
- a non-negative constraint on X may be added. Since the problem becomes convex when fixing A, a solution can be obtained using any convex optimization toolbox generally known in the art.
- Equation 3 is solved for A when X is fixed using Equation 5.
- each column of the kernel dictionary is normalized to the unit norm. Steps 215 - 225 are repeated until convergence.
- the result of this process is a kernel dictionary represented by A.
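Steps 205-230 can be sketched end-to-end as alternating minimization over the kernel matrix. This is an illustrative, simplified rendering under stated assumptions: the sparse-coding step uses a ridge solve plus hard thresholding as a cheap stand-in for a full ℓ1 solver, and the dictionary update is the least-squares solution with X fixed.

```python
import numpy as np

def learn_kernel_dictionary(K, n_atoms, lam, n_outer=10, seed=0):
    """Alternating minimization over Equation 3, following steps 205-230.

    K is the n x n kernel matrix of the training set; the learned
    dictionary is Phi(Y) A, represented entirely by the matrix A.
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    # Step 210: each column of A starts with one non-zero at a random position.
    A = np.zeros((n, n_atoms))
    A[rng.integers(0, n, n_atoms), np.arange(n_atoms)] = 1.0
    for _ in range(n_outer):
        # Step 215: solve for X with A fixed (surrogate for the l1 problem).
        G = A.T @ K @ A
        X = np.linalg.solve(G + lam * np.eye(n_atoms), A.T @ K)
        X[np.abs(X) < lam] = 0.0
        # Step 220: closed-form least-squares update of A with X fixed.
        A = X.T @ np.linalg.pinv(X @ X.T)
        # Step 225: normalize each atom Phi(Y) a_j to unit norm in feature space,
        # using ||Phi(Y) a_j||^2 = a_j^T K a_j.
        norms = np.sqrt(np.maximum(np.einsum('ij,ik,kj->j', A, K, A), 1e-12))
        A = A / norms
    return A
```

In practice the outer loop would run until convergence of the objective rather than for a fixed count, and the ridge/thresholding surrogate would be replaced by a proper convex ℓ1 solver as the text describes.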
- FIG. 3 illustrates a process 300 for classification using a non-linear dictionary as may be implemented by the Classification Component 115B in some embodiments.
- a test signal z, a constant coefficient λ, and a non-linear kernel function K are received, for example, from the Imaging Device 105 and the User Interface Computer 110. Additionally, the dictionary matrix A generated for each class by the Dictionary Learning Component 115A is assembled.
- the signal z is classified into one of the classes.
- a sparse coefficient vector for each class, x_i, is solved for by optimizing Equation 3 with A replaced by the class-specific dictionary A_i.
- the label of z is assigned to be the label of the class with the smallest reconstruction error.
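The decision rule of process 300 can be sketched with the reconstruction error expanded through the kernel trick, so that Φ is never evaluated explicitly. For compactness this sketch assumes all class dictionaries are expressed over one common training matrix Y; the function and variable names are hypothetical.

```python
import numpy as np

def classify_by_residual(k_zz, k_z, class_dicts, codes, K):
    """Pick the class whose kernel dictionary Phi(Y) A_i reconstructs z best.

    k_zz = kappa(z, z); k_z[j] = kappa(y_j, z); K = training kernel matrix.
    Each code x_i should come from solving Eq. 3 with A replaced by A_i.
    """
    errors = []
    for A, x in zip(class_dicts, codes):
        # ||phi(z) - Phi(Y) A x||^2, expanded so Phi is never formed.
        r2 = k_zz - 2.0 * k_z @ (A @ x) + x @ (A.T @ K @ A) @ x
        errors.append(float(r2))
    return int(np.argmin(errors)), errors
```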
- FIG. 4 provides additional detail on a method 400 of medical pattern classification using non-linear sparse representations as it may be applied to the classification of MRI images, according to some embodiments.
- the method can be divided into two general operations:
- a set of training data is received.
- This training data comprises a labeled set of images Y acquired using an MRI imaging device (e.g., Imaging Device 105 in FIG. 1).
- dictionaries are generated for various imaging modalities.
- the labels applied to each respective image are used to categorize the image into a class.
- the class types will vary, for example, based on the portion of anatomy which was imaged and the pathological features being classified.
- a raw labeling strategy may be employed, in which, for example, a 3D MRI image of a brain is labeled simply according to the anatomy depicted. With a more detailed strategy, the brain image may be labeled as having markers indicative of Alzheimer's disease. It should be noted that the robustness of the classification will directly depend on the robustness of the labels. Thus, in some embodiments, once the training images are received at 405, they may be manually labelled with more detailed information to allow for more comprehensive learning.
- each labeled image received in the training set is vectorized.
- Vectorization may be performed, for example, by reshaping the multidimensional signal of the image data into a single dimensional vector.
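Concretely, vectorization is just a reshape; a minimal sketch with arbitrary example sizes:

```python
import numpy as np

# A single 32x32 image becomes one column vector of length 1024.
image = np.zeros((32, 32))
y = image.reshape(-1)              # shape (1024,)

# A stack of labeled training images becomes the matrix Y, one column per image.
images = np.zeros((100, 32, 32))
Y = images.reshape(100, -1).T      # shape (1024, 100)
```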
- the image data may be cropped or scaled prior to vectorization (e.g., based on a predefined threshold set by the user) to reduce the size and focus on an area of interest.
- non-linear dictionaries Φ(D) are learned from Φ(Y) for each class using the set of training examples.
- a non-negative constraint may be incorporated during the learning of D. It should be noted that, if the dictionary is learned with a non-negative constraint, a non-negative constraint should also be used in the test space, as described below.
- a test image y_test is received and vectorized, for example, in a manner similar to that discussed above with respect to steps 405 and 410. Then, at step 430, non-linear sparse coding is performed by optimizing the following for each class: min over x_test of || Φ(y_test) − Φ(D) x_test ||_2^2 + λ || x_test ||_1
- a non-negative constraint may be applied to x_test if such a constraint was used during training at step 415.
- the optimization problems shown in FIG. 4 may be solved via a biconvex optimization process, as described above with respect to FIG. 3.
- the reconstruction error r_i is determined for each class i according to the following equation: r_i = || Φ(y_test) − Φ(D_i) x_i ||_2
- a label is assigned from the class with the smallest reconstruction error determined at step 435.
- the label is presented along with the test image on a display to allow for diagnosis, for example, in clinical settings.
- in one example application, a sparse representation was learned for 2D MR apex images. There were 5,970 images (from 138 patients) for training and 1,991 images (from 46 patients) for testing. Images were of 32×32 pixel resolution. In addition, the images were uniformly rotated to generate 10 samples from each positive sample. The negative samples were not rotated, but were sampled more heavily to match the number of positive samples. In total, there were 119,400 training samples. A dictionary of 5,000 atoms was learned for each class (i.e., positive and negative) with the sparse regularization set to 0.3. Given a new test sample, sparse coding was performed on all dictionaries and the resulting reconstruction errors were compared to assign a label.
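The rotation-based augmentation described above can be sketched in pure NumPy. Nearest-neighbour resampling is used here as a simplified stand-in for whatever interpolation the experiment actually used:

```python
import numpy as np

def rotate_nn(image, angle_deg):
    """Nearest-neighbour rotation about the image centre (pure NumPy sketch)."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    th = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map each output pixel back into the source image.
    src_y = cy + (ys - cy) * np.cos(th) + (xs - cx) * np.sin(th)
    src_x = cx - (ys - cy) * np.sin(th) + (xs - cx) * np.cos(th)
    sy = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    sx = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    return image[sy, sx]

sample = np.random.default_rng(0).random((32, 32))
# Ten uniformly rotated copies of one positive sample.
rotations = [rotate_nn(sample, a) for a in np.linspace(0.0, 360.0, 10, endpoint=False)]
```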
- FIG. 5 shows the classification accuracy comparison between different methods. It is interesting that the sparse method, despite being generative in nature, outperforms discriminative approaches like the Support Vector Machine (SVM) by a significant margin.
- FIG. 6 provides an illustration of some of the dictionary atoms 600 learned from the data, which clearly capture meaningful structures of the apex images.
- FIG. 7 provides a table 700 showing a comparison of classification accuracy between different approaches. It can be noticed that the sparse classification with a non-negative constraint on sparse coefficients outperforms discriminative methods. Both experiments on the apex dataset and the restenosis dataset clearly demonstrate the advantage of our sparse representation based classification approach.
- FIG. 8 illustrates an exemplary computing environment 800 within which
- this computing environment 800 may be used to implement the processes 200, 300, and 400 described in FIGS. 2-4.
- the computing environment 800 may be used to implement one or more of the components illustrated in the system 100 of FIG. 1.
- the computing environment 800 may include computer system 810, which is one example of a computing system upon which embodiments of the invention may be implemented.
- Computers and computing environments, such as computer system 810 and computing environment 800, are known to those of skill in the art and thus are described briefly here.
- the computer system 810 may include a communication mechanism such as a bus 821 or other communication mechanism for communicating information within the computer system 810.
- the computer system 810 further includes one or more processors 820 coupled with the bus 821 for processing the information.
- the processors 820 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art.
- the computer system 810 also includes a system memory 830 coupled to the bus 821 for storing information and instructions to be executed by processors 820.
- the system memory 830 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 831 and/or random access memory (RAM) 832.
- the system memory RAM 832 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
- the system memory ROM 831 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
- system memory 830 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 820.
- a basic input/output system (BIOS) 833 containing the basic routines that help to transfer information between elements within computer system 810, such as during start-up, may be stored in ROM 831.
- RAM 832 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 820.
- System memory 830 may additionally include, for example, operating system 834, application programs 835, other program modules 836 and program data 837.
- the computer system 810 also includes a disk controller 840 coupled to the bus 821 to control one or more storage devices for storing information and instructions, such as a hard disk 841 and a removable media drive 842 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive).
- the storage devices may be added to the computer system 810 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
- the computer system 810 may also include a display controller 865 coupled to the bus 821 to control a display 866, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user.
- the computer system includes an input interface 860 and one or more input devices, such as a keyboard 862 and a pointing device 861, for interacting with a computer user and providing information to the processor 820.
- the pointing device 861 for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor 820 and for controlling cursor movement on the display 866.
- the display 866 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 861.
- the computer system 810 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 820 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 830. Such instructions may be read into the system memory 830 from another computer readable medium, such as a hard disk 841 or a removable media drive 842.
- the hard disk 841 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security.
- the processors 820 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 830.
- hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
- the computer system 810 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
- the term "computer readable medium” as used herein refers to any medium that participates in providing instructions to the processor 820 for execution.
- a computer readable medium may take many forms including, but not limited to, non-volatile media, volatile media, and transmission media.
- Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 841 or removable media drive 842.
- Non-limiting examples of volatile media include dynamic memory, such as system memory 830.
- Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 821.
- Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
- the computing environment 800 may further include the computer system 810 operating in a networked environment using logical connections to one or more remote computers, such as remote computer 880.
- Remote computer 880 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 810.
- computer system 810 may include modem 872 for establishing communications over a network 871, such as the Internet. Modem 872 may be connected to bus 821 via user network interface 870, or via another appropriate mechanism.
- Network 871 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), or a wide area network (WAN).
- the network 871 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11 or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 871.
- [45] The embodiments of the present disclosure may be implemented with any combination of hardware and software.
- the embodiments of the present disclosure may be included in an article of manufacture (e.g., one or more computer program products) having, for example, computer-readable, non-transitory media.
- the media has embodied therein, for instance, computer readable program code for providing and facilitating the mechanisms of the embodiments of the present disclosure.
- the article of manufacture can be included as part of a computer system or sold separately.
- An executable application comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input.
- An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
- a graphical user interface comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions.
- the GUI also includes an executable procedure or executable application.
- the executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user.
- the processor under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
Abstract
A method of classifying signals using non-linear sparse representations includes learning a plurality of non-linear dictionaries based on a plurality of training signals, each respective nonlinear dictionary corresponding to one of a plurality of class labels. A non-linear sparse coding process is performed on a test signal for each of the plurality of non-linear dictionaries, thereby associating each of the plurality of non-linear dictionaries with a distinct sparse coding of the test signal. For each respective non-linear dictionary included in the plurality of non-linear dictionaries, a reconstruction error is measured using the test signal and the distinct sparse coding corresponding to the respective non-linear dictionary. A particular non-linear dictionary corresponding to a smallest value for the reconstruction error among the plurality of non-linear dictionaries is identified and a class label corresponding to the particular non-linear dictionary is assigned to the test signal.
Description
MEDICAL PATTERN CLASSIFICATION USING NON-LINEAR AND NON-NEGATIVE SPARSE REPRESENTATIONS
TECHNICAL FIELD
[1] The present disclosure relates generally to methods, systems, and apparatuses, for medical classification where non-linear and non-negative sparse representations are used to assign class labels to test signals. The disclosed techniques may be applied, for example, to the classification of magnetic resonance (MR) images.
BACKGROUND
[2] Within medical science, pattern classification is the basis for computer-aided diagnosis (CAD) systems. CAD systems automatically scan medical image data (e.g., gathered via imaging modalities such as X-ray, MRI, or ultrasound) and identify conspicuous structures and sections that may be indicative of a disease. Traditionally, classification is performed using popular methods such as support vector machines (SVMs), boosting, and neural networks. These are discriminative approaches, since their objective functions are directly related to the classification error. However, discriminative classifiers are sensitive to data corruption, and their training is prone to over-fitting when training samples are scarce.
[3] Recently, sparse representation (SR) based classification has gained significant interest from researchers across different communities, such as signal processing, computer vision, and machine learning, due to its superior robustness against different types of noise. For example, an SR framework is capable of handling occlusion and corruption by exploiting the property that these artifacts are often sparse in the pixel basis. Classification is often done by first learning a good sparse representation for each pattern class; effective algorithms for learning sparse representations include the method of optimal directions (MOD), K-SVD, and online dictionary learning. A test sample is then classified by computing the maximum likelihood given each sparse representation. It has been shown that this approach outperforms state-of-the-art discriminative methods in many practical applications.
[4] Although SR holds great promise for CAD applications, it has certain disadvantages that can be improved upon. In particular, the linear model traditionally employed by SR-based systems is often inadequate to represent the non-linear information associated with the complex underlying physics of medical imaging. For instance, contrast agents and variation of the dose in computed tomography non-linearly change the appearance of the resulting image. Medical images are also subject to other common sources of non-linear variation, such as rotation and shape deformation. The traditional sparse representation framework would need a much larger number of dictionary atoms to accurately represent these non-linear effects. This, in turn, requires a larger number of training samples, which may be expensive to collect, especially in medical settings.
SUMMARY
[5] Embodiments of the present invention address and overcome one or more of the above shortcomings and drawbacks, by providing methods, systems, and apparatuses that perform a medical pattern classification using non-linear and non-negative sparse
representations. This technology is particularly well-suited for, but by no means limited to, imaging modalities such as Magnetic Resonance Imaging (MRI).
[6] According to some embodiments, a method of classifying signals using non-linear sparse representations includes learning non-linear dictionaries based on training signals, each respective non-linear dictionary corresponding to one of a plurality of class labels. A non-linear sparse coding process is performed on a test signal for each of the non-linear dictionaries, thereby associating each of the non-linear dictionaries with a distinct sparse coding of the test signal. For each respective non-linear dictionary included in the non-linear dictionaries, a reconstruction error is measured using the test signal and the distinct sparse coding corresponding to the respective non-linear dictionary. The particular non-linear dictionary corresponding to the smallest value of the reconstruction error among the non-linear dictionaries is identified, and a class label corresponding to the particular non-linear dictionary is assigned to the test signal. In some embodiments, the method further includes displaying an image corresponding to the test signal with an indication of the class label.
[7] Various enhancements or other modifications may be made to the aforementioned method in different embodiments. For example, in some embodiments, the method further comprises cropping a subset of the training signals prior to building the non-linear dictionaries. This cropping may be performed, for example, by identifying a region of interest in the respective training signal and discarding portions of the respective training signal outside the region of interest. In some embodiments, the training signals comprise anatomical images and the method further comprises identifying the region of interest in the respective training signal based on a user-supplied indication of an anatomical area of interest. In some embodiments, a non-negative constraint is applied to the non-linear dictionaries during learning. For example, the non-negative constraint may be applied to the distinct sparse coding of the test signal associated with each of the non-linear dictionaries during the non-linear sparse coding process. In some embodiments, the training signals and the test signal each comprise a k-space dataset acquired using magnetic resonance imaging. The class labels described in the aforementioned method may include an indication of disease present in an anatomical area of interest depicted in the test signal.
[8] According to other embodiments, a second method of classifying signals using non-linear sparse representations includes receiving non-linear dictionaries (each respective non-linear dictionary corresponding to one of the class labels) and acquiring a test image dataset of a subject using a magnetic resonance imaging device. A non-linear sparse coding process is performed on the test image dataset for each of the non-linear dictionaries, thereby associating each of the non-linear dictionaries with a distinct sparse coding of the test image dataset. Next, for each respective non-linear dictionary included in the non-linear dictionaries, a reconstruction error is measured using the test image dataset and the distinct sparse coding corresponding to the respective non-linear dictionary. A particular non-linear dictionary corresponding to the smallest value of the reconstruction error among the non-linear dictionaries is identified. Then, a clinical diagnosis is provided for the subject based on the particular class label corresponding to the particular non-linear dictionary. In some embodiments, the method further includes displaying the test image dataset simultaneously with the clinical diagnosis.
[9] The aforementioned second method of classifying signals using non-linear sparse representations may include additional features in different embodiments. For example, in some
embodiments, the method includes a step wherein an optimization process is used to learn the non-linear dictionaries based on training images. The method may also include cropping a subset of the training images prior to using the optimization process (e.g., by identifying a region of interest in the image and discarding portions outside that region). In some embodiments, the optimization process applies a non-negative constraint to the non-linear dictionaries during learning. For example, the non-negative constraint may be applied to the distinct sparse coding of the test image dataset associated with each of the non-linear dictionaries during the non-linear sparse coding process.
[10] According to other embodiments, a system for classifying image data for clinical diagnosis comprises an imaging device configured to acquire a test image dataset of a subject and an image processing computer. The image processing computer is configured to receive non-linear dictionaries (each respective non-linear dictionary corresponding to one of the class labels) and perform a non-linear sparse coding process on the test image dataset for each of the non-linear dictionaries, thereby associating each of the non-linear dictionaries with a distinct sparse coding of the test image dataset. The image processing computer is further configured to measure a reconstruction error for each respective non-linear dictionary included in the non-linear dictionaries using the test image dataset and the distinct sparse coding corresponding to the respective non-linear dictionary. The image processing computer identifies a particular non-linear dictionary corresponding to the smallest value of the reconstruction error among the non-linear dictionaries, and generates a clinical diagnosis for the subject based on the particular class label corresponding to the particular non-linear dictionary. In some embodiments, the aforementioned system further comprises a display configured to present the clinical diagnosis for the subject.
[11] In some embodiments of the aforementioned system, the image processing computer is further configured to perform an optimization process to learn the non-linear dictionaries based on training images. Additionally, in some embodiments, the image processing computer is further configured to apply a non-negative constraint to the non-linear sparse coding process and the optimization process.
[12] Additional features and advantages of the invention will be made apparent from the following detailed description of illustrative embodiments that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[13] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there are shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:
[14] FIG. 1 provides an overview of a system for performing medical pattern
classification using non-linear and non-negative sparse representations, according to some embodiments;
[15] FIG. 2 provides a non-linear sparse coding and dictionary learning process, according to some embodiments;
[16] FIG. 3 illustrates a process for classification using a non-linear dictionary, as may be implemented in some embodiments;
[17] FIG. 4 provides additional detail on a method of medical pattern classification using non-linear sparse representations as applied to the classification of MRI images, according to some embodiments;
[18] FIG. 5 shows a table illustrating a comparison of classification accuracy across the different methods described herein;
[19] FIG. 6 provides an illustration of some of the dictionary atoms learned from the data, according to one example application described herein;
[20] FIG. 7 provides a comparison of classification accuracy between different approaches described herein; and
[21] FIG. 8 illustrates an exemplary computing environment within which embodiments of the invention may be implemented.
DETAILED DESCRIPTION
[22] Systems, methods, and apparatuses are described herein which relate generally to medical pattern classification using non-linear and non-negative sparse representations. Briefly, a framework and corresponding techniques are presented herein that make use of sparse representation for both feature extraction and classification. The described techniques are generative, yet significantly outperform their discriminative counterparts. In particular, the techniques demonstrate how to learn non-linear sparse representations. In addition, in some embodiments, the techniques allow the possibility of enforcing a positivity constraint on the sparse coefficients.
[23] FIG. 1 provides an overview of a system 100 for performing medical pattern classification using non-linear and non-negative sparse representations, according to some embodiments. Briefly, the system 100 illustrated in FIG. 1 includes an Image Processing Computer 115 which is operably coupled to an Imaging Device 105 which acquires signals of a subject. In the example of FIG. 1, the Imaging Device 105 is a Magnetic Resonance Imaging (MRI) device; however, it should be understood that other imaging devices may alternatively be used in different embodiments. The Image Processing Computer 115 creates dictionaries for class labels based on training signals retrieved from the Training Database 120. Based on these dictionaries, the Image Processing Computer 115 can classify the signal acquired from the Imaging Device 105 according to one or more class labels. A User Interface Computer 110 provides parameters to the Image Processing Computer 115 which help guide the dictionary learning and classification processes. Additionally, the Image Processing Computer 115 may output the acquired signal, along with any generated class labels, on the User Interface Computer 110.
[24] The Image Processing Computer 115 is configured to execute a framework that makes use of sparse representation for both feature extraction and classification. To formulate the problem, let Φ: ℝ^n → 𝓕 be a non-linear transformation from ℝ^n into a dot product space 𝓕. The Image Processing Computer 115 retrieves training signals from the Training Database 120 and forms a matrix Y whose columns are the retrieved training signals. It should be noted that database retrieval is only one way that the training signals may be received by the Image Processing Computer 115. In other embodiments, for example, a user may directly upload them to the Image Processing Computer 115.
[25] In order to learn non-linearities within the data, each column of Y may be transformed into another Hilbert space where non-linear structures can be expressed using simple Euclidean geometry. Let Φ: ℝ^n → 𝓕 be a non-linear transformation from ℝ^n into a dot product space 𝓕. The problem of learning a sparse representation in the transformed space may then be posed as follows:
(D*, X*) = argmin_{D,X} ‖Φ(Y) − DX‖²_F + λ‖X‖₁    (1)
The ℓ₁-norm regularization on the coefficients X promotes sparsity in the optimization. The optimal dictionary corresponding to Equation 1 has the following form:

D* = Φ(Y)A    (2)

for some A ∈ ℝ^{N×K}. In other words, the dictionary atoms lie within the span of the transformed signals. This allows Equation 1 to be rewritten in an alternative way: the optimal sparse representation may be found by minimizing the objective function with respect to the coefficient matrix A instead of D:

(A*, X*) = argmin_{A,X} ‖Φ(Y) − Φ(Y)AX‖²_F + λ‖X‖₁    (3)

Note that, unlike Equation 1, Equation 3 does not explicitly depend on the transformation Φ(·), but only on the induced kernel matrix

K(Y, Y) = Φ(Y)ᵀΦ(Y)    (4)
This eliminates the need to explicitly map signals to a high-dimensional non-linear space and, therefore, makes the computation more feasible. More specifically, it is more efficient to optimize the objective function in Equation 3, since it only involves a kernel matrix of finite dimension ∈ ℝ^{N×N}, instead of dealing with a possibly infinite-dimensional dictionary in Equation 1.
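To make the kernel-matrix point concrete, the quantity in Equation 4 can be formed directly from pairwise kernel evaluations. The following is a minimal NumPy sketch assuming a Gaussian kernel (one common choice; the document does not fix a particular kernel function, and the function name here is illustrative):

```python
import numpy as np

def gaussian_kernel_matrix(U, V, sigma=1.0):
    """K[i, j] = kappa(u_i, v_j) = exp(-||u_i - v_j||^2 / (2 sigma^2)).

    Columns of U and V are signals; the map Phi is never formed explicitly.
    """
    # Pairwise squared Euclidean distances between columns of U and columns of V.
    sq = (np.sum(U**2, axis=0)[:, None]
          + np.sum(V**2, axis=0)[None, :]
          - 2.0 * U.T @ V)
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

# N = 4 training signals of dimension 3 give a finite 4 x 4 kernel matrix.
rng = np.random.default_rng(0)
Y = rng.standard_normal((3, 4))
K = gaussian_kernel_matrix(Y, Y)
```

Every term in Equation 3 can then be written through K; for example, ‖Φ(Y)(I − AX)‖²_F = tr((I − AX)ᵀ K (I − AX)), so the fitting cost never touches the feature space directly.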
[26] The Image Processing Computer 115 may optimize the objective function in Equation 3 in an iterative fashion. Note that the induced kernel matrix shown in Equation 4 can be efficiently approximated using the Nyström method. When the sparse coefficients are fixed, the dictionary can be updated via A as follows:

A = Xᵀ(XXᵀ)⁻¹    (5)
Similarly, when the dictionary is fixed, the objective function in Equation 3 becomes convex with respect to the coefficient matrix X and can be solved efficiently using specialized sparse coding methods such as, for example, proximal optimization and iterative shrinkage. In some embodiments, a positivity constraint is added, i.e., X ≥ 0. Even in these embodiments, the objective function is still convex, and the sparse coding can be done using these same techniques.
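One way the convex sparse coding subproblem with the positivity constraint could look is projected iterative shrinkage (ISTA with the non-negative proximal step). This is a sketch under assumptions: a linear kernel for the toy data, an illustrative function name, and an arbitrary λ; none of these specifics come from the document.

```python
import numpy as np

def nonneg_kernel_sparse_code(K, A, lam=0.05, n_iter=500):
    """Projected ISTA for  min_X ||Phi(Y) - Phi(Y) A X||_F^2 + lam*||X||_1,  X >= 0.

    Everything is expressed through the N x N kernel matrix K = Phi(Y)^T Phi(Y),
    so the (possibly infinite-dimensional) map Phi is never evaluated.
    """
    G = A.T @ K @ A                      # atom Gram matrix in the feature space
    B = A.T @ K                          # atom/signal correlations
    # Step size below the inverse Lipschitz constant of the quadratic term.
    step = 1.0 / (2.0 * np.linalg.eigvalsh(G).max() + 1e-12)
    X = np.zeros((A.shape[1], K.shape[0]))
    for _ in range(n_iter):
        grad = 2.0 * (G @ X - B)         # gradient of the smooth quadratic term
        # Proximal step for lam*||X||_1 combined with the X >= 0 constraint:
        X = np.maximum(X - step * grad - step * lam, 0.0)
    return X

# Toy usage with a linear kernel (kappa(u, v) = u^T v) and a random dictionary.
rng = np.random.default_rng(1)
Y = rng.standard_normal((5, 8))          # 8 training signals of dimension 5
K = Y.T @ Y
A = rng.standard_normal((8, 6))          # 6 atoms, dictionary D = Phi(Y) A
X = nonneg_kernel_sparse_code(K, A)
```

Because the feasible set X ≥ 0 is convex and the objective is convex, the combined proximal step (shrink, then clip at zero) keeps every iterate feasible while decreasing the cost.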
[27] FIG. 2 provides a non-linear sparse coding and dictionary learning process 200 that may be performed by the Dictionary Learning Component 115A, according to some embodiments. At step 205, a set of training signals Y, a constant coefficient λ, and a non-linear kernel function K are received, for example, from the Training Database 120 and the User Interface Computer 110. Next, a kernel dictionary represented by the coefficient matrix A is learned by optimizing the cost function in Equation 3. At step 210, each column of A is initialized. In some embodiments, each column of A is randomly initialized to have only one non-zero coefficient at a random position.
[28] At step 215, Equation 3 is solved for X given that A is fixed. In some embodiments, a non-negative constraint on X may be added. Since the problem becomes convex when fixing A, a solution can be obtained using any convex optimization toolbox generally known in the art. Next, at step 220, Equation 3 is solved for A when X is fixed using Equation 5. Then, at step 225, each column of the kernel dictionary is normalized to the unit norm. Steps 215 - 225 are repeated until convergence. The result of this process is a kernel dictionary represented by A.
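The alternating loop of steps 210-225 can be sketched end to end. This is a minimal illustration, not the document's implementation: the function name, the ISTA inner solver, the iteration counts, and the linear-kernel toy data are all assumptions.

```python
import numpy as np

def learn_kernel_dictionary(K, n_atoms, lam=0.05, n_outer=10, n_ista=200):
    """Sketch of process 200: alternate sparse coding and the Equation 5 update.

    K is the N x N induced kernel matrix of the training signals; the learned
    dictionary is represented implicitly by the returned coefficient matrix A.
    """
    N = K.shape[0]
    rng = np.random.default_rng(0)
    # Step 210: initialize each column of A with one non-zero at a random position.
    A = np.zeros((N, n_atoms))
    A[rng.integers(0, N, size=n_atoms), np.arange(n_atoms)] = 1.0
    for _ in range(n_outer):
        # Step 215: solve for X with A fixed (ISTA on the convex subproblem).
        G, B = A.T @ K @ A, A.T @ K
        step = 1.0 / (2.0 * np.linalg.eigvalsh(G).max() + 1e-12)
        X = np.zeros((n_atoms, N))
        for _ in range(n_ista):
            X = X - step * 2.0 * (G @ X - B)
            X = np.sign(X) * np.maximum(np.abs(X) - step * lam, 0.0)  # soft threshold
        # Step 220: dictionary update with X fixed, A = X^T (X X^T)^{-1} (Equation 5);
        # the pseudo-inverse guards against a rank-deficient X X^T.
        A = X.T @ np.linalg.pinv(X @ X.T)
        # Step 225: normalize each atom to unit norm in the feature space,
        # using ||Phi(Y) a_j||^2 = a_j^T K a_j.
        norms = np.sqrt(np.maximum(np.einsum('ji,jk,ki->i', A, K, A), 1e-12))
        A = A / norms
    return A

# Toy usage: 8 signals of dimension 4, linear kernel, 5 atoms.
rng = np.random.default_rng(2)
Yt = rng.standard_normal((4, 8))
A_learned = learn_kernel_dictionary(Yt.T @ Yt, n_atoms=5)
```

Note that the atom normalization also happens entirely in the kernel domain: the feature-space norm of an atom Φ(Y)a_j is computed as a_jᵀKa_j, without ever forming Φ(Y).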
[29] FIG. 3 illustrates a process 300 for classification using non-linear dictionaries, as may be implemented by the Classification Component 115B in some embodiments. At step 305, a test signal z, a constant coefficient λ, and a non-linear kernel function K are received, for example, from the Imaging Device 105 and the User Interface Computer 110. Additionally, the dictionary matrix A generated for each class by the Dictionary Learning Component 115A is assembled.
[30] Next, the signal z is classified into one of the classes. At step 310, the sparse coefficients for each class, x_i, are found by optimizing Equation 3 with A replaced by the class-specific dictionary A_i. At step 315, the reconstruction errors for all classes are determined (i.e., how well the dictionary of the i-th class reconstructs the test signal z). These errors may be calculated according to the following equation:

r_i = ‖Φ(z) − Φ(Y_i)A_i x_i‖₂    (6)
Then, at step 320, the label of z is assigned to be the label of the class with the smallest reconstruction error.
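Steps 310-320 can be sketched as follows, again working purely through kernel evaluations: the squared version of Equation 6 expands as κ(z,z) − 2 k_zᵀA_i x_i + x_iᵀA_iᵀK(Y_i,Y_i)A_i x_i, where k_z collects κ(y_j, z). A Gaussian kernel, the function names, λ, and the toy data below are illustrative assumptions, not details from the document.

```python
import numpy as np

def gk(U, V):
    """Gaussian kernel matrix: gk(U, V)[i, j] = exp(-||u_i - v_j||^2 / 2)."""
    sq = (np.sum(U**2, axis=0)[:, None]
          + np.sum(V**2, axis=0)[None, :]
          - 2.0 * U.T @ V)
    return np.exp(-np.maximum(sq, 0.0) / 2.0)

def classify(z, class_data, class_dicts, lam=0.05, n_ista=300):
    """Process 300 sketch: label z by the smallest kernel-space reconstruction error."""
    best_label, best_err = None, np.inf
    for name, Yc in class_data.items():
        Ac = class_dicts[name]
        Kc = gk(Yc, Yc)                      # K(Y_i, Y_i)
        kz = gk(Yc, z[:, None]).ravel()      # kappa(y_j, z) for each training signal
        # Step 310: sparse code of z for this class via ISTA
        # (a clip at zero here would give the non-negative variant).
        G, b = Ac.T @ Kc @ Ac, Ac.T @ kz
        step = 1.0 / (2.0 * np.linalg.eigvalsh(G).max() + 1e-12)
        x = np.zeros(Ac.shape[1])
        for _ in range(n_ista):
            x = x - step * 2.0 * (G @ x - b)
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
        # Step 315: squared reconstruction error (monotone in Equation 6's norm);
        # kappa(z, z) = 1 for this Gaussian kernel.
        err = 1.0 - 2.0 * kz @ (Ac @ x) + x @ G @ x
        # Step 320: keep the class with the smallest reconstruction error.
        if err < best_err:
            best_label, best_err = name, err
    return best_label

# Toy usage: two well-separated classes; each atom is one training signal (A = I).
Ya = np.array([[0.0, 0.2, 0.1],
               [0.0, 0.1, 0.2]])
data = {'a': Ya, 'b': Ya + 5.0}
dicts = {'a': np.eye(3), 'b': np.eye(3)}
label = classify(np.array([0.05, 0.05]), data, dicts)
```

Comparing squared errors is equivalent to comparing the norms in Equation 6 for the purpose of step 320, since both are non-negative and the square is monotone.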
[31] FIG. 4 provides additional detail on a method 400 of medical pattern classification using non-linear sparse representations, as applied to the classification of MRI images, according to some embodiments. The method can be divided into two general operations:
creating a dictionary using training data, and using the dictionary to perform medical pattern classification. Starting at step 405, a set of training data is received. This training data comprises a labeled set of images Y acquired using an MRI imaging device (e.g., Imaging Device 105 in FIG. 1). In some embodiments, dictionaries are generated for various imaging modalities; thus, later during the classification step, the system is flexible enough to respond to any type of test image. The labels applied to each respective image are used to categorize the image into a class. The class types will vary, for example, based on the portion of anatomy which was imaged and the pathological features being classified. In some cases, a raw labeling strategy may be employed; for example, a 3D MRI image of a brain may be labeled as "abnormal" or "normal." In other embodiments, a more detailed labeling strategy may be used. Continuing with the previous example, the brain image may be labeled as having markers indicative of Alzheimer's disease. It should be noted that the robustness of the classification will directly depend on the robustness of the labels. Thus, in some embodiments, once the training images are received at step 405, they may be manually labeled with more detailed information to allow for more comprehensive learning.
[32] Continuing with reference to FIG. 4, at step 410, each labeled image received in the training set is vectorized. Vectorization may be performed, for example, by reshaping the multidimensional signal of the image data into a single-dimensional vector. In some embodiments, if the size of the image is greater than a predefined threshold (e.g., set by the user), the image data may be cropped or scaled prior to vectorization to reduce its size and focus on an area of interest. Once each image has been vectorized, at step 415, non-linear dictionaries Φ(D) are learned from Φ(Y) for each class using the set of training examples. Optionally, a non-negative constraint may be incorporated during the learning of D. It should be noted that, if the dictionary is learned with a non-negative constraint, a non-negative constraint should also be used at test time, as described below.
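A minimal sketch of the crop-then-vectorize step above; the `roi` tuple, the `max_side` threshold, and the centered-crop fallback are illustrative assumptions rather than parameters named in the document:

```python
import numpy as np

def vectorize(image, roi=None, max_side=64):
    """Step 410 sketch: optionally crop to a region of interest, then flatten.

    roi is a hypothetical (row, col, height, width) tuple; max_side stands in
    for the user-set size threshold above which the image is cropped first.
    """
    if roi is not None:
        r, c, h, w = roi
        image = image[r:r + h, c:c + w]          # keep only the region of interest
    elif max(image.shape) > max_side:
        # No ROI supplied: fall back to a centered crop of max_side x max_side.
        r0 = (image.shape[0] - min(image.shape[0], max_side)) // 2
        c0 = (image.shape[1] - min(image.shape[1], max_side)) // 2
        image = image[r0:r0 + max_side, c0:c0 + max_side]
    return image.reshape(-1)                     # single-dimensional vector

img = np.zeros((128, 128))
y = vectorize(img)                               # centered 64 x 64 crop, flattened
```

The flattened vectors then become the columns of the training matrix Y used in the dictionary learning of step 415.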
[33] At steps 420 and 425, a test image y_test is received and vectorized, for example, in a manner similar to that discussed above with respect to steps 405 and 410. Then, at step 430, non-linear sparse coding is performed by optimizing the following equation for each class:

min_{x_test} ‖Φ(y_test) − Φ(D)x_test‖²₂ + λ‖x_test‖₁
In some embodiments, a non-negative constraint may be applied to x_test if such a constraint was used during training at step 415. This equation may be optimized via the optimization process described above with respect to FIG. 3. At step 435, the reconstruction error r is determined for each class according to the following equation:

r = ‖Φ(y_test) − Φ(D)x_test‖₂
Then at step 440, a label is assigned from the class with the smallest reconstruction error determined at step 435. In some embodiments, the label is presented along with the test image on a display to allow for diagnosis, for example, in clinical settings.
[34] To illustrate the medical classification technique described herein, a sparse representation was learned for 2D MR apex images. There were 5970 images (from 138 patients) for training and 1991 images (from 46 patients) for testing. Images were 32×32 pixels in resolution. In addition, the images were uniformly rotated to generate 10 samples from each positive sample; the negative samples were not rotated, but were sampled more heavily to match the number of positive samples. In total, there were 119,400 training samples. A dictionary of 5000 atoms was learned for each class (i.e., positive and negative) with the sparse regularization set to 0.3. Given a new test sample, sparse coding was performed on all dictionaries and their corresponding residual errors were computed. The sample was assigned the label of the class with the smallest residual error. Note that, in some embodiments, residuals may be combined with sparse codes to improve performance. The table 500 provided in FIG. 5 shows the classification accuracy comparison between the different methods. It is interesting that the sparse method, despite being generative in nature, outperforms discriminative approaches such as the Support Vector Machine (SVM) by a significant margin. FIG. 6 provides an illustration of some of the dictionary atoms 600 learned from the data, which clearly capture meaningful structures of the apex images.
[35] Additionally, a sparse representation based classification approach was evaluated on a more challenging dataset. This dataset contained more than 120 patients' records with both continuous and categorical variables. The goal was to predict the risk of restenosis based on the patient's health condition and medical intervention indicated in his/her record. A successful solution to this problem, i.e. higher classification accuracy, would enable doctors to design more personalized treatments for their patients. 4000 samples in the datasets were split equally for training and testing. FIG. 7 provides a table 700 showing a comparison of classification accuracy between different approaches. It can be noticed that the sparse classification with a non-negative constraint on sparse coefficients outperforms discriminative methods. Both experiments on the apex dataset and the restenosis dataset clearly demonstrate the advantage of our sparse representation based classification approach.
[36] FIG. 8 illustrates an exemplary computing environment 800 within which embodiments of the invention may be implemented. For example, this computing environment 800 may be used to implement the processes 200, 300, and 400 described in FIGS. 2-4. In some embodiments, the computing environment 800 may be used to implement one or more of the components illustrated in the system 100 of FIG. 1. The computing environment 800 may include computer system 810, which is one example of a computing system upon which embodiments of the invention may be implemented. Computers and computing environments, such as computer system 810 and computing environment 800, are known to those of skill in the art and thus are described only briefly here.
[37] As shown in FIG. 8, the computer system 810 may include a communication mechanism such as a bus 821 or other communication mechanism for communicating information within the computer system 810. The computer system 810 further includes one or more processors 820 coupled with the bus 821 for processing the information. The processors 820 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art.
[38] The computer system 810 also includes a system memory 830 coupled to the bus 821 for storing information and instructions to be executed by processors 820. The system memory 830 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 831 and/or random access memory (RAM) 832. The system memory RAM 832 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The system memory ROM 831 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 830 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 820. A basic input/output system (BIOS) 833 containing the basic routines that help to transfer information between elements within computer system 810, such as during start-up, may be stored in ROM 831. RAM 832 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 820. System memory 830 may additionally include, for example, operating system 834, application programs 835, other program modules 836 and program data 837.
[39] The computer system 810 also includes a disk controller 840 coupled to the bus 821 to control one or more storage devices for storing information and instructions, such as a hard
disk 841 and a removable media drive 842 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive). The storage devices may be added to the computer system 810 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or Fire Wire).
[40] The computer system 810 may also include a display controller 865 coupled to the bus 821 to control a display 866, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes an input interface 860 and one or more input devices, such as a keyboard 862 and a pointing device 861, for interacting with a computer user and providing information to the processor 820. The pointing device 861, for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor 820 and for controlling cursor movement on the display 866. The display 866 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 861.
[41] The computer system 810 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 820 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 830. Such instructions may be read into the system memory 830 from another computer readable medium, such as a hard disk 841 or a removable media drive 842. The hard disk 841 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security. The processors 820 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 830. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
[42] As stated above, the computer system 810 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term "computer readable medium" as used herein refers to any medium that participates in
providing instructions to the processor 820 for execution. A computer readable medium may take many forms including, but not limited to, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 841 or removable media drive 842. Non-limiting examples of volatile media include dynamic memory, such as system memory 830. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 821. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
[43] The computing environment 800 may further include the computer system 810 operating in a networked environment using logical connections to one or more remote computers, such as remote computer 880. Remote computer 880 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 810. When used in a networking environment, computer system 810 may include modem 872 for establishing communications over a network 871 , such as the Internet. Modem 872 may be connected to bus 821 via user network interface 870, or via another appropriate mechanism.
[44] Network 871 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a
metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 810 and other computers (e.g., remote computer 880). The network 871 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11 or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 871.
[45] The embodiments of the present disclosure may be implemented with any combination of hardware and software. In addition, the embodiments of the present disclosure may be included in an article of manufacture (e.g., one or more computer program products) having, for example, computer-readable, non-transitory media. The media has embodied therein, for instance, computer readable program code for providing and facilitating the mechanisms of the embodiments of the present disclosure. The article of manufacture can be included as part of a computer system or sold separately.
[46] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and
embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
[47] An executable application, as used herein, comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
[48] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In
this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
[49] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity. Also, while some method steps are described as separate steps for ease of understanding, any such steps should not be construed as necessarily distinct nor order dependent in their performance.
[50] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be
implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for."
Claims
1. A method of classifying signals using non-linear sparse representations, the method comprising: learning a plurality of non-linear dictionaries based on a plurality of training signals, each respective non-linear dictionary corresponding to one of a plurality of class labels; performing a non-linear sparse coding process on a test signal for each of the plurality of non-linear dictionaries, thereby associating each of the plurality of non-linear dictionaries with a distinct sparse coding of the test signal; for each respective non-linear dictionary included in the plurality of non-linear dictionaries, measuring a reconstruction error using the test signal and the distinct sparse coding corresponding to the respective non-linear dictionary; identifying a particular non-linear dictionary corresponding to a smallest value for the reconstruction error among the plurality of non-linear dictionaries; and assigning a class label corresponding to the particular non-linear dictionary to the test signal.
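For illustration only, the classification flow recited in claim 1 — one dictionary per class, non-linear sparse coding of the test signal, and label assignment by smallest reconstruction error — can be sketched as follows. The polynomial feature lift `phi`, the ISTA sparse coder, and all parameter values here are hypothetical stand-ins chosen for brevity; the claims do not prescribe these particular choices.

```python
import numpy as np

def phi(x):
    # Hypothetical non-linear feature map (a simple polynomial lift);
    # the claimed method is more general, e.g. kernel-based dictionaries.
    return np.concatenate([x, x ** 2])

def sparse_code(D, y, lam=0.1, n_iter=200):
    # Plain ISTA for min_a (1/2)||y - D a||^2 + lam*||a||_1,
    # standing in for the claimed non-linear sparse coding process.
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - y) / L            # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def classify(dictionaries, labels, x):
    # Assign the label whose dictionary reconstructs phi(x) best.
    y = phi(x)
    errors = []
    for D in dictionaries:
        a = sparse_code(D, y)
        errors.append(np.linalg.norm(y - D @ a))  # reconstruction error
    return labels[int(np.argmin(errors))]
```

In this sketch each dictionary's columns would be learned from training signals of the corresponding class, and the test signal receives the label of the dictionary that reconstructs its lifted representation with the smallest residual.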
2. The method of claim 1, further comprising: cropping a subset of the plurality of training signals prior to building the plurality of non-linear dictionaries.
3. The method of claim 2, wherein each respective training signal included in the subset of the plurality of training signals is cropped by: identifying a region of interest in the respective training signal; and discarding portions of the respective training signal outside the region of interest.
4. The method of claim 3, wherein the plurality of training signals comprise a plurality of anatomical images and the method further comprises: identifying the region of interest in the respective training signal based on a user-supplied indication of an anatomical area of interest.
5. The method of claim 1, wherein a non-negative constraint is applied to the plurality of non-linear dictionaries during learning.
6. The method of claim 5, wherein the non-negative constraint is applied to the distinct sparse coding of the test signal associated with each of the plurality of non-linear dictionaries during the non-linear sparse coding process.
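Claims 5 and 6 add non-negativity to both the dictionaries and the sparse codes. For the coding step, one possible (hypothetical) realization is a projected ISTA iteration: on the non-negative orthant the l1 penalty is linear, so the proximal step reduces to a shifted clamp at zero. This is an illustrative sketch, not the optimizer the claims require.

```python
import numpy as np

def nonneg_sparse_code(D, y, lam=0.05, n_iter=300):
    # Non-negative ISTA for min_a (1/2)||y - D a||^2 + lam*sum(a), a >= 0.
    # Because a >= 0, the l1 penalty is linear and its proximal operator
    # is a shifted clamp at zero (one way to realize the constraint).
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = np.maximum(a - (grad + lam) / L, 0.0)  # step, then clamp
    return a
```

With the dictionary fixed, every iterate stays in the non-negative orthant, so the returned code is both sparse and non-negative.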
7. The method of claim 1, wherein the plurality of training signals and the test signal each comprise a k-space dataset acquired using magnetic resonance imaging.
8. The method of claim 7, wherein the plurality of class labels comprises an indication of disease present in an anatomical area of interest depicted in the test signal.
9. The method of claim 1, further comprising: displaying an image corresponding to the test signal with an indication of the class label.
10. A method of classifying signals using non-linear sparse representations, the method comprising: receiving a plurality of non-linear dictionaries, each respective non-linear dictionary corresponding to one of a plurality of class labels; acquiring a test image dataset of a subject using a magnetic resonance imaging device; performing a non-linear sparse coding process on the test image dataset for each of the plurality of non-linear dictionaries, thereby associating each of the plurality of non-linear dictionaries with a distinct sparse coding of the test image dataset;
for each respective non-linear dictionary included in the plurality of non-linear dictionaries, measuring a reconstruction error using the test image dataset and the distinct sparse coding corresponding to the respective non-linear dictionary; identifying a particular non-linear dictionary corresponding to a smallest value for the reconstruction error among the plurality of non-linear dictionaries; and providing a clinical diagnosis for the subject based on a particular class label, corresponding to the particular non-linear dictionary, assigned to the test image dataset.
11. The method of claim 10, further comprising: using an optimization process to learn the plurality of non-linear dictionaries based on a plurality of training images.
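Claim 11's optimization process is not pinned to a specific algorithm. One classical choice that could serve as a mental model is MOD-style alternating minimization: sparse-code the training set, then solve a regularized least-squares dictionary update and renormalize the atoms. Everything below (ISTA as the coder, the update rule, the parameter values) is an assumption made for illustration, not the claimed procedure.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    # Soft-threshold gradient steps for min_a (1/2)||y - D a||^2 + lam*||a||_1.
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - y) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a

def learn_dictionary(Y, n_atoms, n_outer=10, lam=0.1, seed=0):
    # Alternate sparse coding and a least-squares dictionary update
    # (MOD-style); atoms are renormalized to unit length each round.
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        A = np.stack([ista(D, y, lam) for y in Y.T], axis=1)  # codes, one per column of Y
        G = A @ A.T + 1e-8 * np.eye(n_atoms)  # regularized Gram matrix
        D = Y @ A.T @ np.linalg.inv(G)        # least-squares dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D
```

In the claimed method one such dictionary would be learned per class label from that class's training images; the non-negative variants of claims 15 and 16 would constrain both `A` and `D` during these updates.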
12. The method of claim 11, further comprising: cropping a subset of the plurality of training images prior to using the optimization process.
13. The method of claim 12, wherein each respective training image included in the subset of the plurality of training images is cropped by: identifying a region of interest in the respective training image; and discarding portions of the respective training image outside the region of interest.
14. The method of claim 13, wherein the method further comprises: identifying the region of interest in the respective training image based on a user-supplied indication of an anatomical area of interest.
15. The method of claim 11, wherein the optimization process applies a non-negative constraint to the plurality of non-linear dictionaries during learning.
16. The method of claim 15, wherein the non-negative constraint is applied to the distinct sparse coding of the test image dataset associated with each of the plurality of non-linear dictionaries during the non-linear sparse coding process.
17. The method of claim 16, further comprising: displaying the test image dataset simultaneously with the clinical diagnosis.
18. A system for classifying image data for clinical diagnosis, the system comprising: an imaging device configured to acquire a test image dataset of a subject; and an image processing computer configured to: receive a plurality of non-linear dictionaries, each respective non-linear dictionary corresponding to one of a plurality of class labels, perform a non-linear sparse coding process on the test image dataset for each of the plurality of non-linear dictionaries, thereby associating each of the plurality of non-linear dictionaries with a distinct sparse coding of the test image dataset, for each respective non-linear dictionary included in the plurality of non-linear dictionaries, measure a reconstruction error using the test image dataset and the distinct sparse coding corresponding to the respective non-linear dictionary, identify a particular non-linear dictionary corresponding to a smallest value for the reconstruction error among the plurality of non-linear dictionaries, and generate a clinical diagnosis for the subject based on a particular class label, corresponding to the particular non-linear dictionary, assigned to the test image dataset; and a display configured to present the clinical diagnosis for the subject.
19. The system of claim 18, wherein the image processing computer is further configured to perform an optimization process to learn the plurality of non-linear dictionaries based on a plurality of training images.
20. The system of claim 19, wherein the image processing computer is further configured to apply a non-negative constraint to the non-linear sparse coding process and the optimization process.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15731772.8A EP3304425A1 (en) | 2015-06-04 | 2015-06-04 | Medical pattern classification using non-linear and nonnegative sparse representations |
US15/563,970 US10410093B2 (en) | 2015-06-04 | 2015-06-04 | Medical pattern classification using non-linear and nonnegative sparse representations |
CN201580080661.2A CN107667381B (en) | 2015-06-04 | 2015-06-04 | Medical mode classification using non-linear and non-negative sparse representations |
PCT/US2015/034097 WO2016195683A1 (en) | 2015-06-04 | 2015-06-04 | Medical pattern classification using non-linear and nonnegative sparse representations |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2015/034097 WO2016195683A1 (en) | 2015-06-04 | 2015-06-04 | Medical pattern classification using non-linear and nonnegative sparse representations |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016195683A1 true WO2016195683A1 (en) | 2016-12-08 |
Family
ID=53488452
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2015/034097 WO2016195683A1 (en) | 2015-06-04 | 2015-06-04 | Medical pattern classification using non-linear and nonnegative sparse representations |
Country Status (4)
Country | Link |
---|---|
US (1) | US10410093B2 (en) |
EP (1) | EP3304425A1 (en) |
CN (1) | CN107667381B (en) |
WO (1) | WO2016195683A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803105A (en) * | 2017-02-09 | 2017-06-06 | 北京工业大学 | A kind of image classification method based on rarefaction representation dictionary learning |
CN107274462A (en) * | 2017-06-27 | 2017-10-20 | 哈尔滨理工大学 | The many dictionary learning MR image reconstruction methods of classification based on entropy and geometric direction |
CN110033001A (en) * | 2019-04-17 | 2019-07-19 | 华夏天信(北京)智能低碳技术研究院有限公司 | Mine leather belt coal piling detection method based on sparse dictionary study |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11586905B2 (en) * | 2017-10-11 | 2023-02-21 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems and methods for customizing kernel machines with deep neural networks |
CN108694715A (en) * | 2018-05-15 | 2018-10-23 | 清华大学 | One camera RGB-NIR imaging systems based on convolution sparse coding |
CN111105420A (en) * | 2019-10-22 | 2020-05-05 | 湖北工业大学 | Multi-map label fusion method based on map combination information sparse representation |
CN111860356B (en) * | 2020-07-23 | 2022-07-01 | 中国电子科技集团公司第五十四研究所 | Polarization SAR image classification method based on nonlinear projection dictionary pair learning |
CN112100987A (en) * | 2020-09-27 | 2020-12-18 | 中国建设银行股份有限公司 | Transcoding method and device for multi-source data dictionary |
CN112464836A (en) * | 2020-12-02 | 2021-03-09 | 珠海涵辰科技有限公司 | AIS radiation source individual identification method based on sparse representation learning |
CN114241233B (en) * | 2021-11-30 | 2023-04-28 | 电子科技大学 | Nonlinear class group sparse representation real and false target one-dimensional range profile identification method |
CN114722699A (en) * | 2022-03-17 | 2022-07-08 | 清华大学 | Intelligent fault diagnosis method and system for mechanical equipment and storage medium |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7333040B1 (en) * | 2005-05-27 | 2008-02-19 | Cypress Semiconductor Corporation | Flash ADC with sparse codes matched to input noise |
US7783459B2 (en) * | 2007-02-21 | 2010-08-24 | William Marsh Rice University | Analog system for computing sparse codes |
US8374442B2 (en) * | 2008-11-19 | 2013-02-12 | Nec Laboratories America, Inc. | Linear spatial pyramid matching using sparse coding |
CN102088606B (en) * | 2011-02-28 | 2012-12-05 | 西安电子科技大学 | Sparse representation-based deblocking method |
CN103077544B (en) * | 2012-12-28 | 2016-11-16 | 深圳先进技术研究院 | Magnetic resonance parameter matching method and device and medical image processing equipment |
US9380221B2 (en) * | 2013-02-27 | 2016-06-28 | Massachusetts Institute Of Technology | Methods and apparatus for light field photography |
WO2014152919A1 (en) * | 2013-03-14 | 2014-09-25 | Arizona Board Of Regents, A Body Corporate Of The State Of Arizona For And On Behalf Of Arizona State University | Kernel sparse models for automated tumor segmentation |
CN103116762B (en) * | 2013-03-20 | 2015-10-14 | 南京大学 | A kind of image classification method based on self-modulation dictionary learning |
CN103258210B (en) * | 2013-05-27 | 2016-09-14 | 中山大学 | A kind of high-definition image classification method based on dictionary learning |
US9275078B2 (en) * | 2013-09-05 | 2016-03-01 | Ebay Inc. | Estimating depth from a single image |
US10776606B2 (en) * | 2013-09-22 | 2020-09-15 | The Regents Of The University Of California | Methods for delineating cellular regions and classifying regions of histopathology and microanatomy |
KR102307356B1 (en) * | 2014-12-11 | 2021-09-30 | 삼성전자주식회사 | Apparatus and method for computer aided diagnosis |
2015
- 2015-06-04: EP application EP15731772.8A (EP3304425A1), status: not active, Ceased
- 2015-06-04: WO application PCT/US2015/034097 (WO2016195683A1), active Application Filing
- 2015-06-04: US application 15/563,970 (US10410093B2), status: Active
- 2015-06-04: CN application 201580080661.2 (CN107667381B), status: Active
Non-Patent Citations (4)
Title |
---|
HIEN VAN NGUYEN ET AL: "Kernel dictionary learning", 2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2012) : KYOTO, JAPAN, 25 - 30 MARCH 2012 ; [PROCEEDINGS], IEEE, PISCATAWAY, NJ, 25 March 2012 (2012-03-25), pages 2021 - 2024, XP032227545, ISBN: 978-1-4673-0045-2, DOI: 10.1109/ICASSP.2012.6288305 * |
ROY SNEHASHIS ET AL: "Subject Specific Sparse Dictionary Learning for Atlas Based Brain MRI Segmentation", 14 September 2014, CORRECT SYSTEM DESIGN; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CHAM, PAGE(S) 248 - 255, ISBN: 978-3-642-27256-1, ISSN: 0302-9743, XP047298876 * |
TONG TONG ET AL: "Segmentation of MR images via discriminative dictionary learning and sparse coding: Application to hippocampus labeling", the Alzheimer's Disease Neuroimaging Initiative, 30 March 2013 (2013-03-30), XP055248152, Retrieved from the Internet <URL:https://hal.archives-ouvertes.fr/hal-00806384/document> [retrieved on 20160205], DOI: 10.1016/j.neuroimage.2013.02.069 * |
YUCHEN XIE ET AL: "On A Nonlinear Generalization of Sparse Coding and Dictionary Learning", MACHINE LEARNING : PROCEEDINGS OF THE INTERNATIONAL CONFERENCE. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 16 June 2013 (2013-06-16), pages 1480 - 1488, XP055248049, Retrieved from the Internet <URL:http://www.jmlr.org/proceedings/papers/v28/ho13a.pdf> [retrieved on 20160205] * |
Also Published As
Publication number | Publication date |
---|---|
CN107667381A (en) | 2018-02-06 |
US10410093B2 (en) | 2019-09-10 |
CN107667381B (en) | 2022-02-11 |
EP3304425A1 (en) | 2018-04-11 |
US20180137393A1 (en) | 2018-05-17 |
Legal Events
Code | Description
---|---
121 | EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15731772; Country of ref document: EP; Kind code of ref document: A1)
WWE | WIPO information: entry into national phase (Ref document number: 15563970; Country of ref document: US)
NENP | Non-entry into the national phase (Ref country code: DE)