EP3293736B1 - Tissue characterization based on machine learning in medical imaging
- Publication number
- EP3293736B1 (application EP17189559.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- features
- classifying
- tissue
- data
- values
- Prior art date
- Legal status
- Active
Links
- 238000002059 diagnostic imaging Methods 0.000 title claims description 24
- 238000012512 characterization method Methods 0.000 title claims description 19
- 238000010801 machine learning Methods 0.000 title description 6
- 238000002560 therapeutic procedure Methods 0.000 claims description 66
- 206010028980 Neoplasm Diseases 0.000 claims description 58
- 238000011282 treatment Methods 0.000 claims description 53
- 230000004044 response Effects 0.000 claims description 52
- 238000000034 method Methods 0.000 claims description 31
- 238000005259 measurement Methods 0.000 claims description 17
- 238000013135 deep learning Methods 0.000 claims description 16
- 238000012549 training Methods 0.000 claims description 15
- 239000000284 extract Substances 0.000 claims description 8
- 238000002591 computed tomography Methods 0.000 claims description 7
- 238000002600 positron emission tomography Methods 0.000 claims description 7
- 238000002203 pretreatment Methods 0.000 claims description 7
- 239000011159 matrix material Substances 0.000 claims description 6
- 230000008569 process Effects 0.000 claims description 5
- 238000002603 single-photon emission computed tomography Methods 0.000 claims description 5
- 238000002604 ultrasonography Methods 0.000 claims description 5
- 230000003211 malignant effect Effects 0.000 claims description 4
- 238000009792 diffusion process Methods 0.000 claims description 3
- 230000004083 survival effect Effects 0.000 claims description 3
- 230000009466 transformation Effects 0.000 claims description 3
- 238000004590 computer program Methods 0.000 claims 3
- 210000001519 tissue Anatomy 0.000 description 77
- 230000015654 memory Effects 0.000 description 22
- 238000003384 imaging method Methods 0.000 description 18
- 238000013459 approach Methods 0.000 description 17
- 238000012545 processing Methods 0.000 description 15
- 238000003745 diagnosis Methods 0.000 description 14
- 238000004393 prognosis Methods 0.000 description 14
- 230000003993 interaction Effects 0.000 description 6
- 230000011218 segmentation Effects 0.000 description 6
- 230000003902 lesion Effects 0.000 description 5
- 210000002307 prostate Anatomy 0.000 description 5
- 238000004458 analytical method Methods 0.000 description 4
- 238000000605 extraction Methods 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 230000005540 biological transmission Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 239000003814 drug Substances 0.000 description 3
- 238000009258 post-therapy Methods 0.000 description 3
- 206010006187 Breast cancer Diseases 0.000 description 2
- 208000026310 Breast neoplasm Diseases 0.000 description 2
- 206010016654 Fibrosis Diseases 0.000 description 2
- 208000032843 Hemorrhage Diseases 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 2
- 239000000090 biomarker Substances 0.000 description 2
- 210000004369 blood Anatomy 0.000 description 2
- 239000008280 blood Substances 0.000 description 2
- 201000011510 cancer Diseases 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 238000003759 clinical diagnosis Methods 0.000 description 2
- 239000002872 contrast media Substances 0.000 description 2
- 230000004761 fibrosis Effects 0.000 description 2
- 210000005228 liver tissue Anatomy 0.000 description 2
- 230000036210 malignancy Effects 0.000 description 2
- 230000017074 necrotic cell death Effects 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 238000011176 pooling Methods 0.000 description 2
- 239000000047 product Substances 0.000 description 2
- 238000009877 rendering Methods 0.000 description 2
- 230000007704 transition Effects 0.000 description 2
- 210000005166 vasculature Anatomy 0.000 description 2
- 239000013598 vector Substances 0.000 description 2
- 230000003213 activating effect Effects 0.000 description 1
- 230000002411 adverse Effects 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000009534 blood test Methods 0.000 description 1
- 210000000481 breast Anatomy 0.000 description 1
- 230000010109 chemoembolization Effects 0.000 description 1
- 229940044683 chemotherapy drug Drugs 0.000 description 1
- 238000004040 coloring Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000002594 fluoroscopy Methods 0.000 description 1
- 230000002068 genetic effect Effects 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 210000004185 liver Anatomy 0.000 description 1
- 239000003607 modifier Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000000877 morphologic effect Effects 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000010412 perfusion Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 210000002966 serum Anatomy 0.000 description 1
- 239000013589 supplement Substances 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/02—Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computerised tomographs
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/02—Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computerised tomographs
- A61B6/037—Emission tomography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the present embodiments relate to tissue characterization in medical imaging.
- Magnetic resonance images are widely used in medical diagnosis and therapy.
- for example, magnetic resonance is used for breast tumor diagnosis following the guidelines of the Breast Imaging-Reporting and Data System (BIRADS), which are based on clinically descriptive tags like mass (shape, margin, mass enhancement), symmetry or asymmetry, non-mass-like enhancement in an area that is not a mass (distribution modifiers, internal enhancement), kinetic curve assessment, and other findings.
- similarly for prostate, the Prostate Imaging Reporting and Data System (PIRADS) specifies the clinically descriptive tags for specific prostate regions, such as the peripheral zone, central zone, and transition zone.
- for liver tissue characterization, fibrosis staging is possible based on reading of the magnetic resonance images. Similar approaches are used in other imaging modalities, such as ultrasound, computed tomography, positron emission tomography, or single photon emission computed tomography.
- to assess therapy, multimodal magnetic resonance scans are acquired before and after therapy.
- a simple morphological (e.g., size-based) scoring is commonly performed in tumor treatment assessment, such as the Response Evaluation Criteria in Solid Tumors (RECIST) criteria.
- the assessment of treatment response is critical in determining the course of continuing treatment since chemotherapy drugs may have adverse effects on the patient.
- treatment assessment is done morphologically with tumor size. Due to this simple approach, it can take longer to determine if a treatment is succeeding.
- the decision to stop therapy may be made earlier by employing functional magnetic resonance information than by using the RECIST criteria.
- treatment effectiveness may be determined earlier by using image-based functional measurements, such as intensity based histograms of the functional measures. These histogram-based intensity values are manually analyzed in clinical practice and may not necessarily capture subtleties related to image texture and local dissimilarity that may better represent cell density, vasculature, necrosis, or hemorrhage characteristics important to clinical diagnosis.
- WO 2015/048103 A1 discloses therapy response assessment based on texture features of medical images using a machine-learnt classifier. The invention is defined by the appended claims.
- By way of introduction, the preferred embodiments described below include methods, systems, instructions, and non-transitory computer readable media for tissue characterization in medical imaging. Tissue is characterized using machine-learnt classification.
- the prognosis, diagnosis or evidence in the form of a similar case is found by machine-learnt classification from features extracted from frames of medical scan data with or without external data such as age, gender, and blood biomarkers.
- the texture or other features for tissue characterization may be learned using deep learning.
- using the features, therapy response is predicted from magnetic resonance functional measures before and after treatment in one example.
- using the machine-learnt classification, the number of measures after treatment and the time between measures may be reduced as compared to RECIST for predicting the outcome of the treatment, allowing earlier termination or alteration of the therapy.
- a method for tissue characterization in medical imaging is provided.
- a medical scanner scans a patient where the scanning provides multiple frames of data representing a tissue region of interest in the patient.
- a processor extracts values for features from the frames of data.
- a machine-learnt classifier implemented by the processor classifies a therapy response of the tissue of the tissue region from the values of the features as input to the machine-learnt classifier. The therapy response is transmitted.
- advantageously, scanning comprises scanning with a magnetic resonance scanner, computed tomography scanner, ultrasound scanner, positron emission tomography scanner, single photon emission computed tomography scanner, or combinations thereof, and/or wherein scanning comprises scanning with a magnetic resonance scanner, the frames of data comprising functional measurements by the magnetic resonance scanner.
- extracting values comprises extracting the values as scale-invariant feature transformation, histogram of oriented gradients, local binary pattern, gray-level co-occurrence matrix, or combinations thereof.
- classifying can comprise classifying from the values of the features extracted from the frames of data and values of clinical measurements for the patient.
- scanning comprises scanning at different times, different contrasts, or different types, the frames corresponding to the different times, contrasts, or types, and wherein classifying comprises classifying the therapy response based on differences in the frames.
- extracting comprises extracting the values for the features, with the features comprising deep-learnt features from deep learning.
- classifying comprises classifying by the machine-learnt classifier learnt with the deep learning.
- extracting and classifying can comprise extracting and classifying based on Siamese convolution networks.
- classifying the therapy response can comprise inferring therapy success for therapy applied to the tissue region, and/or classifying the therapy response can comprise identifying an example treatment and outcome for another patient with similar extracted values of the features.
- transmitting the therapy response comprises displaying the therapy response in a report.
- advantageously, the method further comprises identifying, by a user input device responsive to user selection, the tissue region, wherein the extracting, classifying, and transmitting are performed without user input after the identifying.
- the extracting and classifying are performed by the processor remote from the medical scanner.
- therapy response can comprise a responder classification, non-responder classification, or a prediction of survival time.
- transmitting the therapy response comprises displaying the therapy response as an overlay over images of the tissue region.
- a method for tissue characterization in medical imaging is provided.
- a medical scanner scans a patient where the scanning provides different frames of data representing different types of measurements for tissue in the patient.
- a processor extracts values for features from the frames of data.
- a machine-learnt classifier implemented by the processor classifies the tissue of the patient from the values of the features as input to the machine-learnt classifier. The tissue classification is transmitted.
- advantageously, scanning comprises scanning with a magnetic resonance scanner, the different types of measurements including an apparent diffusion coefficient.
- extracting can comprise extracting the values for a texture of the tissue represented by at least one of the frames of data.
- classifying the tissue comprises classifying a prediction of response of the tissue to therapy, classifying the tissue as benign or malignant, classifying a staging tag, classifying a similarity to a case, or combinations thereof, and/or wherein extracting comprises extracting the values for features that are deep-learnt features, and wherein classifying comprises classifying by the machine-learnt classifier being a deep-learnt classifier, the deep-learnt features and deep-learnt classifier being implemented on multiple convolution networks.
- a method for tissue characterization in medical imaging is provided. A patient is scanned with a medical scanner where the scanning provides a frame of data representing a tumor in the patient.
- a processor extracts values for deep-learnt features from the frame of data.
- a deep-machine-learnt classifier implemented by the processor classifies the tumor from the values of the features as input to a machine-learnt classifier. The classification of the tumor is transmitted.
- Preferably, the deep-learnt features and the deep-machine-learnt classifier are from a Siamese convolution network.
- a generic pipeline and user interface are capable of finding and processing complex features from medical image data.
- the first approach uses robust image textural and/or other image features extracted by the computer as a feature set for further analysis.
- the second approach uses a deep learning pipeline involving a Siamese network to automatically create and classify features in a parallel convolutional network for the same patient at two different time points and/or different types of measures.
- the deep learning network and interface are provided for image analytics for lesion and/or tissue characterization and therapy prediction.
- a series of features is extracted from multiple image contrasts and/or multiple examination time-points.
- the textural and/or other image features are used in a classifier.
- a Siamese deep-learning neural network may be used to identify the features.
- Non-image data, such as blood test results, age, gender, or blood serum biomarkers, may also be used as features.
- the extracted features are processed by a machine-learnt classifier to determine the diagnosis or prognosis, or to find similar cases.
- the images from the similar cases are obtained from a database of previously documented cases.
- the previously documented cases are used to determine the reference case or cases.
- the diagnosis, prognosis, or finding of similar cases may be performed via a cloud based service that delivers results.
- a user marks the tumor of interest in one or more images of a time series, possibly with a single click, and the computer produces the diagnosis, prognosis, or similar cases report in response.
- This clinical product and interface may minimize interaction.
- the single click per image assists in segmentation and/or more precise identification of regions. By only requiring a single click or region designation in the pre- and post-treatment images, the interaction is minimized, increasing the clinical feasibility of the approach.
- the output report may include clinical evidence, in the form of numbers (e.g., quantifying the textural features that lead to the decision) or in the form of references to previous clinical cases.
- a single-click interface determines results, allowing for a clinically feasible approach to incorporate complex image features into tumor prognosis and therapy.
- Figure 1 shows one embodiment of a flow chart of a method for tissue characterization in medical imaging.
- tissue characterization such as tumor type, tumor response to treatment, identification of similar tumors in other patients, tumor prognosis, or tumor diagnosis.
- a machine-learnt classifier is applied for tissue characterization.
- the machine-learnt classifier uses information from one or more frames of data to classify the tumor. Frames of data from different times or different types of measures may be used to classify.
- the features extracted from the frames as the information are manually defined.
- deep learning identifies the features that best characterize the tumor.
- the features learned with deep learning may be texture and/or non-texture, such as features from the frame or clinical data.
- the acts are performed in the order shown (e.g., top to bottom) or other orders. Additional, different, or fewer acts may be provided.
- the method is performed without transmitting the classification in act 20.
- the segmentation of act 14 is not performed; instead, the classification is applied to the entire frame of data.
- the medical image is a frame of data representing the patient.
- the data may be in any format. While the terms "image" and "imaging" are used, the image or imaging data may be in a format prior to actual display of the image.
- the medical image may be a plurality of scalar values representing different locations in a Cartesian or polar coordinate format different than a display format.
- the medical image may be a plurality of red, green, blue (RGB) values output to a display for generating the image in the display format.
- the medical image may not yet be a displayed image, may be a currently displayed image, or may be a previously displayed image in the display or other format.
- the image or imaging is a dataset that may be used for imaging, such as scan data representing the patient.
- magnetic resonance frames of data representing a patient are acquired.
- Magnetic resonance data is acquired by scanning with a magnetic resonance system. Using an imaging sequence, the magnetic resonance system scans the patient. Data representing an interior region of a patient is acquired. The magnetic resonance data is k-space data. Fourier analysis is performed to reconstruct the data from the k-space into a three-dimensional object or image space, providing the frame of data.
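- as a rough illustration of this reconstruction step, the sketch below applies an inverse FFT to a Cartesian k-space array with NumPy; the array shape and the `reconstruct_from_kspace` helper name are illustrative assumptions, not part of the patent.

```python
import numpy as np

def reconstruct_from_kspace(kspace):
    """Reconstruct an image-space volume from fully sampled Cartesian k-space.

    kspace: complex ndarray (2D slice or 3D volume).
    Returns the magnitude image after the inverse Fourier transform.
    """
    # Move the zero-frequency sample to the array origin, invert the
    # transform, then re-center the object in image space.
    image = np.fft.fftshift(np.fft.ifftn(np.fft.ifftshift(kspace)))
    return np.abs(image)

# e.g., a synthetic 64x64x64 k-space volume
kspace = np.random.randn(64, 64, 64) + 1j * np.random.randn(64, 64, 64)
frame = reconstruct_from_kspace(kspace)
```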
- x-ray, computed tomography, ultrasound, positron emission tomography, single photon emission computed tomography, or other medical imaging scanner scans the patient.
- Combination scanners such as magnetic resonance and positron emission tomography or computed tomography and positron emission tomography systems may be used to scan.
- the scan results in a frame of data acquired by the medical imaging scanner and provided directly for further processing or stored for subsequent access and processing.
- the frame of data represents a one, two, or three-dimensional region of the patient.
- the multi-dimensional frame of data represents an area (e.g., slice) or volume of the patient. Values are provided for each of multiple locations distributed in two or three dimensions.
- a tumor or suspicious tissue within the patient is represented by the values of the frame of data.
- the frame of data represents the scan region at a given time or period.
- the dataset may represent the area or volume over time, such as providing a 4D representation of the patient.
- the different frames of data may represent the same or overlapping region of the patient at different times. For example, one or more frames of data represent the patient prior to treatment, and one or more frames of data represent the patient after treatment or interleaved with on-going treatment.
- the different frames may represent different contrasts.
- different types of contrast agents are injected or provided in the patient.
- different frames of data representing the different contrast agents are provided.
- the different frames may represent different types of measures (multi-modal or multi-parametric frames of data).
- different types of measurements of the tissue may be performed. For example in magnetic resonance, both anatomical and functional measurements are performed. As another example in magnetic resonance, different anatomical or different functional measurements are performed.
- T1 and T2 are two examples.
- apparent diffusion coefficient (ADC), venous perfusions, and high B-value are three examples.
- a T2 frame of data and an ADC frame of data are computed (e.g., different b-value images are acquired to compute a frame of ADC data). Frames of data from different types of scanners may be used.
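- for concreteness, one common way to derive an ADC frame from two diffusion-weighted acquisitions follows the mono-exponential model S_b = S_0·exp(-b·ADC); the sketch below is a minimal NumPy version, with the frame names and b-values as hypothetical examples.

```python
import numpy as np

def compute_adc(s_low, s_high, b_diff):
    """Per-voxel ADC map from two diffusion-weighted frames.

    s_low:  frame at the lower b-value (e.g., b = 0)
    s_high: frame at the higher b-value
    b_diff: difference between the two b-values, in s/mm^2
    """
    eps = 1e-6  # guard against log(0) and division by zero in background
    return np.log((s_low + eps) / (s_high + eps)) / b_diff

# hypothetical b = 0 and b = 800 s/mm^2 acquisitions
adc_frame = compute_adc(frame_b0, frame_b800, b_diff=800.0)
```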
- a combination of different times and types of measures may be used.
- one set of frames of data represents different types of measures (e.g., T2 and ADC) for pre-treatment
- another set of frames of data represents the same different types of measures (e.g., T2 and ADC) for post-treatment.
- Multi-dimensional and multi-modal image data is provided for each time.
- a single frame of data representing just one type of measure for one time is acquired by scanning.
- the frames are spatially registered.
- the registration removes translation, rotation, and/or scaling between the frames. Alternatively, registration is not used.
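- a minimal registration sketch, assuming SimpleITK and two frames stored as NIfTI volumes (the file names are placeholders): a rigid transform is estimated with mutual information and the moving frame is resampled onto the fixed frame's grid.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("pre_treatment.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("post_treatment.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
# resample the moving frame onto the fixed frame's grid, removing
# translation and rotation between the acquisitions
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```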
- the tissue of interest is identified.
- the tissue of interest is identified as a region around and/or including the tumor. For example, a box or other shape that includes the tumor is located.
- the tissue of interest is identified as tissue representing the tumor, and the identified tumor tissue is segmented for further analysis.
- the identification is performed by the user.
- the user, using a user input (e.g., mouse, trackball, keyboard, buttons, sliders, and/or touch screen), identifies the tissue region to be used for feature extraction, classification, and transmission of the classification results. For example, the user selects a center of the tumor about which a processor places a box or other region designator. The user may size or position a region designator with or without center selection. In other approaches, the user indicates a location on the suspicious tissue, and the processor segments the suspicious tissue based on the user placed seed. Alternatively, a processor automatically identifies the tissue region of interest without user selection.
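- a sketch of the region-designation step under simple assumptions: the click arrives as voxel coordinates and the processor places a fixed-size box around it, clipped at the frame borders (the function and variable names are illustrative).

```python
import numpy as np

def roi_from_click(frame, center, half_size=16):
    """Crop a box of edge 2*half_size around a user-selected point.

    frame:  2D or 3D ndarray of scan data
    center: click location, one voxel index per frame dimension
    """
    slices = tuple(
        slice(max(c - half_size, 0), min(c + half_size, dim))
        for c, dim in zip(center, frame.shape))
    return frame[slices]

# e.g., a 32x32 region around a click at (row=120, col=88) in a 2D slice
roi = roi_from_click(slice_2d, center=(120, 88))
```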
- Figure 2 shows an example graphic user interface or approach with minimal user interaction for tissue characterization.
- the classification is for reporting on therapy.
- Two images corresponding to two frames of data at two times are provided, one pre-treatment and one post-treatment.
- a bounding box centered at the user selected point is placed, designating the tissue region of interest.
- in response to the selection, the computer returns the predicted success of treatment in an automated report.
- the report may also contain diagnosis of similar cases that have been previously reviewed in a database as references.
- the treatment outcome or other classification is determined via a single-click on each image or frame of data without the need to perform any manual and/or automatic segmentation.
- a "single-click" or simple user input is provided for tumor diagnosis, treatment planning, and/or treatment response assessment.
- the same approach and technology can be used in any medical imaging product that examines tumor prognosis and treatment based on data acquired from a scanner.
- the tumor tissue with or without surrounding tissue is segmented.
- the data is extracted from the frame for further processing.
- the pixel or voxel values for the region of interest are isolated. Alternatively, the locations in the region are flagged or marked without being separated from other locations in the frame of data.
- a processor extracts values for features.
- the processor is part of the medical imaging scanner, a separate workstation, or a server.
- the processor extracting the values is a server remote from the medical scanner, such as a server in the cloud.
- a manufacturer of the medical scanner or a third party provides classification as a service, so the frames of data are communicated through a computer network to the server for extraction of the features and classification from the extracted values.
- Values for any number of features are extracted from the frame or frames of data.
- the values for a texture of the tissue represented by at least one of the frames of data are extracted.
- the texture of the tissue is represented by the measures of the frame of data.
- the extraction of the values for each feature is performed for the tissue region of interest, avoiding application to other tissue outside the region of interest. Alternatively, the values for other regions outside the region of interest are extracted.
- Each feature defines a kernel for convolution with the data.
- the result of the convolution at a given location is a value of the feature. Given one feature, the values of that feature at different locations are calculated.
- Features for other texture information than convolution may be used, such as identifying a maximum or minimum. Other features than texture information may be used.
- the features are manually designed.
- the feature or features to be used are pre-determined based on a programmer's experience or testing.
- Example features include scale-invariant feature transformation, histogram of oriented gradients, local binary pattern, gray-level co-occurrence matrix, Haar wavelets, steerable filters, or combinations thereof.
- a feature extraction module computes features from images to better capture essential subtleties related to cell density, vasculature, necrosis, and/or hemorrhage that are important to clinical diagnosis or prognosis of tissue.
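- the sketch below computes two of the listed descriptors (gray-level co-occurrence matrix statistics and a local binary pattern histogram) for one region of interest, assuming scikit-image >= 0.19; the quantization to 32 gray levels and the chosen offsets are illustrative, not prescribed by the patent.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(roi, levels=32):
    """Hand-crafted texture feature vector for a 2D region of interest."""
    # quantize intensities to a small number of gray levels for the GLCM
    img = (roi - roi.min()) / (np.ptp(roi) + 1e-6)
    img = (img * (levels - 1)).astype(np.uint8)

    # co-occurrence statistics over several distances and orientations
    glcm = graycomatrix(img, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    glcm_stats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity",
                                      "energy", "correlation")])

    # rotation-invariant uniform LBP histogram (values 0..P+1 for P=8)
    lbp = local_binary_pattern(img, P=8, R=1.0, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.hstack([glcm_stats, lbp_hist])
```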
- Figure 3 shows an example using manually programmed features.
- One or more images are acquired from memory or directly from a scanner.
- the textural features are extracted from the images.
- the values of the features are used for classification to provide outcomes.
- deep-learnt features are used.
- the values are extracted from frames of data for features learned from machine learning.
- Deep machine learning learns the features represented in the training data, as well as training the classifier, rather than just training the classifier from manually designated features.
- the extracting and classifying of acts 16 and 18 are based on a twin or Siamese convolution network.
- Figure 4 shows an example for deep learning.
- the twin or Siamese convolutional networks are trained to extract features from given multi-modal, multi-dimensional images, IM1 and IM2, such as multi-modal frames representing pre and post treatment, respectively.
- the relevant features are automatically determined as part of training. This ability allows for the generic training on arbitrary data (i.e., training data with known outcomes) that can internally determine features, such as textures.
- the Siamese convolution networks are linked so that the same weights (W) for the parameters defining the networks are used in both branches.
- the Siamese network uses two input images (e.g., one branch for pre-treatment and another branch for post-treatment). Kernel weights for convolution are learnt using both branches of the network and are optimized to provide the values Gw.
- the Siamese deep learning network is also trained to classify small changes, large changes, or the absence of change between time points.
- the definition of such features is based on a specific loss function Ew that minimizes the difference between time points when there are no or only small changes and maximizes the difference when there are large changes between them.
- the features indicating relevant differences between the two inputs are learned from the training data. By training the network with labeled outcomes, the network learns what features are relevant or can be ignored for determining the prognosis, diagnosis, or finding similar cases.
- low-level features and invariants are learned by the convolutional networks that have exactly the same parameters. These networks determine the core feature set that differs between the two input datasets based on feedback during learning of the difference network.
- FIG. 5 shows an example convolution layer-based network for learning to extract features.
- Each branch or twin has the same layers or network structure.
- the networks themselves are shown as layers of convolutional, sub-sampling (e.g., max pooling), and fully connected layers.
- the fully connected layers (FC) in Figure 5 operate to fully connect the features as limited by the convolution layer (CL) after maximum pooling.
- Other features may be added to the FC layers, such as non-imaging or clinical information. Any combination of layers may be provided. Additional, different, or fewer layers may be provided.
- a fully connected network is used instead of a convolution network.
- the two parallel networks process the pre and post therapy data, respectively.
- the networks are trained with exactly the same parameters in the Siamese network.
- the features that optimally discriminate based on the loss function, Ew, are automatically developed during training of the network.
- the multi-modal input frames are for T2 and ADC for each time.
- the features related to textural information for the T2 image and local deviations in the ADC image highlighting the differences from pre and post treatment are learned. These learnt features are then applied to frames of data for a specific patient, resulting in Gw values for the specific patient.
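- the sketch below is one possible reading of this architecture in PyTorch: a single convolution + max-pooling + fully connected branch applied to both time points (so the weights W are shared), a distance Gw between the two embeddings, and a contrastive loss in the role of Ew. Patch size, channel count (two channels standing in for T2 and ADC), and layer widths are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """One twin: convolution, max-pooling, then fully connected layers."""
    def __init__(self, in_channels=2, embed_dim=64):  # 2 channels: T2 + ADC
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, embed_dim))

    def forward(self, x):
        return self.fc(self.conv(x))

class Siamese(nn.Module):
    """Both inputs pass through the same branch, so the weights W are shared."""
    def __init__(self):
        super().__init__()
        self.branch = Branch()

    def forward(self, pre, post):
        # Gw: learned distance between pre- and post-treatment embeddings
        return F.pairwise_distance(self.branch(pre), self.branch(post))

def contrastive_loss(dist, label, margin=1.0):
    """Ew-style loss: pulls 'no/small change' pairs (label 0) together and
    pushes 'large change' pairs (label 1) at least `margin` apart."""
    return torch.mean((1 - label) * dist.pow(2)
                      + label * F.relu(margin - dist).pow(2))

# one training step on a batch of 32x32 multi-parametric patch pairs
model = Siamese()
pre = torch.randn(8, 2, 32, 32)    # pre-treatment patches
post = torch.randn(8, 2, 32, 32)   # post-treatment patches
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(model(pre, post), labels)
loss.backward()
```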
- the machine-learnt classifier classifies the tissue of the patient from the extracted values of the features.
- the values are input to the machine-learnt classifier implemented by the processor.
- the tissue is classified. For example, a therapy response of the tissue in the tissue region is classified from the values of the features as input to the machine-learnt classifier.
- any machine-learnt classifier may be used.
- the classifier is trained to associate the categorical labels (output) to the extracted values of one or more features.
- the machine-learning of the classifier uses training data with ground truth, such as values for features extracted from frames of data for patients with known outcomes, to learn to classify based on the input feature vector.
- the resulting machine-learnt classifier is a matrix for inputs, weighting, and combination to output a classification. Using the matrix or matrices, the processor inputs the extracted values for features and outputs the classification.
- Any machine learning or training may be used.
- a probabilistic boosting tree, support vector machine, neural network, sparse auto-encoding classifier, Bayesian network, or other now known or later developed machine learning may be used.
- Any semi-supervised, supervised, or unsupervised learning may be used. Hierarchal or other approaches may be used.
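- as one concrete instance of the training and application steps, the scikit-learn sketch below fits a support vector machine on extracted feature vectors with known outcomes; `X_train`, `y_train`, and `x_new` are hypothetical arrays of extracted values and labels, not data from the patent.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_train: one row of extracted feature values per training case
# y_train: known outcomes (e.g., 1 = responder, 0 = non-responder)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

# applying the machine-learnt classifier to a new patient's feature values
prob_responder = clf.predict_proba(x_new.reshape(1, -1))[0, 1]
```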
- the classification is by a machine-learnt classifier learnt with the deep learning. As part of identifying features that distinguish between different outcomes, the classifier is also machine learnt.
- the classifier 18 is trained to classify the tissues based on the feature values Gw, obtained from a Siamese network that is already optimized/trained to maximize differences of images from different categories and minimize differences of images from the same categories.
- the classifier categorizes the tumor from the feature values, such as classifying a type of tumor, a tumor response to therapy, or other tissue characteristic.
- the classification is based on features from Gw, which are optimized from the loss Ew.
- the Siamese network trains and/or defines kernels such that the feature vectors Gw can help discriminate for all categories.
- the classifier 18 uses Gw and then defines probabilities that two images from different time points have zero, small or large differences.
- additional information may be used for extracting and/or classifying.
- values of clinical measurements for the patient are used.
- the classifier is trained to classify based on the extracted values for the features in the frames of data as well as the additional measurements. Genetic data, blood-based diagnostics, family history, sex, weight, and/or other information are input as a feature for classification.
- the classifier is trained to classify the tumor.
- the classifier is trained to classify the tissue into one of two or more classes.
- the machine-learnt classifier classifies the tumor for that patient into one of the classes. Any of various applications may be used.
- the classifier identifies a similar case.
- the similar case includes an example treatment and outcome for another patient with a tissue region similar to the tissue region of the current patient. Any number of these reference cases may be identified. A database of possible reference cases is used. The most similar case or cases to the current patient are identified by the classifier. Using the extracted values for texture features with or without other features, the classifier identifies the class as a reference case or cases.
- decision-making is optimized by emphasizing the use of evidence from well designed and conducted research. One key component is to retrieve the evidence in the form of other cases using the current case.
- the values for the textural features are extracted and used by the classifier to retrieve the closest cases.
- These closest cases provide the evidence, such as a list of similar cases with thumbnail images of the tumors for the reference cases.
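- retrieval of the closest reference cases can be sketched as a nearest-neighbor search over the stored feature vectors; the sketch below assumes scikit-learn, with `case_features`, `query_features`, and `case_database` as hypothetical stand-ins for the database contents.

```python
from sklearn.neighbors import NearestNeighbors

# case_features: one feature vector per previously documented case
nn = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(case_features)
distances, indices = nn.kneighbors(query_features.reshape(1, -1))

# each retrieved index points at a stored case whose images, treatment
# regime, and outcome are reported as clinical evidence
similar_cases = [case_database[i] for i in indices[0]]
```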
- Another class is therapy response.
- the success or failure of therapy is predicted as a diagnosis or prognosis.
- a range providing an amount and/or probability of success or failure is output as the class. Whether the patient is likely to respond (i.e., responder), not likely to respond (i.e., non-responder), or may partially respond (i.e., partial or semi-responder) is output.
- the predicted survival time of the patient may be the output.
- Figure 7 shows two examples of classifying the tissue as successful or not successful therapy.
- the therapy response of the tumor is classified.
- transarterial chemoembolization (TACE) is used as the example therapy.
- multi-parametric (e.g., T2 and ADC) data for one time (e.g., 1 month after treatment) is used.
- the classification is applied to determine whether the tumor is responding to the treatment.
- the level or rate of response may be output, informing a decision on any continued treatment and level of treatment.
- treatment is not applied.
- the frames of data prior to treatment are used to classify whether the treatment is expected to be successful based on the extracted textural and/or other features prior to treatment.
- Figure 8A shows an example of predicting therapy outcome.
- RECIST is used, so the size change of the tumor is measured N times post-therapy. Eventually, a sufficient trend is determined by the clinician to predict the outcome.
- Figure 8B shows an alternative approach using the machine-learnt classifier. Less information is needed, such as just one set of post-therapy frames of data (e.g., from one time or appointment for scanning). With textural and/or other feature analysis and machine-learnt classification, the therapy success or failure decision may be inferred without manual perception of trends. Fewer scans and less time are needed to make therapy decisions. The success of the therapy applied to the tumor is inferred and used to optimize treatment.
- the classifier may be machine trained to classify the tumor (e.g., suspicious tissue region) as benign or malignant. Once the lesion is segmented or identified, values for textural features are computed for the lesion. The values are fed into the machine-learnt classifier for labeling of malignancy.
- the classifier may be trained to output values for staging the tumor. Using advanced tissue characterization provided by the machine-learnt classifier, the stage is output. For example, in liver tissue, the extracted textural features are used by the classifier to output a measure of fibrosis staging. In other examples, the classifier is trained to output tags used for staging, such as outputting the measures used for staging the tumor. The values for the features are used by the classifier to provide the tag or staging measure. In quantitative BIRADS for breast examination, the textural features are extracted, and then the classifier associates the categorical labels of clinically descriptive tags (e.g., measures of mass, symmetry, and non-mass-like enhancement) to the extracted features. The inferred tags are then used to manually or automatically stage the breast tumor.
- the textural features are extracted, and then the classifier associates the categorical labels of clinically descriptive tags (e.g., tags for the peripheral zone, central zone, and transition zone) to the extracted features.
- the inferred tags are then used to manually or automatically stage the prostate.
- the classifier is trained to output any information useful for diagnosis or prognosis. For example, information to enhance therapy monitoring is output. An intensity histogram, histogram of difference over time in the intensities representing the tumor, and/or a difference of histograms of intensities representing the tumor at different times are calculated and output without the classifier. The classifier supplements these or other image intensity statistics or histograms. Information derived from the textural features and/or other features is used to provide any information useful to clinicians.
- More than one machine-trained classifier may be used.
- the same or different features are used by different classifiers to output the same or different information.
- a classifier is trained for predicting therapy response, and another classifier is trained to output tags for staging.
- one classifier is trained to output different types of information, such as using a hierarchal classifier.
- Figure 7 shows four example use cases of classification using multi-parametric magnetic resonance frames of data.
- "Localize Lesion” represents identifying a tissue region of interest (ROI) by segmentation or by user region designation (e.g., act 14).
- the input to the system is a set of multi-parametric magnetic resonance images from a single time point, and the system performs one or more of three different tasks: in application #1, the task is to predict whether the tumor is benign or malignant. The outcome may be either discrete (e.g., Yes vs No) or continuous (e.g., giving an indication of the severity of the malignancy, such as a number between 0 and 1).
- the task is to predict whether the tumor is low- or high-grade.
- the grading may be discrete (e.g., Low vs High) or continuous (e.g., a number between 0 and N, where N is the highest grade possible).
- the task is to predict whether the patient responds to the therapy.
- the response may be discrete (e.g., responding vs not-responding) or continuous (a number indicating the degree of response, such as between 0 and 1).
- the input to the system is a set of multi-parametric magnetic resonance images from two or more time points, and the system performs the same task as in application #3, which is to predict the response, based on the values extracted from each of the images.
- the tissue classification is transmitted. Any of the tissue classifications output by the classifier are transmitted. Alternatively, information derived from the output of the classification is transmitted, such as a stage derived from classification of tags.
- the transmission is to a display, such as a monitor, workstation, printer, handheld, or computer.
- the transmission is to a memory, such as a database of patient records, or to a network, such as a computer network.
- the tissue classification is output to assist with prognosis, diagnosis, or evidence-based medicine. For example, a list of similar patients, including their treatment regime and outcome, is output. As another example, a predicted therapy response is output in a report for the patient.
- the tissue classification is output as text.
- An image of the tumor is annotated or labeled with alphanumeric text to indicate the classification.
- an image of the tissue is displayed, and the classification is communicated as a symbol, coloring, highlighting or other information added onto the image.
- the classification is output in a report without the image of the tumor or separated (e.g., spaced away) from the image of the tumor.
- the tissue may also be classified very locally (e.g., independent classification of every voxel).
- the resulting classification is output as a colored or highlighted overlay onto images of the tissue, visually indicating, spatially, possible regions of likely response or non-response.
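- a minimal rendering sketch with matplotlib, assuming a 2D anatomical slice `image` and a per-voxel classifier output `response_map` in [0, 1] (both hypothetical): sub-threshold voxels are masked out so only the likely-response regions are colored over the anatomy.

```python
import numpy as np
import matplotlib.pyplot as plt

plt.imshow(image, cmap="gray")
# hide voxels below the response threshold so the anatomy stays visible
masked = np.ma.masked_where(response_map < 0.5, response_map)
plt.imshow(masked, cmap="autumn", alpha=0.5, vmin=0.0, vmax=1.0)
plt.colorbar(label="predicted response probability")
plt.show()
```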
- Other information may be output as well.
- Other information includes values for the features, clinical measures, values from image processing, treatment regime, or other information (e.g., lab results).
- Figure 9 shows a system for tissue characterization in medical imaging.
- the system includes an imaging system 80, a memory 84, a user input 85, a processor 82, a display 86, a server 88, and a database 90. Additional, different, or fewer components may be provided.
- a network or network connection is provided, such as for networking with a medical imaging network or data archival system.
- the user input 85 is not provided.
- the server 88 and database 90 are not provided.
- the server 88 connects through a network with many imaging systems 80 and/or processors 82.
- the processor 82, memory 84, user input 85, and display 86 are part of the medical imaging system 80.
- the processor 82, memory 84, user input 85, and display 86 are part of an archival and/or image processing system, such as associated with a medical records database workstation or server, separate from the imaging system 80.
- the processor 82, memory 84, user input 85, and display 86 are a personal computer, such as desktop or laptop, a workstation, a server, a network, or combinations thereof.
- the processor 82, display 86, user input 85, and memory 84 may be provided without other components for acquiring data by scanning a patient.
- the imaging system 80 is a medical diagnostic imaging scanner. Ultrasound, computed tomography (CT), x-ray, fluoroscopy, positron emission tomography, single photon emission computed tomography, and/or magnetic resonance (MR) systems may be used.
- the imaging system 80 may include a transmitter and includes a detector for scanning or receiving data representative of the interior of the patient.
- the imaging system 80 is a magnetic resonance system.
- the magnetic resonance system includes a main field magnet, such as a cryomagnet, and gradient coils.
- a whole body coil is provided for transmitting and/or receiving.
- Local coils may be used, such as for receiving electromagnetic energy emitted by atoms in response to pulses.
- Other processing components may be provided, such as for planning and generating transmit pulses for the coils based on the sequence and for receiving and processing the received k-space data.
- the received k-space data is converted into object or image space data with Fourier processing.
- Anatomical and/or functional scanning sequences may be used to scan the patient, resulting in frames of anatomical and/or functional data representing the tissue.
- the memory 84 may be a graphics processing memory, a video random access memory, a random access memory, system memory, cache memory, hard drive, optical media, magnetic media, flash drive, buffer, database, combinations thereof, or other now known or later developed memory device for storing data or video information.
- the memory 84 is part of the imaging system 80, part of a computer associated with the processor 82, part of a database, part of another system, a picture archival memory, or a standalone device.
- the memory 84 stores medical imaging data representing the patient, segmentation or tissue region information, feature kernels, extracted values for features, classification results, a machine-learnt matrix, and/or images.
- the memory 84 may alternatively or additionally store data during processing, such as storing seed locations, detected boundaries, graphic overlays, quantities, or other information discussed herein.
- the memory 84 or other memory is alternatively or additionally a non-transitory computer readable storage medium storing data representing instructions executable by the programmed processor 82 for tissue classification in medical imaging.
- the instructions for implementing the processes, methods and/or techniques discussed herein are provided on non-transitory computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive or other computer readable storage media.
- Non-transitory computer readable storage media include various types of volatile and nonvolatile storage media.
- the functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of instructions stored in or on computer readable storage media.
- processing strategies may include multiprocessing, multitasking, parallel processing, and the like.
- the instructions are stored on a removable media device for reading by local or remote systems.
- the instructions are stored in a remote location for transfer through a computer network or over telephone lines.
- the instructions are stored within a given computer, CPU, GPU, or system.
- the user input 85 is a keyboard, mouse, trackball, touch pad, buttons, sliders, combinations thereof, or other input device.
- the user input 85 may be a touch screen of the display 86.
- User interaction is received by the user input, such as a designation of a region of tissue (e.g., a click or click and drag to place a region of interest). Other user interaction may be received, such as for activating the classification.
- the processor 82 is a general processor, central processing unit, control processor, graphics processor, digital signal processor, three-dimensional rendering processor, image processor, application specific integrated circuit, field programmable gate array, digital circuit, analog circuit, combinations thereof, or other now known or later developed device for segmentation, extracting feature values, and/or classifying tissue.
- the processor 82 is a single device or multiple devices operating in serial, parallel, or separately.
- the processor 82 may be a main processor of a computer, such as a laptop or desktop computer, or may be a processor for handling some tasks in a larger system, such as in an imaging system 80.
- the processor 82 is configured by instructions, design, hardware, and/or software to perform the acts discussed herein.
- the processor 82 is configured to perform the acts discussed above.
- the processor 82 is configured to identify a region of interest based on user input, extract values for features for the region, classify the tumor in the region (e.g., apply the machine-learnt classifier), and output results of the classification.
- the processor 82 is configured to transmit the acquired frames of data or extracted values of features to the server 88 and receive classification results from the server 88.
- the server 88 rather than the processor 82 performs the machine-learnt classification.
- the processor 82 may be configured to generate a user interface for receiving seed points or designation of a region of interest on one or more images.
- the display 86 is a monitor, LCD, projector, plasma display, CRT, printer, or other now known or later developed device for outputting visual information.
- the display 86 receives images, graphics, text, quantities, or other information from the processor 82, memory 84, imaging system 80, or server 88.
- One or more medical images are displayed.
- the images are of a region of the patient.
- the images are of a tumor, such as three-dimensional rendering of the liver with the tumor highlighted by opacity or color.
- the image includes an indication, such as a text, a graphic or colorization, of the classification of the tumor.
- the image includes a quantity based on the classification, such as a tag value.
- the quantity or classification output may be displayed as the image without the medical image representation of the patient.
- a report with the classification is output.
- the server 88 is a processor or group of processors. More than one server 88 may be provided.
- the server 88 is configured by hardware and/or software to receive frames of data (e.g., multi-parametric images), extracted features from frames of data, and/or other clinical information for a patient, and return the classification.
- the server 88 may extract values for the features from received frames of data.
- the server 88 applies a machine-learnt classifier to the received information. Where the classification identifies one or more reference cases similar to the case for a given patient, the server 88 interacts with the database 90.
- the database 90 is a memory, such as a bank of memories, for storing reference cases including treatments for tumor, frames of data and/or extracted values for features, and outcomes for evidence-based medicine.
- the server 88 uses the database 90 to identify the cases in the database most or sufficiently similar to a current case for a current patient.
- the server 88 transmits the identity of the reference and/or the reference information to the processor 82.
- the server 88 and database 90 are not provided, such as where the processor 82 and memory 84 extract, classify, and output the classification.
Description
- The present embodiments relate to tissue characterization in medical imaging.
- Magnetic resonance images are widely used in medical diagnosis and therapy. For example, magnetic resonance is used for breast tumor diagnosis following the guidelines of the Breast Imaging-Reporting and Data System (BIRADS), which are based on clinically descriptive tags like mass (shape, margin, mass enhancement), symmetry or asymmetry, non-mass-like enhancement in an area that is not a mass (distribution modifiers, internal enhancement), kinetic curve assessment, and other findings. Similarly for prostate, the Prostate Imaging-Reporting and Data System (PIRADS) specifies the clinically descriptive tags for specific prostate regions, such as the peripheral zone, central zone, and transition zone. For liver tissue characterization, fibrosis staging is possible based on reading of the magnetic resonance images. Similar approaches are used in other imaging modalities, such as ultrasound, computed tomography, positron emission tomography, or single photon emission computed tomography.
- To assess therapy, multimodal magnetic resonance scans are acquired before and after therapy. A simple morphological (e.g., size-based) scoring is commonly performed in tumor treatment assessment, such as with the Response Evaluation Criteria in Solid Tumors (RECIST). The assessment of treatment response is critical in determining the course of continuing treatment since chemotherapy drugs may have adverse effects on the patient. In basic clinical settings, treatment assessment is done morphologically with tumor size. Due to this simple approach, it can take longer to determine whether a treatment is succeeding.
- The decision to stop therapy may occur earlier by employing functional magnetic resonance information than with the RECIST criteria. For example, treatment effectiveness may be determined earlier by using image-based functional measurements, such as intensity based histograms of the functional measures. These histogram-based intensity values are manually analyzed in clinical practice and may not necessarily capture subtleties related to image texture and local dissimilarity that may better represent cell density, vasculature, necrosis, or hemorrhage characteristics important to clinical diagnosis.
- WO 2015/048103 A1 discloses therapy response assessment based on texture features of medical images using a machine-learnt classifier.
- The invention is defined by the appended claims.
- By way of introduction, the preferred embodiments described below include methods, systems, instructions, and non-transitory computer readable media for tissue characterization in medical imaging. Tissue is characterized using machine-learnt classification. The prognosis, diagnosis or evidence in the form of a similar case is found by machine-learnt classification from features extracted from frames of medical scan data with or without external data such as age, gender, and blood biomarkers. The texture or other features for tissue characterization may be learned using deep learning. Using the features, therapy response is predicted from magnetic resonance functional measures before and after treatment in one example. Using the machine-learnt classification, the number and time between measures after treatment may be reduced as compared to RECIST for predicting the outcome of the treatment, allowing earlier termination or alteration of the therapy.
- In a first aspect, a method is provided for tissue characterization in medical imaging. A medical scanner scans a patient where the scanning provides multiple frames of data representing a tissue region of interest in the patient. A processor extracts values for features from the frames of data. A machine-learnt classifier implemented by the processor classifies a therapy response of the tissue of the tissue region from the values of the features as input to the machine-learnt classifier. The therapy response is transmitted.
- According to a preferred embodiment of the invention, the method is advantageous wherein scanning comprises scanning with a magnetic resonance scanner, computed tomography scanner, ultrasound scanner, positron emission tomography scanner, single photon emission computed tomography scanner, or combinations thereof, and/or wherein scanning comprises scanning with a magnetic resonance scanner, the frames of data comprising functional measurements by the magnetic resonance scanner.
- Further, the method is advantageous wherein extracting values comprises extracting the values as scale-invariant feature transformation, histogram of oriented gradients, local binary pattern, gray-level co-occurrence matrix, or combinations thereof.
- According to a preferred method, classifying can comprise classifying from the values of the features extracted from the frames of data and values of clinical measurements for the patient.
- Further, the method is advantageous wherein scanning comprises scanning at different times, different contrasts, or different types, the frames corresponding to the different times, contrasts, or types, and wherein classifying comprises classifying the therapy response based on differences in the frames.
- Further, the method is advantageous wherein extracting comprises extracting the values for the features with the features comprising deep-learnt features from deep learning, and wherein classifying comprises classifying by the machine-learnt classifier learnt with the deep learning.
- According to a preferred method, extracting and classifying can comprise extracting and classifying based on Siamese convolution networks.
- According to another preferred method, classifying the therapy response can comprise inferring therapy success for therapy applied to the tissue region, and/or classifying the therapy response can comprise identifying an example treatment and outcome for another patient with similar extracted values of the features.
- Further, the method is advantageous wherein transmitting the therapy response comprises displaying the therapy response in a report.
- Further, the method advantageously further comprises identifying, by a user input device responsive to user selection, the tissue region, wherein the extracting, classifying, and transmitting are performed without user input after the identifying.
- According to another preferred method, the extracting and classifying are performed by the processor remote from the medical scanner.
- According to another preferred method, therapy response can comprise a responder classification, non-responder classification, or a prediction of survival time.
- Further, the method is advantageous wherein transmitting the therapy response comprises displaying the therapy response as an overlay over images of the tissue region.
- In a second aspect, a method is provided for tissue characterization in medical imaging. A medical scanner scans a patient where the scanning provides different frames of data representing different types of measurements for tissue in the patient. A processor extracts values for features from the frames of data. A machine-learnt classifier implemented by the processor classifies the tissue of the patient from the values of the features as input to the machine-learnt classifier. The tissue classification is transmitted.
- The method is advantageous wherein scanning comprises scanning with a magnetic resonance scanner, the different types of measurements including an apparent diffusion coefficient.
- Further, according to a preferred method, extracting can comprise extracting the values for a texture of the tissue represented by at least one of the frames of data.
- Further, the method is advantageous wherein classifying the tissue comprises classifying a prediction of response of the tissue to therapy, classifying as benign or malignant, classifying a staging tag, a similarity to a case, or combinations thereof, and/or wherein extracting comprises extracting the values for features that are deep-learnt features, and wherein classifying comprises classifying by the machine-learnt classifier being a deep-learnt classifier, the deep-learnt features and deep-learnt classifier being performed on multiple convolution networks.
- In a third aspect, a method is provided for tissue characterization in medical imaging. A patient is scanned with a medical scanner where the scanning provides a frame of data representing a tumor in the patient. A processor extracts values for deep-learnt features from the frame of data. A deep-machine-learnt classifier implemented by the processor classifies the tumor from the values of the features as input to a machine-learnt classifier. The classification of the tumor is transmitted.
- Preferably, the deep-learnt features and the deep-machine-learnt classifier are from a Siamese convolution network.
- The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. Further aspects and advantages of the invention are discussed below in conjunction with the preferred embodiments.
- The components and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
- Figure 1 is a flow chart diagram of one embodiment of a method for characterizing tissue in medical imaging;
- Figure 2 represents user interaction for tissue characterization according to one embodiment;
- Figure 3 is a flow chart diagram of a machine-learnt classifier embodiment of the method for characterizing tissue in medical imaging;
- Figure 4 is a flow chart diagram of a deep-learning embodiment of the method for characterizing tissue in medical imaging;
- Figure 5 is an example of a deep-learning convolution layer-based classification network incorporating non-imaging features;
- Figure 6 illustrates an example output of a related case identified through classification;
- Figure 7 illustrates four example applications of machine-learnt classification of tissue characteristics;
- Figure 8A shows an example time line for measuring therapy response relying on trending over many times post treatment, and Figure 8B shows an example time line for measuring the therapy response relying on measurement for just one time post treatment; and
- Figure 9 is one embodiment of a system for tissue characterization in medical imaging.
- In advanced image analytics, complex features, such as image texture, are automatically found and included in an automated format capable of being employed in clinical use cases. Using machine-learnt classification to interrogate the image has the potential to extract texture and/or other image-based information for classification in clinical use cases. A generic pipeline and user interface are capable of finding and processing complex features from medical image data.
- Several approaches for tumor prognosis and outcomes allow such analytics along with an interface to make the analytics readily accessible to physicians and support staff. The first approach uses robust image textural and/or other image features extracted by the computer as a feature set for further analysis. The second approach uses a deep learning pipeline involving a Siamese network to automatically create and classify features in a parallel convolutional network for the same patient at two different time points and/or different types of measures. The deep learning network and interface are provided for image analytics for lesion and/or tissue characterization and therapy prediction.
- In one embodiment for tumor prognosis, diagnosis, or finding of similar cases, a series of features is extracted from multiple image contrasts and/or multiple examination time-points. The textural and/or other image features are used in a classifier. A Siamese deep-learning neural network may be used to identify the features. Non-image data, such as blood test results, age, gender, or blood serum biomarkers, may also be used as features. The extracted features are evaluated against each other using a machine-learnt classifier to determine the diagnosis, prognosis, or find similar cases. For finding similar cases, the images from the similar cases are obtained from a database of previously documented cases. The previously documented cases are used to determine the reference case or cases. The diagnosis, prognosis, or finding of similar cases may be performed via a cloud-based service that delivers results.
- For the user interface, a user marks the tumor of interest in one or more images of a time series, possibly with a single click, and the computer produces the diagnosis, prognosis, or similar cases report in response. This clinical product and interface may minimize interaction. The single click per image assists in segmentation and/or more precise identification of regions. By only requiring a single click or region designation in the pre and post treatment images, the interaction is minimized, increasing the clinical feasibility of the approach. The output report may include clinical evidence, in the form of numbers (e.g., quantifying the textural features that lead to the decision) or in the form of references to previous clinical cases. A single-click interface determines results, allowing for a clinically feasible approach to incorporate complex image features into tumor prognosis and therapy.
-
Figure 1 shows one embodiment of a flow chart of a method for tissue characterization in medical imaging. For tissue characterization, such as tumor type, tumor response to treatment, identification of similar tumors in other patients, tumor prognosis, or tumor diagnosis, a machine-learnt classifier is applied. The machine-learnt classifier uses information from one or more frames of data to classify the tumor. Frames of data from different times or different types of measures may be used to classify. In one approach, the features extracted from the frames as the information are manually defined. In another approach, deep learning identifies the features that best characterize the tumor. The features learned with deep learning may be texture and/or non-texture, such as features from the frame or clinical data.
- The acts are performed in the order shown (e.g., top to bottom) or other orders. Additional, different, or fewer acts may be provided. For example, the method is performed without transmitting the classification in act 20. As another example, the segmentation of act 14 is not performed; instead, the classification is applied to the entire frame of data.
- In act 12, one or more medical images or datasets are acquired. The medical image is a frame of data representing the patient. The data may be in any format. While the terms "image" and "imaging" are used, the image or imaging data may be in a format prior to actual display of the image. For example, the medical image may be a plurality of scalar values representing different locations in a Cartesian or polar coordinate format different than a display format. As another example, the medical image may be a plurality of red, green, blue (RGB) values output to a display for generating the image in the display format. The medical image may not yet be a displayed image, may be a currently displayed image, or may be a previously displayed image in the display or other format. The image or imaging is a dataset that may be used for imaging, such as scan data representing the patient.
- Any type of medical image may be used. In one embodiment, magnetic resonance frames of data representing a patient are acquired. Magnetic resonance data is acquired by scanning with a magnetic resonance system. Using an imaging sequence, the magnetic resonance system scans the patient. Data representing an interior region of a patient is acquired. The magnetic resonance data is k-space data. Fourier analysis is performed to reconstruct the data from the k-space into a three-dimensional object or image space, providing the frame of data. In other embodiments, an x-ray, computed tomography, ultrasound, positron emission tomography, single photon emission computed tomography, or other medical imaging scanner scans the patient. Combination scanners, such as magnetic resonance and positron emission tomography or computed tomography and positron emission tomography systems, may be used to scan. The scan results in a frame of data acquired by the medical imaging scanner and provided directly for further processing or stored for subsequent access and processing.
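- As an illustration of the Fourier reconstruction step, the minimal sketch below recovers a magnitude frame of data from Cartesian k-space with an inverse FFT. It is not the scanner's actual reconstruction chain (which adds coil combination, filtering, and corrections); the function and variable names are hypothetical:

```python
import numpy as np

def reconstruct_mr_frame(kspace: np.ndarray) -> np.ndarray:
    """Minimal Cartesian MR reconstruction: inverse FFT of k-space."""
    # Center the spectrum, invert the Fourier transform over all axes,
    # and re-center so the object sits in the middle of the volume.
    image = np.fft.fftshift(np.fft.ifftn(np.fft.ifftshift(kspace)))
    return np.abs(image)  # magnitude frame of data

# Example with a synthetic 3D k-space volume:
kspace = np.random.randn(64, 64, 32) + 1j * np.random.randn(64, 64, 32)
frame = reconstruct_mr_frame(kspace)
```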
- The frame of data represents a one, two, or three-dimensional region of the patient. For example, the multi-dimensional frame of data represents an area (e.g., slice) or volume of the patient. Values are provided for each of multiple locations distributed in two or three dimensions. A tumor or suspicious tissue within the patient is represented by the values of the frame of data.
- The frame of data represents the scan region at a given time or period. The dataset may represent the area or volume over time, such as providing a 4D representation of the patient. Where more than one frame is acquired, the different frames of data may represent the same or overlapping region of the patient at different times. For example, one or more frames of data represent the patient prior to treatment, and one or more frames of data represent the patient after treatment or interleaved with on-going treatment.
- Where more than one frame is acquired, the different frames may represent different contrasts. For example, different types of contrast agents are injected or provided in the patient. By scanning tuned to or specific to the different types of contrast, different frames of data representing the different contrast agents are provided.
- Where more than one frame is acquired, the different frames may represent different types of measures (multi-modal or multi-parametric frames of data). By configuring the medical scanner, different types of measurements of the tissue may be performed. For example in magnetic resonance, both anatomical and functional measurements are performed. As another example in magnetic resonance, different anatomical or different functional measurements are performed. For different anatomical measurements, T1 and T2 are two examples. For different functional measurements, apparent diffusion coefficient (ADC), venous perfusions, and high B-value are three examples. In one embodiment, a T2 frame of data and an ADC frame of data are computed (e.g., different b-value images are acquired to compute a frame of ADC data). Frames of data from different types of scanners may be used.
- A combination of different times and types of measures may be used. For example, one set of frames of data represents different types of measures (e.g., T2 and ADC) for pre-treatment, and another set of frames of data represents the same types of measures (e.g., T2 and ADC) for post-treatment. Multi-dimensional and multi-modal image data is provided for each time. In other embodiments, a single frame of data representing just one type of measure for one time is acquired by scanning.
- Where multiple frames represent the tissue at different times, the frames are spatially registered. The registration removes translation, rotation, and/or scaling between the frames. Alternatively, registration is not used.
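- A minimal sketch of such a registration step is shown below, using phase correlation to estimate and remove the translation between two frames. This handles translation only; a full rigid or affine registration would also remove rotation and scaling. The helper name is an assumption:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def remove_translation(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Estimate the shift of the post-treatment frame relative to the
    pre-treatment frame and resample the post frame onto the pre grid."""
    offset, _error, _phase = phase_cross_correlation(pre, post)
    return nd_shift(post, offset)
```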
- In act 14, the tissue of interest is identified. The tissue of interest is identified as a region around and/or including the tumor. For example, a box or other shape that includes the tumor is located. Alternatively, the tissue of interest is identified as tissue representing the tumor, and the identified tumor tissue is segmented for further analysis.
- The identification is performed by the user. The user, using a user input (e.g., mouse, trackball, keyboard, buttons, sliders, and/or touch screen), identifies the tissue region to be used for feature extraction, classification, and transmission of the classification results. For example, the user selects a center of the tumor about which a processor places a box or other region designator. The user may size or position a region designator with or without center selection. In other approaches, the user indicates a location on the suspicious tissue, and the processor segments the suspicious tissue based on the user-placed seed. Alternatively, a processor automatically identifies the tissue region of interest without user selection.
- Figure 2 shows an example graphic user interface or approach with minimal user interaction for tissue characterization. In the example of Figure 2, the classification is for reporting on therapy. Two images corresponding to two frames of data at two times are provided, one pre-treatment and one post-treatment. Given both the pre and post treatment image sets, the user clicks on the tumor center (represented by the arrow tip) in each time point. A bounding box centered at the user-selected point is placed, designating the tissue region of interest.
- In response to the selection, the computer then returns the predicted success of treatment in an automated report. The report may also contain diagnoses of similar cases that have been previously reviewed in a database as references. The treatment outcome or other classification is determined via a single click on each image or frame of data without the need to perform any manual and/or automatic segmentation. A "single-click" or simple user input is provided for tumor diagnosis, treatment planning, and/or treatment response assessment. The same approach and technology can be used in any medical imaging product that examines tumor prognosis and treatment based on data acquired from a scanner.
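- A sketch of the bounding-box placement is given below, cropping a region of interest around the clicked tumor center (2D case; the helper name and box size are assumptions for illustration):

```python
import numpy as np

def roi_from_click(frame: np.ndarray, center: tuple, half_size: int = 16) -> np.ndarray:
    """Center a bounding box at the user-clicked (row, col) point and
    return the crop, clamped to the frame borders."""
    r, c = center
    r0, r1 = max(r - half_size, 0), min(r + half_size, frame.shape[0])
    c0, c1 = max(c - half_size, 0), min(c + half_size, frame.shape[1])
    return frame[r0:r1, c0:c1]

# One click per time point yields matching ROIs for pre and post frames:
# roi_pre = roi_from_click(frame_pre, click_pre)
# roi_post = roi_from_click(frame_post, click_post)
```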
- The tumor tissue with or without surrounding tissue is segmented. The data is extracted from the frame for further processing. The pixel or voxel values for the region of interest are isolated. Alternatively, the locations in the region are flagged or marked without being separated from other locations in the frame of data.
- In act 16 of Figure 1, a processor extracts values for features. The processor is part of the medical imaging scanner, a separate workstation, or a server. In one embodiment, the processor extracting the values is a server remote from the medical scanner, such as a server in the cloud. A manufacturer of the medical scanner or a third party provides classification as a service, so the frames of data are communicated through a computer network to the server for extraction of the features and classification from the extracted values.
- Values for any number of features are extracted from the frame or frames of data. The values for a texture of the tissue represented by at least one of the frames of data are extracted. The texture of the tissue is represented by the measures of the frame of data. The extraction of the values for each feature is performed for the tissue region of interest, avoiding application to other tissue outside the region of interest. Alternatively, the values for other regions outside the region of interest are extracted.
- Each feature defines a kernel for convolution with the data. The result of the convolution is a value of the feature. By placing the kernel at different locations, values for that feature at different locations are provided. Given one feature, the values of that feature at different locations are calculated. Features capturing texture information by means other than convolution may be used, such as identifying a maximum or minimum. Features other than texture features may be used.
- In one embodiment, the features are manually designed. The feature or features to be used are pre-determined based on a programmer's experience or testing. Example features include scale-invariant feature transformation, histogram of oriented gradients, local binary pattern, gray-level co-occurrence matrix, Haar wavelets, steerable filters, or combinations thereof. A feature extraction module computes features from images to better capture essential subtleties related to cell density, vasculature, necrosis, and/or hemorrhage that are important to clinical diagnosis or prognosis of tissue.
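- As a concrete sketch of such hand-designed texture features, the snippet below computes gray-level co-occurrence statistics and a local binary pattern histogram with scikit-image. The exact feature set and parameters are illustrative assumptions, not the patented pipeline:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(roi: np.ndarray) -> np.ndarray:
    """Small texture vector for an 8-bit ROI: GLCM statistics plus an
    LBP histogram (ROI must be uint8 for levels=256)."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2], levels=256)
    glcm_stats = [graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(roi, P=8, R=1.0, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_stats, lbp_hist])

# values = texture_features(roi.astype(np.uint8))
```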
- Figure 3 shows an example using manually programmed features. One or more images are acquired from memory or directly from a scanner. The textural features are extracted from the images. The values of the features are used for classification to provide outcomes.
- In another embodiment, deep-learnt features are used. The values are extracted from frames of data for features learned by machine learning. Deep machine learning learns the features represented in training data as well as training the classifier, rather than just training the classifier from manually designated features.
- Any deep learning approach or architecture may be used. In one embodiment, the extracting and classifying of
acts 16 and 18 are based on Siamese convolution networks. Figure 4 shows an example for deep learning. The twin or Siamese convolutional networks are trained to extract features from given multi-modal, multi-dimensional images, IM1 and IM2, such as multi-modal frames representing pre and post treatment, respectively. The relevant features are automatically determined as part of training. This ability allows for generic training on arbitrary data (i.e., training data with known outcomes) that can internally determine features, such as textures. The Siamese convolution networks are linked so that the same weights (W) for the parameters defining the networks are used in both branches. The Siamese network uses two input images (e.g., one branch for pre-treatment and another branch for post-treatment). Kernel weights for convolution are learnt using both branches of the network and optimized to provide values to Gw. - The Siamese deep learning network is also trained to classify small, large, or absent changes between time points. The definition of such features is based on a specific loss function Ew that minimizes differences between time points when there are no or small changes and maximizes the differences when there are large changes between them. The features indicating relevant differences between the two inputs are learned from the training data. By training the network with labeled outcomes, the network learns what features are relevant or can be ignored for determining the prognosis, diagnosis, or finding similar cases. During training, low-level features and invariants are learned by the convolutional networks, which have exactly the same parameters. These networks determine the core feature set that differs between the two input datasets based on feedback during learning of the difference network.
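- The loss Ew is not given in closed form here. One standard choice with exactly this behavior, assumed purely for illustration, is the contrastive loss, where Y = 0 labels a pair with no or small change, Y = 1 labels a pair with large change, m is a margin, and f_W denotes the shared branch mapping (symbols assumed):

```latex
E_W(Y, I_1, I_2) = (1 - Y)\,\tfrac{1}{2}\,G_W^{2} + Y\,\tfrac{1}{2}\,\left[\max\!\left(0,\; m - G_W\right)\right]^{2},
\qquad G_W = \left\lVert f_W(I_1) - f_W(I_2) \right\rVert_2
```

Minimizing such a loss pulls unchanged pairs together in feature space and pushes strongly changed pairs at least the margin m apart, matching the training behavior described above.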
-
Figure 5 shows an example convolution layer-based network for learning to extract features. Each branch or twin has the same layers or network structure. The networks themselves are shown as layers of convolutional, sub-sampling (e.g., max pooling), and fully connected layers. By using convolution, the number of possible features to be tested is limited. The fully connected layers (FC) in Figure 5 operate to fully connect the features as limited by the convolution layer (CL) after maximum pooling. Other features may be added to the FC layers, such as non-imaging or clinical information. Any combination of layers may be provided. Additional, different, or fewer layers may be provided. In one alternative, a fully connected network is used instead of a convolution network. - Returning to
Figure 4, the two parallel networks process the pre and post therapy data, respectively. The networks are trained with exactly the same parameters in the Siamese network. The features that optimally discriminate based on the loss function, Ew, are automatically developed during training of the network. For example, the multi-modal input frames are for T2 and ADC for each time. The features related to textural information for the T2 image and local deviations in the ADC image highlighting the differences from pre and post treatment are learned. These learnt features are then applied to frames of data for a specific patient, resulting in Gw values for the specific patient.
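- A minimal PyTorch sketch of such a weight-tied twin architecture (the CL/FC layering of Figure 5 feeding the pairwise comparison of Figure 4) is given below. Layer sizes, channel counts, and all names are assumptions for illustration, not the trained network of the embodiment:

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One twin: convolution, max pooling, and fully connected layers."""
    def __init__(self, in_channels: int = 2, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x))

class Siamese(nn.Module):
    """Both inputs pass through the same Branch instance, so the weights W
    are shared by construction, as the description requires."""
    def __init__(self):
        super().__init__()
        self.branch = Branch()

    def forward(self, im1: torch.Tensor, im2: torch.Tensor) -> torch.Tensor:
        g1, g2 = self.branch(im1), self.branch(im2)
        return torch.norm(g1 - g2, dim=1)  # Gw: per-pair feature distance

# Multi-parametric pre/post frames as (batch, channels, height, width)
# tensors, e.g. channels = (T2, ADC):
model = Siamese()
gw = model(torch.randn(4, 2, 64, 64), torch.randn(4, 2, 64, 64))
```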
- In act 18 of Figure 1, the machine-learnt classifier classifies the tissue of the patient from the extracted values of the features. The values are input to the machine-learnt classifier implemented by the processor. By applying the classifier, the tissue is classified. For example, a therapy response of the tissue in the tissue region is classified from the values of the features as input to the machine-learnt classifier. - In the approach of
Figure 3, any machine-learnt classifier may be used. The classifier is trained to associate the categorical labels (output) to the extracted values of one or more features. The machine-learning of the classifier uses training data with ground truth, such as values for features extracted from frames of data for patients with known outcomes, to learn to classify based on the input feature vector. The resulting machine-learnt classifier is a matrix for inputs, weighting, and combination to output a classification. Using the matrix or matrices, the processor inputs the extracted values for features and outputs the classification.
- In one embodiment, the classification is by a machine-learnt classifier learnt with the deep learning. As part of identifying features that distinguish between different outcomes, the classifier is also machine learnt. For example in
Figure 4, the classifier 18 is trained to classify the tissues based on the feature values Gw, obtained from a Siamese network that is already optimized/trained to maximize differences of images from different categories and minimize differences of images from the same categories. For example, the classifier categorizes the tumor from the feature values, such as classifying a type of tumor, a tumor response to therapy, or another tissue characteristic. In the example of Figure 4, the classification is based on features from Gw, which are optimized from the loss Ew. First, the Siamese network trains and/or defines kernels such that the feature vectors Gw can help discriminate for all categories. Once the network is trained, the classifier 18 uses Gw and then defines probabilities that two images from different time points have zero, small, or large differences. - In either approach (e.g.,
Figure 3 or Figure 4), additional information may be used for extracting and/or classifying. For example, values of clinical measurements for the patient are used. The classifier is trained to classify based on the extracted values for the features in the frames of data as well as the additional measurements. Genetic data, blood-based diagnostics, family history, sex, weight, and/or other information are input as a feature for classification.
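- A minimal sketch of training such a classifier on combined imaging and clinical features is shown below. A random forest stands in for any of the learner types named above, and all data, names, and dimensions are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_imaging = rng.random((100, 14))        # e.g. texture feature vectors per patient
X_clinical = rng.random((100, 3))        # e.g. age, weight, blood biomarker level
X = np.hstack([X_imaging, X_clinical])   # one combined feature vector per patient
y = rng.integers(0, 2, size=100)         # ground-truth outcomes (0 = non-responder)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
prob_responder = clf.predict_proba(X[:1])[0, 1]  # continuous response score in [0, 1]
```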
- In one embodiment, the classifier identifies a similar case. The similar case includes an example treatment and outcome for another patient with a tissue region similar to the tissue region of the current patient. Any number of these reference cases may be identified. A database of possible reference cases is used. The most similar case or cases to the current patient are identified by the classifier. Using the extracted values for texture features with or without other features, the classifier identifies the class as a reference case or cases. In evidence-based medicine, decision-making is optimized by emphasizing the use of evidence from well designed and conducted research. One key component is to retrieve the evidence in the form of other cases using the current case. For example, as in
Figure 6, once a lesion is marked by the user, the values for the textural features are extracted and used by the classifier to retrieve the closest cases. These closest cases provide the evidence, such as a list of similar cases with thumbnail images of the tumors for the reference cases. - Another class is therapy response. The success or failure of therapy is predicted as a diagnosis or prognosis. In an alternative, rather than a binary indication of success or failure, a range providing an amount and/or probability of success or failure is output as the class. Whether the patient is likely to respond (i.e., responder), not likely to respond (i.e., non-responder), or may partially respond (i.e., partial or semi-responder) is output. The predicted survival time of the patient may be the output.
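- A sketch of the retrieval step is shown below: a nearest-neighbor search over a database of feature vectors for previously documented cases. All names and sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
reference_features = rng.random((500, 17))   # feature vectors of documented cases
index = NearestNeighbors(n_neighbors=3).fit(reference_features)

query = rng.random((1, 17))                  # current patient's feature vector
distances, rows = index.kneighbors(query)
# rows[0] indexes the closest reference cases, whose stored treatments and
# outcomes are then presented as evidence.
```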
-
Figure 7 shows two examples of classifying the tissue as successful or not successful therapy. In applications #3 and #4, the therapy response of the tumor is classified. A transarterial chemoembolization (TACE) is used as the example therapy. In application #3, multi-parametric (e.g., T2 and ADC) data for one time (e.g., 1 month after treatment) is used. After identifying the tissue region of interest for the tumor, the classification is applied to determine whether the tumor is responding to the treatment. The level or rate of response may be output, informing a decision on any continued treatment and level of treatment. In an alternative, treatment is not applied. The frames of data prior to treatment are used to classify whether the treatment is expected to be successful based on the extracted textural and/or other features prior to treatment. - In
application #4, multiple treatments are performed. Frames of data after each treatment are used. The response of the tumor is measured or predicted based on the classification from the data over time. In an alternative, only the second treatment is performed, and the first frames of data at 1 month are pre-treatment frames. The features from the different times are used to predict or measure therapy response. - By inferring the therapy success or level of success for therapy applied to the tissue region, a decision on whether to continue therapy and/or to change the therapy may be more informed and/or performed earlier.
Figure 8A shows an example of predicting therapy outcome. In this example, RECIST is used, so the size change of the tumor is measured N times post-therapy. Eventually, a sufficient trend is determined by the clinician to predict the outcome. Figure 8B shows an alternative approach using the machine-learnt classifier. Less information is needed, such as just one set of post-therapy frames of data (e.g., from one time or appointment for scanning). With textural and/or other feature analysis and machine-learnt classification, the therapy success or failure decision may be inferred without manual perception of trends. A lesser number of scans and a lesser amount of time are needed to make therapy decisions. The success of the therapy applied to the tumor is inferred and used to optimize treatment. - Other classes for machine-learnt classification may be used. The classifier may be machine trained to classify the tumor (e.g., suspicious tissue region) as benign or malignant. Once the lesion is segmented or identified, values for textural features are computed for the lesion. The values are fed into the machine-learnt classifier for labeling of malignancy.
- The classifier may be trained to output values for staging the tumor. Using advanced tissue characterization provided by the machine-learnt classifier, the stage is output. For example, in liver tissue, the extracted textural features are used by the classifier to output a measure of fibrosis staging. In other examples, the classifier is trained to output tags used for staging, such as outputting the measures used for staging the tumor. The values for the features are used by the classifier to provide the tag or staging measure. In quantitative BIRADS for breast examination, the textural features are extracted, and then the classifier associates the categorical labels of clinically descriptive tags (e.g., measures of mass, symmetry, and non-mass-like enhancement) to the extracted features. The inferred tags are then used to manually or automatically stage the breast tumor. In quantitative PIRADS for prostate examination, the textural features are extracted, and then the classifier associates the categorical labels of clinically descriptive tags (e.g., tags for the peripheral zone, central zone, and transition zone) to the extracted features. The inferred tags are then used to manually or automatically stage the prostate.
- In another embodiment, the classifier is trained to output any information useful for diagnosis or prognosis. For example, information to enhance therapy monitoring is output. An intensity histogram, a histogram of differences over time in the intensities representing the tumor, and/or a difference of histograms of intensities representing the tumor at different times are calculated and output without the classifier. The classifier supplements these or other image intensity statistics or histograms. Information derived from the textural features and/or other features is used to provide any information useful to clinicians.
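- A sketch of these supplementary statistics is given below, assuming registered pre- and post-treatment ROIs of identical shape (the helper name and bin count are illustrative):

```python
import numpy as np

def histogram_stats(pre_roi: np.ndarray, post_roi: np.ndarray, bins: int = 32):
    """Histogram of voxel-wise change over time plus the difference of the
    two intensity histograms."""
    lo = min(pre_roi.min(), post_roi.min())
    hi = max(pre_roi.max(), post_roi.max())
    hist_pre, _ = np.histogram(pre_roi, bins=bins, range=(lo, hi), density=True)
    hist_post, _ = np.histogram(post_roi, bins=bins, range=(lo, hi), density=True)
    hist_change, _ = np.histogram(post_roi - pre_roi, bins=bins, density=True)
    return hist_change, hist_post - hist_pre
```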
- More than one machine-trained classifier may be used. The same or different features are used by different classifiers to output the same or different information. For example, a classifier is trained for predicting therapy response, and another classifier is trained to output tags for staging. In alternative embodiments, one classifier is trained to output different types of information, such as using a hierarchal classifier.
-
Figure 7 shows four example use cases of classification using multi-parametric magnetic resonance frames of data. "Localize Lesion" represents identifying a tissue region of interest (ROI) by segmentation or by user region designation (e.g., act 14). In applications #1, #2, and #3, the input to the system is a set of multi-parametric magnetic resonance images from a single time point, and the system performs one or more of three different tasks. In application #1, the task is to predict whether the tumor is benign or malignant. The outcome may be either discrete (e.g., yes vs. no) or continuous (e.g., giving an indication of the severity of the malignancy, such as a number between 0 and 1). In application #2, the task is to predict whether the tumor is low- or high-grade. The grading may be discrete (e.g., low vs. high) or continuous (e.g., a number between 0 and N, where N is the highest grade possible). In application #3, the task is to predict whether the patient responds to the therapy. The response may be discrete (e.g., responding vs. not responding) or continuous (a number indicating the degree of response, such as between 0 and 1). In application #4, the input to the system is a set of multi-parametric magnetic resonance images from two or more time points, and the system performs the same task as in application #3, which is to predict the response, based on the values extracted from each of the images. - In
act 20 of Figure 1, the tissue classification is transmitted. Any of the tissue classifications output by the classifier are transmitted. Alternatively, information derived from the output of the classification is transmitted, such as a stage derived from classification of tags. - The transmission is to a display, such as a monitor, workstation, printer, handheld, or computer. Alternatively or additionally, the transmission is to a memory, such as a database of patient records, or to a network, such as a computer network.
- The tissue classification is output to assist with prognosis, diagnosis, or evidence-based medicine. For example, a list of similar patients, including their treatment regime and outcome, is output. As another example, a predicted therapy response is output in a report for the patient.
- The tissue classification is output as text. An image of the tumor is annotated or labeled with alphanumeric text to indicate the classification. In other embodiments, an image of the tissue is displayed, and the classification is communicated as a symbol, coloring, highlighting or other information added onto the image. Alternatively, the classification is output in a report without the image of the tumor or separated (e.g., spaced away) from the image of the tumor.
- The tissue may also be classified very locally (e.g., independent classification of every voxel). The resulting classification is output as a colored or highlighted overlay onto images of the tissue, visually indicating, spatially, possible regions of likely response or non-response.
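- The sketch below renders such a local classification as a color overlay, assuming the classifier yields a per-voxel response probability map of the same shape as the image slice (the threshold and colormap are arbitrary illustrative choices):

```python
import matplotlib.pyplot as plt
import numpy as np

def overlay_response_map(image: np.ndarray, response_prob: np.ndarray,
                         threshold: float = 0.5) -> None:
    """Highlight voxels with response probability above the threshold on
    top of the gray-scale anatomical slice."""
    masked = np.ma.masked_where(response_prob < threshold, response_prob)
    plt.imshow(image, cmap="gray")
    plt.imshow(masked, cmap="autumn", alpha=0.5, vmin=0.0, vmax=1.0)
    plt.axis("off")
    plt.show()
```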
- Other information may be output as well. Other information includes values for the features, clinical measures, values from image processing, treatment regime, or other information (e.g., lab results).
-
Figure 9 shows a system for tissue characterization in medical imaging. The system includes an imaging system 80, a memory 84, a user input 85, a processor 82, a display 86, a server 88, and a database 90. Additional, different, or fewer components may be provided. For example, a network or network connection is provided, such as for networking with a medical imaging network or data archival system. In another example, the user input 85 is not provided. As another example, the server 88 and database 90 are not provided. In other examples, the server 88 connects through a network with many imaging systems 80 and/or processors 82.
processor 82,memory 84,user input 85, anddisplay 86 are part of themedical imaging system 80. Alternatively, theprocessor 82,memory 84,user input 85, anddisplay 86 are part of an archival and/or image processing system, such as associated with a medical records database workstation or server, separate from theimaging system 80. In other embodiments, theprocessor 82,memory 84,user input 85, anddisplay 86 are a personal computer, such as desktop or laptop, a workstation, a server, a network, or combinations thereof. Theprocessor 82,display 86,user input 85, andmemory 84 may be provided without other components for acquiring data by scanning a patient. - The
imaging system 80 is a medical diagnostic imaging scanner. Ultrasound, computed tomography (CT), x-ray, fluoroscopy, positron emission tomography, single photon emission computed tomography, and/or magnetic resonance (MR) systems may be used. Theimaging system 80 may include a transmitter and includes a detector for scanning or receiving data representative of the interior of the patient. - In one embodiment, the
imaging system 80 is a magnetic resonance system. The magnetic resonance system includes a main field magnet, such as a cryomagnet, and gradient coils. A whole body coil is provided for transmitting and/or receiving. Local coils may be used, such as for receiving electromagnetic energy emitted by atoms in response to pulses. Other processing components may be provided, such as for planning and generating transmit pulses for the coils based on the sequence and for receiving and processing the received k-space data. The received k-space data is converted into object or image space data with Fourier processing. Anatomical and/or functional scanning sequences may be used to scan the patient, resulting in frames of anatomical and/or functional data representing the tissue. - The
memory 84 may be a graphics processing memory, a video random access memory, a random access memory, system memory, cache memory, hard drive, optical media, magnetic media, flash drive, buffer, database, combinations thereof, or other now known or later developed memory device for storing data or video information. Thememory 84 is part of theimaging system 80, part of a computer associated with theprocessor 82, part of a database, part of another system, a picture archival memory, or a standalone device. - The
memory 84 stores medical imaging data representing the patient, segmentation or tissue region information, feature kernels, extracted values for features, classification results, a machine-learnt matrix, and/or images. Thememory 84 may alternatively or additionally store data during processing, such as storing seed locations, detected boundaries, graphic overlays, quantities, or other information discussed herein. - The
memory 84 or other memory is alternatively or additionally a non-transitory computer readable storage medium storing data representing instructions executable by the programmedprocessor 82 for tissue classification in medical imaging. The instructions for implementing the processes, methods and/or techniques discussed herein are provided on non-transitory computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive or other computer readable storage media. Non-transitory computer readable storage media include various types of volatile and nonvolatile storage media. The functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of instructions stored in or on computer readable storage media. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone, or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing, and the like. - In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored in a remote location for transfer through a computer network or over telephone lines. In yet other embodiments, the instructions are stored within a given computer, CPU, GPU, or system.
- The
user input 85 is a keyboard, mouse, trackball, touch pad, buttons, sliders, combinations thereof, or other input device. Theuser input 85 may be a touch screen of thedisplay 86. User interaction is received by the user input, such as a designation of a region of tissue (e.g., a click or click and drag to place a region of interest). Other user interaction may be received, such as for activating the classification. - The
processor 82 is a general processor, central processing unit, control processor, graphics processor, digital signal processor, three-dimensional rendering processor, image processor, application specific integrated circuit, field programmable gate array, digital circuit, analog circuit, combinations thereof, or other now known or later developed device for segmentation, extracting feature values, and/or classifying tissue. Theprocessor 82 is a single device or multiple devices operating in serial, parallel, or separately. Theprocessor 82 may be a main processor of a computer, such as a laptop or desktop computer, or may be a processor for handling some tasks in a larger system, such as in animaging system 80. Theprocessor 82 is configured by instructions, design, hardware, and/or software to perform the acts discussed herein. - The
processor 82 is configured to perform the acts discussed above. In one embodiment, theprocessor 82 is configured to identify a region of interest based on user input, extract values for features for the region, classify the tumor in the region (e.g., apply the machine-learnt classifier), and output results of the classification. In other embodiments, theprocessor 82 is configured to transmit the acquired frames of data or extracted values of features to theserver 88 and receive classification results from theserver 88. Theserver 88 rather than theprocessor 82 performs the machine-learnt classification. Theprocessor 82 may be configured to generate a user interface for receiving seed points or designation of a region of interest on one or more images. - The
display 86 is a monitor, LCD, projector, plasma display, CRT, printer, or other now known or later developed devise for outputting visual information. Thedisplay 86 receives images, graphics, text, quantities, or other information from theprocessor 82,memory 84,imaging system 80, orserver 88. One or more medical images are displayed. The images are of a region of the patient. In one embodiment, the images are of a tumor, such as three-dimensional rendering of the liver with the tumor highlighted by opacity or color. The image includes an indication, such as a text, a graphic or colorization, of the classification of the tumor. Alternatively or additionally, the image includes a quantity based on the classification, such as a tag value. The quantity or classification output may be displayed as the image without the medical image representation of the patient. Alternatively or additionally, a report with the classification is output. - The
server 88 is a processor or group of processors. More than oneserver 88 may be provided. Theserver 88 is configured by hardware and/or software to receive frames of data (e.g., multi-parametric images), extracted features from frames of data, and/or other clinical information for a patient, and return the classification. Theserver 88 may extract values for the features from received frames of data. To classify, theserver 88 applies a machine-learnt classifier to the received information. Where the classification identifies one or more reference cases similar to the case for a given patient, theserver 88 interacts with thedatabase 90. - The
database 90 is a memory, such as a bank of memories, for storing reference cases including treatments for tumor, frames of data and/or extracted values for features, and outcomes for evidence-based medicine. Theserver 88 uses thedatabase 90 to identify the cases in the database most or sufficiently similar to a current case for a current patient. Theserver 88 transmits the identity of the reference and/or the reference information to theprocessor 82. In alternative embodiments, theserver 88 anddatabase 90 are not provided, such as where theprocessor 82 andmemory 84 extract, classify, and output the classification.
Claims (17)
- A non-transitory computer readable media for tissue characterization in medical imaging, the non-transitory computer readable media comprising instructions which, when executed by a computer, cause the computer to carry out at least the following steps:
receiving scanning data of a patient scanned with a medical scanner, the scanning data providing multiple frames of data representing a tissue region of interest in the patient;
extracting, by a processor, values for features from the frames of data;
classifying, by a machine-learnt classifier implemented by the processor, a therapy response of the tissue of the tissue region from the values of the features as input to the machine-learnt classifier;
wherein extracting and classifying comprise extracting and classifying based on Siamese convolutional networks comprising two parallel networks which process pre and post treatment data,
wherein the Siamese convolutional networks are trained to extract features from given multi-modal, multi-dimensional images, IM1 and IM2, which are used as input images, wherein one branch of the Siamese convolutional network is used to input IM1 representing pre-treatment and another branch is used to input IM2 representing post-treatment, wherein relevant features are automatically determined as part of training, wherein the two parallel networks are trained with exactly the same parameters, and the Siamese convolutional networks are linked so that the same weights (W) for the parameters defining the networks are used in both branches,
wherein classifying the therapy response comprises inferring therapy success for therapy applied to the tissue region,
and/or wherein classifying the therapy response comprises identifying an example treatment and outcome for another patient with similar extracted values of the features,
and transmitting the therapy response.
- The non-transitory computer readable media according to claim 1, wherein scanning comprises scanning with a magnetic resonance scanner, computed tomography scanner, ultrasound scanner, positron emission tomography scanner, single photon emission computed tomography scanner, or combinations thereof,
and/or wherein scanning comprises scanning with a magnetic resonance scanner, the frames of data comprising functional measurements by the magnetic resonance scanner.
- The non-transitory computer readable media according to any of claims 1 or 2, wherein extracting values comprises extracting the values as scale-invariant feature transformation, histogram of oriented gradients, local binary pattern, gray-level co-occurrence matrix, or combinations thereof.
- The non-transitory computer readable media according to any of the preceding claims, wherein classifying comprises classifying from the values of the features extracted from the frames of data and values of clinical measurements for the patient.
- The non-transitory computer readable media according to any of the preceding claims, wherein scanning comprises scanning at different times, different contrasts, or different types, the frames corresponding to the different times, contrasts, or types, and wherein classifying comprises classifying the therapy response based on differences in the frames.
- The non-transitory computer readable media according to any of the preceding claims, wherein extracting comprises extracting the values for the features with the features comprising deep-learnt features from deep learning, and wherein classifying comprises classifying by the machine-learnt classifier learnt with the deep learning.
- The non-transitory computer readable media according to any of the preceding claims, wherein transmitting the therapy response comprises displaying the therapy response in a report.
- The non-transitory computer readable media according to any of the preceding claims, further comprising identifying, by a user input device responsive to user selection, the tissue region, wherein the extracting, classifying, and transmitting are performed without user input after the identifying.
- The non-transitory computer readable media according to any of the preceding claims, wherein the extracting and classifying are performed by the processor remote from the medical scanner.
- The non-transitory computer readable media according to any of the preceding claims, wherein therapy response comprises a responder classification, non-responder classification, or a prediction of survival time.
- The non-transitory computer readable media according to any of the preceding claims, wherein transmitting the therapy response comprises displaying the therapy response as an overlay over images of the tissue region.
- A tissue classification system for tissue characterization in medical imaging, the tissue classification system comprising:
a medical scanner, which scans a patient, the scanning providing different frames of data representing different types of measurements for tissue in the patient;
a processor, which extracts values for features from the frames of data;
a machine-learnt classifier, which classifies the tissue of the patient from the values of the features as input to the machine-learnt classifier;
wherein extracting and classifying comprise extracting and classifying based on Siamese convolutional networks comprising two parallel networks which process pre and post treatment data,
wherein the Siamese convolutional networks are trained to extract features from given multi-modal, multi-dimensional images, IM1 and IM2, which are used as input images, wherein one branch of the Siamese convolutional network is used to input IM1 representing pre-treatment and another branch is used to input IM2 representing post-treatment, wherein relevant features are automatically determined as part of training, wherein the two parallel networks are trained with exactly the same parameters, and the Siamese convolutional networks are linked so that the same weights (W) for the parameters defining the networks are used in both branches,
wherein classifying the therapy response comprises inferring therapy success for therapy applied to the tissue region,
and/orwherein classifying the therapy response comprises identifying an example treatment and outcome for another patient with similar extracted values of the features,
andtransmits the tissue classification. - The tissue classification system according to claim 12, wherein scanning comprises scanning with a magnetic resonance scanner, the different types of measurements including an apparent diffusion coefficient.
- The tissue classification system according to claims 12 or 13, wherein extracting comprises extracting the values for a texture of the tissue represented by at least one of the frames of data.
- The tissue classification system according to claim 14, wherein classifying the tissue comprises classifying a prediction of response of the tissue to therapy, classifying the tissue as benign or malignant, classifying a staging tag, classifying a similarity to a case, or combinations thereof,
and/or wherein extracting comprises extracting the values for features that are deep-learnt features, and wherein classifying comprises classifying by the machine-learnt classifier being a deep-learnt classifier, the deep-learnt features and the deep-learnt classifier being implemented with multiple convolutional networks. - A computer program for tissue characterization in medical imaging, the computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out at least the following steps:
receiving scanning data of a patient scanned with a medical scanner, the scanning data providing a frame of data representing a tumor in the patient;
extracting, by a processor, values for deep-learnt features from the frame of data;
classifying, by a deep-machine-learnt classifier implemented by the processor, the tumor from the values of the features as input to the deep-machine-learnt classifier;
wherein extracting and classifying comprise extracting and classifying based on Siamese convolutional networks comprising two parallel networks which process pre- and post-treatment data, wherein the Siamese convolutional networks are trained to extract features from given multi-modal, multi-dimensional images, IM1 and IM2, which are used as input images, wherein one branch of the Siamese convolutional network is used to input IM1 representing pre-treatment and another branch is used to input IM2 representing post-treatment, wherein relevant features are automatically determined as part of training, wherein the two parallel networks are trained with exactly the same parameters, and the Siamese convolutional networks are linked so that the same weights (W) for the parameters defining the networks are used in both branches,
wherein classifying the therapy response comprises inferring therapy success for therapy applied to the tissue region,
and/or wherein classifying the therapy response comprises identifying an example treatment and outcome for another patient with similar extracted values of the features,
and transmitting the classification of the tumor. - The computer program according to claim 16, wherein the deep-learnt features and the deep-machine-learnt classifier are from a Siamese convolutional network.
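Editor's note on the texture-feature claim above: the following is a minimal sketch, assuming scikit-image (>= 0.19) and NumPy, of how values for two of the claimed feature types (gray-level co-occurrence matrix statistics and local binary patterns) might be extracted from one frame of scan data. The function name and parameter choices are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(frame: np.ndarray) -> np.ndarray:
    """Flat feature vector for one 2D frame (e.g., a tissue region of interest)."""
    # Quantize to 8-bit gray levels; the co-occurrence matrix needs integers.
    lo, hi = float(frame.min()), float(frame.max())
    img = ((frame - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)

    # Gray-level co-occurrence matrix at two offsets and two angles.
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_stats = np.hstack([
        graycoprops(glcm, prop).ravel()
        for prop in ("contrast", "homogeneity", "energy", "correlation")
    ])

    # Local binary pattern histogram (uniform patterns, radius 1, 8 neighbors).
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.hstack([glcm_stats, lbp_hist])
```

SIFT and HOG values named in the same claim could be appended to this vector in the same way; the claim permits any combination of the listed feature types.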
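For the claim that classifying uses both image-derived feature values and clinical measurements, a minimal sketch assuming scikit-learn; the gradient-boosting model and the toy data shapes are stand-in assumptions, not a classifier prescribed by the patent.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_response_classifier(image_features, clinical_values, labels):
    # Per the claim, values extracted from the frames and values of
    # clinical measurements feed one classifier as a single input vector.
    X = np.hstack([image_features, clinical_values])
    return GradientBoostingClassifier().fit(X, labels)

# Illustrative usage with random stand-in data (40 patients).
rng = np.random.default_rng(0)
model = train_response_classifier(
    rng.normal(size=(40, 26)),    # e.g., the texture vectors sketched above
    rng.normal(size=(40, 4)),     # e.g., age and lab values (hypothetical)
    rng.integers(0, 2, size=40),  # responder / non-responder labels
)
```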
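For the Siamese arrangement recited in the system and computer-program claims, a minimal PyTorch sketch of two parallel branches that process the pre-treatment image IM1 and the post-treatment image IM2 with one shared set of weights W; reusing a single branch module for both inputs is what keeps the weights identical across branches, as the claims require. Layer sizes and the classification head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SiameseBranch(nn.Module):
    """One branch; instantiated once so both inputs share the weights W."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x).flatten(1)

class SiameseResponseClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.branch = SiameseBranch()       # single instance = shared W
        self.head = nn.LazyLinear(n_classes)

    def forward(self, im1, im2):
        f1 = self.branch(im1)               # features from pre-treatment IM1
        f2 = self.branch(im2)               # features from post-treatment IM2
        # Classify therapy response from the two branch outputs jointly.
        return self.head(torch.cat([f1, f2], dim=1))

# Illustrative usage: a batch of two single-channel 64x64 frames per branch.
logits = SiameseResponseClassifier()(torch.randn(2, 1, 64, 64),
                                     torch.randn(2, 1, 64, 64))
```

Training the two branches "with exactly the same parameters", as claimed, falls out of this weight sharing: one set of gradients updates the single branch module.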
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/261,124 (US10748277B2) | 2016-09-09 | 2016-09-09 | Tissue characterization based on machine learning in medical imaging |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3293736A1 EP3293736A1 (en) | 2018-03-14 |
EP3293736B1 true EP3293736B1 (en) | 2023-08-23 |
EP3293736C0 EP3293736C0 (en) | 2023-08-23 |
Family
ID=59811106
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP17189559.2A (EP3293736B1, active) | 2016-09-09 | 2017-09-06 | Tissue characterization based on machine learning in medical imaging |
Country Status (3)
Country | Link |
---|---|
US (1) | US10748277B2 (en) |
EP (1) | EP3293736B1 (en) |
CN (1) | CN107818821A (en) |
Families Citing this family (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6818424B2 (en) * | 2016-04-13 | 2021-01-20 | キヤノン株式会社 | Diagnostic support device, information processing method, diagnostic support system and program |
US10283221B2 (en) | 2016-10-27 | 2019-05-07 | International Business Machines Corporation | Risk assessment based on patient similarity determined using image analysis |
US10282843B2 (en) * | 2016-10-27 | 2019-05-07 | International Business Machines Corporation | System and method for lesion analysis and recommendation of screening checkpoints for reduced risk of skin cancer |
US10242442B2 (en) | 2016-10-27 | 2019-03-26 | International Business Machines Corporation | Detection of outlier lesions based on extracted features from skin images |
WO2018127497A1 (en) * | 2017-01-05 | 2018-07-12 | Koninklijke Philips N.V. | Ultrasound imaging system with a neural network for deriving imaging data and tissue information |
WO2018178212A1 (en) * | 2017-03-28 | 2018-10-04 | Koninklijke Philips N.V. | Ultrasound clinical feature detection and associated devices, systems, and methods |
US20180310918A1 (en) * | 2017-04-27 | 2018-11-01 | Siemens Medical Solutions Usa, Inc. | Variable focus for shear wave imaging |
CN107562805B (en) * | 2017-08-08 | 2020-04-03 | 浙江大华技术股份有限公司 | Method and device for searching picture by picture |
JP2019049884A (en) * | 2017-09-11 | 2019-03-28 | 株式会社東芝 | Image processing device and failure diagnosis control method |
US11893729B2 (en) * | 2017-11-22 | 2024-02-06 | GE Precision Healthcare LLC | Multi-modal computer-aided diagnosis systems and methods for prostate cancer |
CN111971688A (en) * | 2018-04-09 | 2020-11-20 | 皇家飞利浦有限公司 | Ultrasound system with artificial neural network for retrieving imaging parameter settings of relapsing patients |
EP3564906A1 (en) * | 2018-05-04 | 2019-11-06 | Siemens Healthcare GmbH | Method for generating image data in a computer tomography unit, image generating computer, computer tomography unit, computer program product and computer-readable medium |
US10957041B2 (en) | 2018-05-14 | 2021-03-23 | Tempus Labs, Inc. | Determining biomarkers from histopathology slide images |
US11348240B2 (en) | 2018-05-14 | 2022-05-31 | Tempus Labs, Inc. | Predicting total nucleic acid yield and dissection boundaries for histology slides |
US11348661B2 (en) | 2018-05-14 | 2022-05-31 | Tempus Labs, Inc. | Predicting total nucleic acid yield and dissection boundaries for histology slides |
EP3794551A4 (en) | 2018-05-14 | 2022-02-09 | Tempus Labs, Inc. | A generalizable and interpretable deep learning framework for predicting msi from histopathology slide images |
US11348239B2 (en) | 2018-05-14 | 2022-05-31 | Tempus Labs, Inc. | Predicting total nucleic acid yield and dissection boundaries for histology slides |
CN109584995B (en) * | 2018-06-20 | 2023-07-18 | 新影智能科技(昆山)有限公司 | TACE treatment result image analysis method, system, equipment and storage medium |
CN109124660B (en) * | 2018-06-25 | 2022-06-10 | 南方医科大学南方医院 | Gastrointestinal stromal tumor postoperative risk detection method and system based on deep learning |
CN108846840B (en) * | 2018-06-26 | 2021-11-09 | 张茂 | Lung ultrasonic image analysis method and device, electronic equipment and readable storage medium |
JP7167515B2 (en) * | 2018-07-17 | 2022-11-09 | 大日本印刷株式会社 | Identification device, program, identification method, information processing device and identification device |
CN112513674A (en) * | 2018-07-26 | 2021-03-16 | 皇家飞利浦有限公司 | Ultrasonic system for automatically and dynamically setting imaging parameters based on organ detection |
CN110910340A (en) * | 2018-08-28 | 2020-03-24 | 奥林巴斯株式会社 | Annotation device and annotation method |
JP6987721B2 (en) * | 2018-08-31 | 2022-01-05 | 富士フイルム株式会社 | Image processing equipment, methods and programs |
US10835761B2 (en) | 2018-10-25 | 2020-11-17 | Elekta, Inc. | Real-time patient motion monitoring using a magnetic resonance linear accelerator (MR-LINAC) |
US11083913B2 (en) | 2018-10-25 | 2021-08-10 | Elekta, Inc. | Machine learning approach to real-time patient motion monitoring |
CN109584209B (en) * | 2018-10-29 | 2023-04-28 | 深圳先进技术研究院 | Vascular wall plaque recognition apparatus, system, method, and storage medium |
US10803987B2 (en) * | 2018-11-16 | 2020-10-13 | Elekta, Inc. | Real-time motion monitoring using deep neural network |
CN109740626A (en) * | 2018-11-23 | 2019-05-10 | 杭州电子科技大学 | The detection method of cancerous area in breast cancer pathological section based on deep learning |
JP6915604B2 (en) | 2018-11-28 | 2021-08-04 | 横河電機株式会社 | Equipment, methods and programs |
US10936160B2 (en) * | 2019-01-11 | 2021-03-02 | Google Llc | System, user interface and method for interactive negative explanation of machine-learning localization models in health care applications |
JP7308258B2 (en) * | 2019-02-19 | 2023-07-13 | 富士フイルム株式会社 | Medical imaging device and method of operating medical imaging device |
CN110491502B (en) * | 2019-03-08 | 2021-03-16 | 腾讯科技(深圳)有限公司 | Microscope video stream processing method, system, computer device and storage medium |
CN110313930B (en) * | 2019-07-24 | 2023-07-04 | 沈阳智核医疗科技有限公司 | Method and device for determining scanning position and terminal equipment |
CN110555827B (en) * | 2019-08-06 | 2022-03-29 | 上海工程技术大学 | Ultrasonic imaging information computer processing system based on deep learning drive |
US11854676B2 (en) * | 2019-09-12 | 2023-12-26 | International Business Machines Corporation | Providing live first aid response guidance using a machine learning based cognitive aid planner |
CN110702792B (en) * | 2019-09-29 | 2023-02-10 | 中国航发北京航空材料研究院 | Alloy tissue ultrasonic detection classification method based on deep learning |
US20230070249A1 (en) * | 2020-02-05 | 2023-03-09 | Hangzhou Yitu Healthcare Technology Co., Ltd. | Medical imaging-based method and device for diagnostic information processing, and storage medium |
US11282193B2 (en) * | 2020-03-31 | 2022-03-22 | Ping An Technology (Shenzhen) Co., Ltd. | Systems and methods for tumor characterization |
US11508066B2 (en) | 2020-08-13 | 2022-11-22 | PAIGE.AI, Inc. | Systems and methods to process electronic images for continuous biomarker prediction |
CN112336381B (en) * | 2020-11-07 | 2022-04-22 | 吉林大学 | Echocardiogram end systole/diastole frame automatic identification method based on deep learning |
CN113240677B (en) * | 2021-05-06 | 2022-08-02 | 浙江医院 | Retina optic disc segmentation method based on deep learning |
CN113362958A (en) * | 2021-06-01 | 2021-09-07 | 深圳睿心智能医疗科技有限公司 | Method and device for predicting effect after application of treatment scheme |
CN113689382B (en) * | 2021-07-26 | 2023-12-01 | 北京知见生命科技有限公司 | Tumor postoperative survival prediction method and system based on medical images and pathological images |
CN113951931B (en) * | 2021-10-25 | 2024-02-13 | 中国人民解放军总医院第一医学中心 | Liver trauma ultrasonic diagnosis equipment and system based on twin prototype network |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1236173A2 (en) | 1999-10-27 | 2002-09-04 | Biowulf Technologies, LLC | Methods and devices for identifying patterns in biological systems |
US9235887B2 (en) * | 2008-02-19 | 2016-01-12 | Elucid Bioimaging, Inc. | Classification of biological tissue by multi-mode data registration, segmentation and characterization |
US9275456B2 (en) * | 2010-10-29 | 2016-03-01 | The Johns Hopkins University | Image search engine |
CN104602638B (en) * | 2012-06-27 | 2017-12-19 | 曼特瑞斯医药有限责任公司 | System for influenceing to treat tissue |
US9177105B2 (en) * | 2013-03-29 | 2015-11-03 | Case Western Reserve University | Quantitatively characterizing disease morphology with co-occurring gland tensors in localized subgraphs |
EP3047415A1 (en) * | 2013-09-20 | 2016-07-27 | Siemens Aktiengesellschaft | Biopsy-free detection and staging of cancer using a virtual staging score |
US9655563B2 (en) * | 2013-09-25 | 2017-05-23 | Siemens Healthcare Gmbh | Early therapy response assessment of lesions |
US9430829B2 (en) * | 2014-01-30 | 2016-08-30 | Case Western Reserve University | Automatic detection of mitosis using handcrafted and convolutional neural network features |
EP3155592B1 (en) * | 2014-06-10 | 2019-09-11 | Leland Stanford Junior University | Predicting breast cancer recurrence directly from image features computed from digitized immunohistopathology tissue slides |
US9537808B1 (en) * | 2015-07-14 | 2017-01-03 | Kyocera Document Solutions, Inc. | Communication between peripheral device and client device |
CN107851194A (en) | 2015-08-04 | 2018-03-27 | 西门子公司 | Visual representation study for brain tumor classification |
US9739783B1 (en) * | 2016-03-15 | 2017-08-22 | Anixa Diagnostics Corporation | Convolutional neural networks for cancer diagnosis |
US20190021677A1 (en) * | 2017-07-18 | 2019-01-24 | Siemens Healthcare Gmbh | Methods and systems for classification and assessment using machine learning |
US10521911B2 (en) * | 2017-12-05 | 2019-12-31 | Siemens Healthcare GmbH | Identification of defects in imaging scans |
2016
- 2016-09-09: US application US15/261,124 filed (granted as US10748277B2, status: Active)
2017
- 2017-09-06: EP application EP17189559.2A filed (granted as EP3293736B1, status: Active)
- 2017-09-08: CN application CN201710805375.9A filed (published as CN107818821A, status: Pending)
Also Published As
Publication number | Publication date |
---|---|
US10748277B2 (en) | 2020-08-18 |
EP3293736A1 (en) | 2018-03-14 |
US20180075597A1 (en) | 2018-03-15 |
CN107818821A (en) | 2018-03-20 |
EP3293736C0 (en) | 2023-08-23 |
Similar Documents
Publication | Title
---|---
EP3293736B1 (en) | Tissue characterization based on machine learning in medical imaging
US11551353B2 (en) | Content based image retrieval for lesion analysis
Sarvamangala et al. | Convolutional neural networks in medical image understanding: a survey
Mazurowski et al. | Deep learning in radiology: An overview of the concepts and a survey of the state of the art with focus on MRI
Dar et al. | Breast cancer detection using deep learning: Datasets, methods, and challenges ahead
US20200085382A1 (en) | Automated lesion detection, segmentation, and longitudinal identification
US10489907B2 (en) | Artifact identification and/or correction for medical imaging
US10499857B1 (en) | Medical protocol change in real-time imaging
US10496884B1 (en) | Transformation of textbook information
Saba | Automated lung nodule detection and classification based on multiple classifiers voting
US10111632B2 (en) | System and method for breast cancer detection in X-ray images
US10853449B1 (en) | Report formatting for automated or assisted analysis of medical imaging data and medical diagnosis
US9959486B2 (en) | Voxel-level machine learning with or without cloud-based support in medical imaging
EP3375376B1 (en) | Source of abdominal pain identification in medical imaging
Mahmood et al. | Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach
Yeo et al. | Review of deep learning algorithms for the automatic detection of intracranial hemorrhages on computed tomography head imaging
US20190150870A1 (en) | Classification of a health state of tissue of interest based on longitudinal features
Sanyal et al. | An automated two-step pipeline for aggressive prostate lesion detection from multi-parametric MR sequence
Homayoun et al. | Automated segmentation of abnormal tissues in medical images
Zhao | Deep learning based medical image segmentation and classification for artificial intelligence healthcare
Xi et al. | Computer-aided detection of abnormality in mammography using deep object detectors
Shiny | Brain tumor segmentation and classification using optimized U-Net
Zhao et al. | Data augmentation for medical image analysis
Mehmood et al. | A classifier model for prostate cancer diagnosis using CNNs and transfer learning with multi-parametric MRI
Shargunam et al. | An efficient glioma classification and grade detection using hybrid convolutional neural network-based SVM model
Legal Events
Code | Title | Description
---|---|---
PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | ORIGINAL CODE: 0009012
STAA | Information on the status of an EP patent application or granted EP patent | STATUS: REQUEST FOR EXAMINATION WAS MADE
17P | Request for examination filed | Effective date: 20170906
AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
AX | Request for extension of the European patent | Extension state: BA ME
RBV | Designated contracting states (corrected) | Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
STAA | Information on the status of an EP patent application or granted EP patent | STATUS: REQUEST FOR EXAMINATION WAS MADE
R17P | Request for examination filed (corrected) | Effective date: 20170906
STAA | Information on the status of an EP patent application or granted EP patent | STATUS: EXAMINATION IS IN PROGRESS
17Q | First examination report despatched | Effective date: 20210514
STAA | Information on the status of an EP patent application or granted EP patent | STATUS: EXAMINATION IS IN PROGRESS
GRAP | Despatch of communication of intention to grant a patent | ORIGINAL CODE: EPIDOSNIGR1
STAA | Information on the status of an EP patent application or granted EP patent | STATUS: GRANT OF PATENT IS INTENDED
RIC1 | Information provided on IPC code assigned before grant | IPC: G16H 30/40 (20180101, ALN, 20230313, BHEP); G16H 50/70 (20180101, ALN, 20230313, BHEP); G16H 50/20 (20180101, AFI, 20230313, BHEP)
INTG | Intention to grant announced | Effective date: 20230404
GRAS | Grant fee paid | ORIGINAL CODE: EPIDOSNIGR3
GRAA | (expected) grant | ORIGINAL CODE: 0009210
STAA | Information on the status of an EP patent application or granted EP patent | STATUS: THE PATENT HAS BEEN GRANTED
AK | Designated contracting states | Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
REG | Reference to a national code | Ref country code: GB; Ref legal event code: FG4D
REG | Reference to a national code | Ref country code: CH; Ref legal event code: EP
REG | Reference to a national code | Ref country code: IE; Ref legal event code: FG4D
REG | Reference to a national code | Ref country code: DE; Ref legal event code: R096; Ref document number: 602017073014; Country of ref document: DE
U01 | Request for unitary effect filed | Effective date: 20230920
U07 | Unitary effect registered | Designated state(s): AT BE BG DE DK EE FI FR IT LT LU LV MT NL PT SE SI; Effective date: 20230926
U20 | Renewal fee paid [unitary effect] | Year of fee payment: 7; Effective date: 20230926
U1N | Appointed representative for the unitary patent procedure changed [after the registration of the unitary effect] | Representative's name: SIEMENS HEALTHINEERS PATENT ATTORNEYS; DE
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Ref country code: GR; LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20231124
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | Ref country code: GB; Payment date: 20231009; Year of fee payment: 7
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Ref country code: IS; LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20231223
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | RS (20230823); NO (20231123); IS (20231223); HR (20230823); GR (20231124); all: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
RAP2 | Party data changed (patent owner data changed or rights of a patent transferred) | Owner name: THE JOHNS HOPKINS UNIVERSITY; Owner name: SIEMENS HEALTHINEERS AG
U1K | Transfer of rights of the unitary patent after the registration of the unitary effect | Owner name: THE JOHNS HOPKINS UNIVERSITY, US; Owner name: SIEMENS HEALTHINEERS AG, DE
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Ref country code: PL; LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20230823
U1N | Appointed representative for the unitary patent procedure changed [after the registration of the unitary effect] | Representative's name: SIEMENS HEALTHINEERS PATENT ATTORNEYS; DE