WO2022203660A1 - Method and system of diagnosing nodules in mammals with radiomics features and semantic imaging descriptive features - Google Patents

Method and system of diagnosing nodules in mammals with radiomics features and semantic imaging descriptive features

Info

Publication number
WO2022203660A1
WO2022203660A1 (PCT/US2021/023791)
Authority
WO
WIPO (PCT)
Prior art keywords
nodule
features
imaging
semantic
classification model
Prior art date
Application number
PCT/US2021/023791
Other languages
English (en)
Inventor
Cheng-Yu Chen
Yung-Chun Chang
Cho-Chiang Shih
Original Assignee
Taipei Medical University
CHEN, David, Carroll
Priority date
Filing date
Publication date
Application filed by Taipei Medical University, CHEN, David, Carroll filed Critical Taipei Medical University
Priority to PCT/US2021/023791 priority Critical patent/WO2022203660A1/fr
Priority to TW110119789A priority patent/TW202238617A/zh
Publication of WO2022203660A1 publication Critical patent/WO2022203660A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/032Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.

Definitions

  • the present invention relates to a method, a computer storage medium, and a system for diagnosing nodules and, more particularly, to a method, a computer storage medium, and a system for diagnosing nodules in mammals, which input radiomics features and slice information computed from 3D medical images into a machine learning model to generate semantic imaging descriptive features and subsequently input the radiomics features and the semantic imaging descriptive features into another machine learning model to predict whether the nodules are benign or malign.
  • Nodules arise from growth of abnormal tissue and may develop just below the skin, in deeper skin tissues, or in internal organs.
  • lung nodules are small masses of tissue in the lung and can be detected as round and white shadows on a chest X-ray or computerized tomography (CT) scan.
  • CT computerized tomography
  • Although most nodules, especially small nodules less than one centimeter, are benign or noncancerous, some nodules turn out to be malign or cancerous.
  • a statistical survey indicates that lung cancer is the second most common cancer and the leading cause of cancer death for men and women. From 2008 to 2017, the death rates for men with lung cancer dropped by 4% each year and the death rates for women with lung cancer declined 3% per year. Research indicates that these declines are potentially attributable to medical advances in diagnosis and treatment.
  • Radiologists need to analyze nodules on CT images based on their texture, size, and shape and generate a radiology report so that doctors can suggest follow-up, biopsy, and/or treatment.
  • Such analysis and report generation often costs radiologists tremendous time, and the radiology reports prepared by radiologists are not always fully correct.
  • With diagnostic error rates ranging from 3% to 5%, there are approximately 40 million diagnostic errors involving imaging annually worldwide. The potential to improve diagnostic performance and reduce patient harm by identifying and learning from these errors, and to alleviate the burden of radiologists in nodule annotation and evaluation, is therefore substantial.
  • An objective of the present invention is to provide a method, a system, and a computer storage medium for nodule diagnosis capable of alleviating the workloads of radiologists in preparation of routine radiology report and enhancing prediction accuracy of nodule’s benignancy and malignancy.
  • the nodule-diagnosing method includes: generating a voxel with each of at least one nodule segmented on multiple 3D medical images; computing multiple radiomics features associated with the voxel; and inputting the multiple radiomics features and multiple semantic imaging descriptive features associated with the voxel to a first classification model to predict if the nodule is benign or malign, in which the first classification model is trained by deep learning or machine learning.
  • the method further includes: computing slice information of the nodule from the 3D medical images containing the nodule, in which the slice information includes nodule location, start slice, end slice, and slices of the nodule over total slices of the 3D medical images; and inputting the multiple radiomics features and the slice information to a second classification model to predict the multiple semantic imaging descriptive features, in which the second classification model is trained by deep learning or machine learning and the multiple semantic imaging descriptive features predicted by the second classification model include three semantic imaging descriptive features associated with categories of location, texture, and margin respectively.
  • the nodule-diagnosing method further includes: computing a size of the nodule according to the shape-based features of the multiple radiomics features and a scanning setting of the 3D medical images; computing a score of the nodule specified in standards of Lung-RADS (Lung Imaging Reporting and Data System) according to the size of the nodule, the semantic imaging descriptive feature associated with texture, and the semantic imaging descriptive feature optionally provided and associated with margin; and generating a radiology report of the nodule with a malignant probability predicted by the first classification model, the score of the nodule, the size of the nodule, and the semantic imaging descriptive features associated with location, margin, and texture.
  • Lung-RADS Lung Imaging Reporting and Data System
  • the computer storage medium is communicatively connected to a processor with an embedded memory and includes multiple 3D medical images, a first classification model, and computer-executable instructions.
  • the multiple 3D medical images contain at least one nodule segmented thereon.
  • the first classification model is trained by deep learning or machine learning.
  • When executed by the processor, the computer-executable instructions cause the processor to generate a voxel with each of the at least one nodule, compute multiple radiomics features associated with the voxel, and input the multiple radiomics features and multiple semantic imaging descriptive features associated with the voxel to the first classification model to predict if the nodule is benign or malign.
  • the computer storage medium further includes a second classification model trained by deep learning or machine learning; when executed by the processor, the computer-executable instructions cause the processor to compute slice information of the nodule from the 3D medical images containing the nodule, in which the slice information includes nodule location, start slice, end slice, and slices of the nodule over total slices of the 3D medical images, and input the multiple radiomics features and the slice information to the second classification model to predict the multiple semantic imaging descriptive features including three semantic imaging descriptive features associated with categories of location, texture, and margin respectively.
  • a second classification model trained by deep learning or machine learning
  • When executed by the processor, the computer-executable instructions cause the processor to compute a size of the nodule according to the shape-based features of the multiple radiomics features and a scanning setting of the 3D medical images, compute a score of the nodule specified in standards of Lung-RADS according to the size of the nodule, the semantic imaging descriptive feature associated with texture, and the semantic imaging descriptive feature optionally provided and associated with margin, and generate a radiology report of the nodule with a malignant probability predicted by the first classification model, the score of the nodule, the size of the nodule, and the semantic imaging descriptive features associated with the categories of location, margin, and texture.
  • the system includes a processor with an embedded memory and a computer storage medium.
  • the computer storage medium is communicatively connected to the processor and stores multiple 3D medical images containing at least one nodule segmented thereon, a first classification model trained by deep learning or machine learning, and computer-executable instructions.
  • When executed by the processor, the computer-executable instructions cause the processor to perform acts including: generating a voxel with each of the at least one nodule; computing multiple radiomics features associated with the voxel; and inputting the multiple radiomics features and multiple semantic imaging descriptive features associated with the nodule to the first classification model to predict if the nodule is benign or malign.
  • the computer storage medium further includes a second classification model trained by deep learning or machine learning.
  • When executed by the processor, the computer-executable instructions cause the processor to further perform acts including: computing slice information of the nodule from the 3D medical images containing the nodule, in which the slice information includes nodule location, start slice, end slice, and slices of the nodule over total slices of the 3D medical images; and inputting the multiple radiomics features and the slice information to the second classification model to predict the multiple semantic imaging descriptive features including three semantic imaging descriptive features associated with categories of location, texture, and margin respectively.
  • When executed by the processor, the computer-executable instructions cause the processor to further perform acts including: computing a size of the nodule according to the shape-based features of the multiple radiomics features and a scanning setting of the 3D medical images; computing a score of the nodule specified in standards of Lung-RADS according to the size of the nodule, the semantic imaging descriptive feature associated with texture, and the semantic imaging descriptive feature optionally provided and associated with margin; and generating a radiology report of the nodule with a malignant probability predicted by the first classification model, the score of the nodule, the size of the nodule, and the semantic imaging descriptive features associated with location, margin, and texture.
  • The features common to the method, the system, and the computer storage medium are to automatically generate a voxel based on a segmented nodule on 3D medical images, compute the radiomics features according to the voxel, compute the slice information of the nodule from the 3D medical images, input the radiomics features and the slice information to the second classification model to predict the semantic imaging descriptive features associated with location, texture, and margin, input the radiomics features and the semantic imaging descriptive features associated with location, texture, and margin to the first classification model to predict if the nodule is benign or malign, compute the size of the nodule according to the radiomics features and the scanning setting of the 3D medical images, compute a score of the nodule according to the size of the nodule and the semantic imaging descriptive features associated with texture and margin, and generate a radiology report of the nodule with the malignant probability predicted by the first classification model, the score of the nodule, the size of the nodule, and the semantic imaging descriptive features associated with location, texture, and margin.
  • the first classification model with the inputs of the radiomics features and the semantic imaging descriptive features has a better prediction accuracy than that of the first classification model with the inputs of the radiomics features only.
  • Fig. 1 is a flow diagram showing a first embodiment of a nodule-diagnosing method in accordance with the present invention;
  • Fig. 2 is a CT image with a human lung nodule segmented thereon;
  • Fig. 3 is a schematic diagram showing the network architecture of a first classification model in accordance with the present invention;
  • Fig. 4 is a flow diagram showing a second embodiment of a nodule-diagnosing method in accordance with the present invention;
  • Fig. 5 is a schematic diagram showing the network architecture of a second classification model in accordance with the present invention;
  • Fig. 6 is a flow diagram showing a third embodiment of a nodule-diagnosing method in accordance with the present invention;
  • Fig. 7 is a flow diagram showing a fourth embodiment of a nodule-diagnosing method in accordance with the present invention;
  • Fig. 8 is a tree diagram showing a mapping relationship that maps a score of a nodule to the semantic imaging descriptive features associated with texture and, optionally, margin and to a size of the nodule in accordance with the present invention;
  • Fig. 9 is a schematic diagram showing a nodule-diagnosing system in accordance with the present invention; and
  • Fig. 10 is a schematic diagram showing a computer storage medium in accordance with the present invention.
  • Such special-purpose circuitry can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), central processing units (CPUs), graphics processing units (GPUs), tensor processing units (TPUs), vision processor units (VPUs), dataflow processing units (DPUs), intelligent processing units (IPUs), etc.
  • ASICs application-specific integrated circuits
  • PLDs programmable logic devices
  • FPGAs field-programmable gate arrays
  • CPUs central processing units
  • GPUs graphics processing units
  • TPUs tensor processing units
  • VPUs vision processor units
  • DPUs dataflow processing units
  • IPUs intelligent processing units
  • the described embodiments concern one or more methods, systems, and computer storage media storing processor-executable process steps for providing radiomics features and slice information of a nodule segmented on multiple three-dimensional (3D) medical images and utilizing two classification models to predict multiple semantic imaging descriptive features of the nodule and predict if the nodule is benign or malign.
  • the radiomics features are computed from a voxel that is computed from the nodule on the 3D medical images while the slice information is directly computed from slices of the multiple 3D medical images involved with the nodule.
  • the classification model for predicting the semantic imaging descriptive features of the nodule takes radiomics features and the slice information of the nodule as inputs and predicts the semantic imaging descriptive features of the nodule associated with categories of location, margin and texture.
  • the classification model for predicting if the nodule is benign or malign takes the radiomics features and the semantic imaging descriptive features of the nodule as inputs and predicts if the nodule is benign or malign.
  • Each of the two classification models may be trained by machine learning or deep learning.
  • the former classification model may serve to automatically generate the semantic imaging descriptive features associated with a smaller set of categories, namely location, texture, and margin.
  • Alternatively, the 3D medical images containing the nodule can be reviewed by radiologists beforehand to prepare the semantic imaging descriptive features associated with more categories.
  • The semantic imaging descriptive features generated by the former classification model and those prepared beforehand can both serve as inputs to the latter classification model, while the latter classification model using the semantic imaging descriptive features from the former classification model as inputs may slightly trade off its prediction performance in comparison with using the semantic imaging descriptive features prepared beforehand.
  • a radiology report can be generated automatically. More details of the described embodiments are elaborated in the following description.
  • one embodiment of a nodule-diagnosing method in accordance with the present invention includes the following steps.
  • Step S110 Generate a voxel with each of at least one nodule segmented on multiple 3D medical images.
  • the at least one nodule may develop in organs of mammals, including the lungs, thyroid, lymph nodes, vocal cords, and so on, and in skin tissue.
  • the 3D medical images here are generated by scanning through a portion of a mammal’s body with a medical imaging technique, such as computerized tomography (CT) scan, magnetic resonance imaging (MRI), Positron Emission Tomography (PET) scan, or the like.
  • CT computerized tomography
  • MRI magnetic resonance imaging
  • PET Positron Emission Tomography
  • the CT images are generated by a 5-mm-thick CT scan.
  • the at least one nodule can be segmented by a convolutional neural network (CNN) suitable for semantic segmentation or segmented by experienced radiologists before the current step commences.
  • CNN convolutional neural network
  • the experienced radiologists here are defined as the radiologists with 5 to 10 years of experience in nodule annotation including nodule segmentation and description of radiomics features.
  • Each segmented nodule is in a form of pixel-wise masks across some slices of the 3D medical images.
  • each of the at least one segmented nodule is less than 30 mm in diameter.
  • In one example shown in Fig. 2, one segmented nodule, which is observed as an area of haze indicative of ground-glass opacity (GGO), is displayed on a CT image of human lungs.
  • GGO ground-glass opacity
  • the segmented nodule can be reconstructed as a voxel on a three-dimensional grid.
  • a voxel means volumetric pixels which are spaced in a regular grid in a three-dimensional space and are perceived without gaps between them.
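  • The sketch below (assuming NumPy and SimpleITK are available; the function and argument names are hypothetical) illustrates one way to assemble such a voxel volume from the per-slice pixel-wise masks of a segmented nodule.

```python
import numpy as np
import SimpleITK as sitk
from typing import Dict

def nodule_voxel(ct_path: str, mask_slices: Dict[int, np.ndarray]) -> np.ndarray:
    """Build a 3-D voxel array for one segmented nodule.

    mask_slices maps a slice index to that slice's binary (H, W) nodule mask;
    slices the nodule does not reach remain filled with zeros.
    """
    volume = sitk.GetArrayFromImage(sitk.ReadImage(ct_path))   # shape: (slices, H, W)
    voxel = np.zeros(volume.shape, dtype=np.uint8)
    for z, mask in mask_slices.items():
        voxel[z] = mask.astype(np.uint8)                        # place the mask on its slice
    return voxel
```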
  • Step S120 Compute multiple radiomics features associated with the voxel.
  • the voxel corresponding to each segmented nodule can be used to calculate multiple radiomics features, which are defined as the quantification of the phenotypic features of a lesion from the voxel and can potentially be used to diagnose cancer, identify mutations, and predict prognosis in an accurate and noninvasive fashion, by conducting image acquisition and reconstruction, image segmentation, feature extraction and qualification, analysis, and model building on the voxel.
  • Table 1 Types of radiomics features for lung nodules
  • The radiomics features are associated with categories of intensity, shape, GLCM (Gray Level Co-Occurrence Matrix), GLRLM (Gray Level Run-Length Matrix), GLSZM (Gray Level Size Zone Matrix), NGTDM (Neighboring Gray Tone Difference Matrix), and GLDM (Gray Level Dependence Matrix).
  • the 18 intensity-based radiomics features describe the distribution of individual voxel values without concern for spatial relationships. These are histogram-based properties reporting the mean, median, maximum, and minimum values of the voxel intensities on the image, as well as their skewness (asymmetry), kurtosis (flatness), uniformity, and randomness (entropy).
  • the 14 shape-based radiomics features describe the shape of the traced region of interest (ROI) and its geometric properties such as volume, maximum diameter along different orthogonal directions, maximum surface, nodule compactness, and sphericity. For example, the surface-to-volume ratio of a spiculated nodule will show higher values than that of a round nodule of similar volume.
  • ROI traced region of interest
  • texture-based radiomics features are obtained by calculating the statistical inter-relationships between neighboring voxels. They provide a measure of the spatial arrangement of the voxel intensities, and hence of intra-lesion heterogeneity.
  • These texture-based radiomics features can be derived from the GLCM, which quantifies the incidence of zones in the voxel with the same intensities at a predetermined distance along a fixed direction; from the GLRLM, which quantifies consecutive zones in the voxel with the same intensity along fixed directions; from the GLSZM, which quantifies gray level zones in the voxel, each of which is defined as the number of connected zones that share the same gray level intensity; from the NGTDM, which quantifies the difference between a gray value of a zone of the voxel and the average gray value of its neighboring zones within a distance; and from the GLDM, which quantifies gray level dependencies in the voxel, with each gray level dependency defined as the number of connected zones within a distance.
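  • As a hedged sketch of this step, the snippet below computes the intensity, shape, and texture feature classes with the PyRadiomics package; the patent does not name a particular library, so this is only one possible implementation, and extract_radiomics, ct_path, and voxel_mask are illustrative names.

```python
import SimpleITK as sitk
from radiomics import featureextractor

def extract_radiomics(ct_path: str, voxel_mask) -> dict:
    """Return {feature_name: value} for the intensity, shape, and texture classes."""
    image = sitk.ReadImage(ct_path)
    mask = sitk.GetImageFromArray(voxel_mask)   # voxel built in step S110
    mask.CopyInformation(image)                 # align spacing/origin/direction with the CT

    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.disableAllFeatures()
    for cls in ("firstorder", "shape", "glcm", "glrlm", "glszm", "ngtdm", "gldm"):
        extractor.enableFeatureClassByName(cls)

    result = extractor.execute(image, mask)
    # Drop the diagnostic metadata entries and keep only the feature values.
    return {k: v for k, v in result.items() if not k.startswith("diagnostics")}
```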
  • Step S150 Input the multiple radiomics features and multiple semantic imaging descriptive features associated with the nodule to a first classification model to predict if the nodule is benign or malign.
  • the first classification model may be trained by either machine learning or deep learning. When trained by machine learning, the first classification model may be one of Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Random Forest, Decision Tree, and extreme Gradient Boosting.
  • SVM Support Vector Machine
  • LDA Linear Discriminant Analysis
  • Random Forest Random Forest
  • Decision Tree Decision Tree
  • extreme Gradient Boosting extreme Gradient Boosting.
  • When trained by deep learning, the first classification model may be an artificial neural network which converts the radiomics features and the semantic imaging descriptive features in the form of vectors into two-dimensional input matrices respectively, extracts two feature maps from the respective input matrices, concatenates the two feature maps to form a merged feature map, and predicts if the nodule is benign or malign with the merged feature map.
  • As shown in Fig. 3, an embodiment of the artificial neural network for the first classification model is implemented by two input layers 31, 32 that convert the radiomics features and the semantic imaging descriptive features in the form of vectors into two-dimensional input matrices respectively, three convolutional layers 33, 34, 35 that extract two feature maps from the respective input matrices and concatenate the two feature maps to form a merged feature map, and two dense layers 36, 37 that predict if the nodule is benign or malign.
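  • The Keras sketch below mirrors this two-branch layout under stated assumptions: the matrix sizes, convolution settings, layer widths, and the use of a learned projection to turn each feature vector into a two-dimensional matrix are assumptions not specified by the patent, and N_SEMANTIC is an assumed length for the encoded semantic imaging descriptive features.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_RADIOMICS = 107   # the 107-dimensional radiomics vector described later in the text
N_SEMANTIC = 16     # assumed length of the encoded semantic imaging descriptive features

rad_in = layers.Input(shape=(N_RADIOMICS,), name="radiomics")
sem_in = layers.Input(shape=(N_SEMANTIC,), name="semantic")

# "Input layers" 31, 32: convert each feature vector into a 2-D matrix
# (a learned projection followed by a reshape is used here as one option).
rad_mat = layers.Reshape((11, 11, 1))(layers.Dense(121)(rad_in))
sem_mat = layers.Reshape((4, 4, 1))(layers.Dense(16)(sem_in))

# Convolutional layers 33, 34: extract one feature map per input matrix.
rad_map = layers.Conv2D(16, 3, padding="same", activation="relu")(rad_mat)
sem_map = layers.Conv2D(16, 3, padding="same", activation="relu")(sem_mat)

# Concatenate the two feature maps into a merged feature map and apply a
# further convolution (layer 35).
merged = layers.Concatenate()([layers.Flatten()(rad_map), layers.Flatten()(sem_map)])
merged = layers.Conv1D(8, 3, activation="relu")(layers.Reshape((-1, 1))(merged))

# Dense layers 36, 37: output the malignant probability of the nodule.
hidden = layers.Dense(64, activation="relu")(layers.Flatten()(merged))
malignant_prob = layers.Dense(1, activation="sigmoid", name="malignant_probability")(hidden)

first_model = Model([rad_in, sem_in], malignant_prob)
first_model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=[tf.keras.metrics.AUC(name="auc")])
```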
  • input data to the first classification model include the multiple radiomics features and the multiple semantic imaging descriptive features.
  • the multiple radiomics features are already available from the foregoing step S120.
  • the multiple semantic imaging descriptive features are therefore the focus of the following discussion.
  • the multiple semantic imaging descriptive features can be provided in a comprehensive or partial fashion.
  • the comprehensive semantic imaging descriptive features of a nodule on the 3D medical images can be defined through an in-depth discussion and thorough evaluation among experienced radiologists. As the radiologists can perform their nodule evaluation with the 3D medical images in an offline fashion, the comprehensive semantic imaging descriptive features can be made available before the nodule-diagnosing method starts.
  • the comprehensive semantic imaging descriptive features for a lung nodule can be categorized by location, size, intratumoral features, perinodular features, pleural changes, and lymph nodes of the nodule, as listed in Table 2.
  • the comprehensive semantic imaging descriptive features can be annotated on the multiple 3D medical images by radiologists to add pathological content to the 3D medical images and facilitate easy and automatic search of the content.
  • Each category of the comprehensive semantic imaging descriptive features may be further divided into multiple sub-categories or include multiple options, and each sub-category may be a numeric value or include multiple options.
  • the category of location further includes two sub-categories of right lung and left lung.
  • the sub-category of right lung includes three options (RUL, RML and RLL) to indicate a location of the nodule appearing at upper, middle and lower locations of right lung respectively.
  • the sub-category of left lung includes three options (LUL, LLL and lingular lobe) to indicate a location of the nodule appearing at upper, lower and lingular lobe of left lung respectively.
  • the category of size includes sub-categories of maximum diameter, volume, and size follow-up.
  • the sub-categories of maximum diameter and volume involve numeric values.
  • the sub-category of size follow-up includes four options, namely, not available, T0, T1, and T2, to indicate how many times the nodule has been followed up, in which T0 represents that the nodule was first found in the CT scanning, T1 represents the first follow-up, and T2 represents the second follow-up.
  • the category of intratumoral features includes six sub-categories, namely, texture, shape, margin, other features, contrast enhancement, and contrast enhancement pattern.
  • the sub-category of texture includes options of solid, subsolid and pure ground glass opacity (GGO) which means non-solid structure.
  • the sub-category of shape includes options of irregular, round, ovoid and wedged.
  • the sub-category of margin includes options of sharp circumscribed, lobulated, indistinct and spiculated.
  • the sub-category of other features includes options of cavitation, air-bronchogram and necrosis.
  • the sub-category of contrast enhancement includes options of not done, yes and no.
  • the sub-category of contrast enhancement pattern includes options of heterogeneous and homogeneous.
  • the category of perinodular features includes options of perinodule fibrosis, interlobular septal thickening, perinodule emphysema, satellite nodule, chest wall involvement, and fissure attachment.
  • the category of pleural changes includes options of pleural retraction, pleural nodularity, and pleural effusion.
  • the category of lymph nodes includes sub-categories of no lymphadenopathy and lymphadenopathy.
  • the sub-category of lymphadenopathy includes left hilum, right hilum and Mediastinum.
  • the partial semantic imaging descriptive features, associated with a part instead of all of the categories listed in Table 2, can be acquired from an automatic approach, such as a second classification model, which will be introduced shortly.
  • Such an automatic approach aims to eliminate the considerable time radiologists would otherwise spend generating the comprehensive semantic imaging descriptive features.
  • the automatic approach may lead to a slightly inferior prediction accuracy of the first classification model relative to that of the first classification model with inputs of the comprehensive semantic imaging descriptive features prepared by radiologists in advance.
  • Nevertheless, the comprehensive semantic imaging descriptive features, which cover more aspects of the nodule, are well suited as training data to train the first classification model into a better model at the training stage.
  • a malignant probability of the nodule can range from 0% to 100%. Although not yet a customary indicator for lung diagnosis, the malignant probability can be treated as a reference indicator for the medical practitioners involved to statistically analyze the malignancy and benignancy of a nodule.
  • In the training data, the labelled malignant probabilities of benign and malign nodules are 0% and 100% respectively. The closer the malignant probability output by the first classification model is to 0%, the more likely the nodule is benign; the closer it is to 100%, the more likely the nodule is malign.
  • the automatic approach is implemented by the second classification model trained by machine learning or deep learning.
  • another embodiment of a nodule-diagnosing method in accordance with the present invention further adds the following steps between the steps S120 and S150, which are similar to those in Fig. 1.
  • Step S130 Compute slice information of the nodule from the 3D medical images containing the nodule. Unlike the radiomics features calculated from the voxel, the slice information is directly acquired and computed from the original 3D medical images. As an aid that helps the radiomics features describe the location, texture, and margin of the nodule more accurately, the slice information is intended to directly pinpoint information relating to the location, margin, and texture of the nodule on specific slices of the original 3D medical images.
  • the slice information of a nodule serves to identify the consecutive slices of the 3D medical images on which the nodule is located and includes data fields of nodule location, start slice of the nodule, end slice of the nodule, the slice with the largest cross-section of the nodule, and the ratio of the count of the consecutive slices to the total count of slices of the 3D medical images.
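  • A minimal sketch of this computation follows, assuming the voxel mask from step S110 and NumPy; slice_information and nodule_location are illustrative names, and the encoding of the nodule location is left to whatever lobe label is annotated or predicted elsewhere.

```python
import numpy as np

def slice_information(voxel_mask: np.ndarray, nodule_location: str) -> dict:
    """voxel_mask has shape (total_slices, H, W) with 1s on the nodule."""
    per_slice_area = voxel_mask.reshape(voxel_mask.shape[0], -1).sum(axis=1)
    nodule_slices = np.flatnonzero(per_slice_area)      # slices the nodule appears on
    return {
        "nodule_location": nodule_location,
        "start_slice": int(nodule_slices[0]),
        "end_slice": int(nodule_slices[-1]),
        "largest_cross_section_slice": int(per_slice_area.argmax()),
        "slice_ratio": len(nodule_slices) / voxel_mask.shape[0],
    }
```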
  • Step S140 Input the multiple radiomics features and the slice information to the second classification model to predict the multiple semantic imaging descriptive features.
  • the second classification model can be trained by either machine learning or deep learning.
  • the second classification model may be one of Naive Bayes (NB), K-nearest Neighbors (KNN), Random Forest, and extreme Gradient Boosting (XGB).
  • NB Naive Bayes
  • KNN K-nearest Neighbors
  • XGB extreme Gradient Boosting
  • When trained by deep learning, the second classification model includes three artificial neural networks: the multiple radiomics features and the slice information are input to one of the three artificial neural networks to predict the semantic imaging descriptive feature in the category of location, and the multiple radiomics features alone are input to each of the remaining two artificial neural networks to respectively predict the semantic imaging descriptive features in the categories of margin and texture.
  • each of the three artificial neural networks 40, 50, 60 includes an input layer 41, 51, 61, two dense layers 42, 43, 52, 53, 62, 63, and an output layer 44, 54, 64.
  • the inputs into the input layer 41 are in the form of a multi-feature fusion vector that integrates the 107-dimensional radiomics features along with the slice information,
  • the activation function for the dense layers 42, 43 is chosen to be tanh (Hyperbolic Tangent), and
  • the activation for the output layer 44 is chosen to be softmax.
  • the inputs into the input layer 61 are in a vector form of the 107-dimensional radiomics features and the activation functions for the dense layers 62, 63 and the output layer 64 are the same as those in the artificial neural network 40 outputting the semantic imaging descriptive feature in the category of location.
  • the inputs into the input layer 51 are in a vector form of the 107-dimensional radiomics features
  • the activation function for the dense layers 52, 53 is chosen to be ReLU (Rectified Linear Unit)
  • the activation for the output layer 54 is chosen to be softmax.
  • the three artificial neural networks 40, 50, 60 are separated and operated in parallel to predict and output the partial semantic imaging descriptive features.
  • Each of the artificial neural networks 40, 50, 60 is preferred to be a multilayer perceptron (MLP) neural network, which is considered appropriate to handle inputs involving both numeric and text data such as the radiomics features.
  • MLP multilayer perceptron
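  • The Keras sketch below shows one possible implementation of the location branch (network 40 in Fig. 5) under stated assumptions: the hidden-layer widths and the number of location classes are assumptions, and the margin and texture branches would be built analogously (the margin branch with ReLU activations, per the description above).

```python
from tensorflow.keras import layers, Model

N_RADIOMICS = 107    # radiomics feature vector
N_SLICE_INFO = 5     # assumed number of numeric slice-information fields
N_LOCATIONS = 6      # RUL, RML, RLL, LUL, LLL, lingular lobe

# Input layer 41: multi-feature fusion vector of radiomics features + slice information.
fusion_in = layers.Input(shape=(N_RADIOMICS + N_SLICE_INFO,), name="radiomics_plus_slice_info")

# Dense layers 42, 43 with tanh activation, as described for network 40.
hidden = layers.Dense(64, activation="tanh")(fusion_in)
hidden = layers.Dense(32, activation="tanh")(hidden)

# Output layer 44 with softmax over the location classes.
location_out = layers.Dense(N_LOCATIONS, activation="softmax", name="location")(hidden)

location_branch = Model(fusion_in, location_out)
location_branch.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```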
  • the second classification model can only output the partial semantic imaging descriptive features, which do not include all six categories of the comprehensive semantic imaging descriptive features but only the three semantic imaging descriptive features associated with the categories of location, texture, and margin. Despite the fewer categories of semantic imaging descriptive features, the second classification model may totally eliminate the engagement of radiologists in predicting the nodule’s benignancy or malignancy while not significantly compromising the prediction accuracy in step S150.
  • the steps S110 to S140 can be applied independently as a measure to automatically acquire the partial semantic imaging descriptive features in the categories of location, texture, and margin, such that radiologists can carry on from there to evaluate and correct the partial semantic imaging descriptive features predicted by the second classification model and complete the remaining categories toward the comprehensive semantic imaging descriptive features, which in turn benefits the training of the first and second classification models.
  • a radiology report is usually written by a radiologist by reviewing medical history of an examinee and analyzing the examinee’s diagnostic imaging to describe the diagnostic imaging results.
  • Generation of a radiology report that can be automatically prepared in an unattended manner without the radiologists therefore becomes one of the goals of the present invention.
  • a radiology report can be provided with the size, location, texture, and margin of a nodule, and a Lung-RADS (Lung Imaging Reporting and Data System) score.
  • the malignant probability is one indicator additionally provided by the present invention as a reference to the likelihood of a nodule being benign or malign.
  • the partial semantic imaging descriptive features associated with the location, texture, and margin of the nodule have been made available by the second classification model or by radiologists beforehand and the malignant probability has been predicted by the first classification model.
  • the size of the nodule and the Lung-RADS score are yet to be provided in completion of a radiology report.
  • the foregoing nodule-diagnosing method further adds the following steps to Figs. 1 and 4 after the step S150 as illustrated in Figs. 6 and 7.
  • Step S160 Compute a size of the nodule according to the shape-based features of the multiple radiomics features and a scanning setting of the 3D medical images.
  • the shape-based radiomics features can be a source of information for computing the size of the nodule in actual scale in conjunction with a scanning setting of the 3D medical images.
  • One of the shape-based features of the multiple radiomics features involved in the computation of the size of the nodule is the maximum diameter along different orthogonal directions. The scanning setting entails the spacing between slices and the slice thickness of the 3D medical images.
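  • The snippet below is a simplified surrogate for this computation, assuming the voxel mask and the scanning setting are available; it approximates the maximum 3-D diameter by the physical bounding-box diagonal, whereas a shape-based radiomics feature such as PyRadiomics' Maximum3DDiameter would already be reported in physical units if the mask carries the correct spacing.

```python
import numpy as np

def nodule_size_mm(voxel_mask: np.ndarray,
                   pixel_spacing_mm: float,
                   slice_thickness_mm: float) -> float:
    """Approximate the nodule's maximum diameter in millimetres."""
    zs, ys, xs = np.nonzero(voxel_mask)
    if xs.size == 0:
        return 0.0
    # Physical extent of the nodule's bounding box along each axis.
    dx = (xs.max() - xs.min() + 1) * pixel_spacing_mm
    dy = (ys.max() - ys.min() + 1) * pixel_spacing_mm
    dz = (zs.max() - zs.min() + 1) * slice_thickness_mm
    # Bounding-box diagonal as a simple surrogate for the maximum 3-D diameter.
    return float(np.sqrt(dx ** 2 + dy ** 2 + dz ** 2))
```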
  • Step S170 Generate a score of the nodule specified in standards of Lung-RADS (Lung Imaging Reporting and Data System) according to the size of the nodule, the semantic imaging descriptive feature associated with the option of texture in the category of intratumoral features as illustrated in Fig. 6 or associated with the category of texture as illustrated in Fig. 7, and the semantic imaging descriptive feature optionally provided and associated with the option of margin in the category of intratumoral features as illustrated in Fig. 6 or associated with the category of margin as illustrated in Fig. 7.
  • Lung-RADS Lung Imaging Reporting and Data System
  • In one example, the Lung-RADS score is 4B. It is noted in Fig. 8 that the semantic imaging descriptive feature in the category of margin may not be needed to generate the score of the nodule when the semantic imaging descriptive feature in the category of texture is “Solid” or “GGO”.
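  • The mapping itself can be expressed as a small decision function. The sketch below is illustrative only: the thresholds follow published Lung-RADS guidance for baseline screening rather than the specific mapping of Fig. 8, the handling of subsolid nodules is deliberately simplified, and the values should be verified against the current Lung-RADS standard before any use.

```python
from typing import Optional

def lung_rads_score(texture: str, size_mm: float, margin: Optional[str] = None) -> str:
    """Illustrative texture/size/margin to Lung-RADS category mapping (simplified)."""
    if texture == "Solid":
        if size_mm < 6:
            return "2"
        if size_mm < 8:
            return "3"
        if size_mm < 15:
            return "4A"
        return "4B"
    if texture == "GGO":                 # pure ground-glass nodule
        return "2" if size_mm < 30 else "3"
    # Subsolid (part-solid) nodule: the margin descriptor can raise the category,
    # e.g. a spiculated margin flags additional suspicion (4X in Lung-RADS terms).
    if margin == "spiculated":
        return "4X"
    return "2" if size_mm < 6 else "3"
```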
  • Step S180 Generate a radiology report of the nodule with the malignant probability predicted by the first classification model, the score of the nodule, the size of the nodule, and the semantic imaging descriptive features associated with the category of location and the options of margin and texture of the category of intratumoral features as illustrated in Fig. 6 or the categories of margin and texture as illustrated in Fig. 7.
  • the Lung-RADS score serves as an important indicator which is used to suggest successive treatments including pathological examination, regular follow-up, or others. As shown in the example of Table 3, close follow-up and further biopsy are suggested when the score of the nodule is rated as 4A.
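  • A minimal sketch of assembling such a report from the computed fields is shown below; the wording and layout are placeholders rather than the patent's report template, and generate_report is an illustrative name.

```python
def generate_report(location: str, texture: str, margin: str,
                    size_mm: float, lung_rads: str, malignant_prob: float) -> str:
    """Compose a plain-text radiology report from the automatically computed fields."""
    return (
        f"Nodule location: {location}\n"
        f"Texture: {texture}; Margin: {margin}\n"
        f"Size: {size_mm:.1f} mm\n"
        f"Lung-RADS category: {lung_rads}\n"
        f"Predicted malignant probability: {malignant_prob:.1%}\n"
    )

# Example with made-up values:
print(generate_report("RUL", "Solid", "spiculated", 9.3, "4A", 0.78))
```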
  • Table 4 serves to compare the evaluation metrics associated with the inputs of the radiomics features only with those associated with the inputs of the radiomics features and the semantic imaging descriptive features to the first classification model trained by deep learning.
  • the semantic imaging descriptive features pertain to the comprehensive semantic imaging descriptive features
  • the evaluation metrics are empirical results obtained by the applicant.
  • AUC Area under curve
  • ROC Receiver Operating Characteristic curve
  • the prediction accuracy (AUC) of the first classification model evaluated with inputs of the radiomics features and the semantic imaging descriptive features surpasses that evaluated with inputs of the radiomics features only without exception across various classification models trained by machine learning and deep learning.
  • the prediction accuracy evaluated when the partial semantic imaging descriptive features automatically generated by the second classification model are the inputs to the first classification model can approximately reach 0.94, rendering the partial semantic imaging descriptive features as a decent input choice taking both the automatic nodule annotation and acceptable prediction accuracy of the first classification model into account.
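  • As a hedged illustration of how such an AUC comparison could be reproduced with scikit-learn (the labels and probabilities below are dummy values only, not the applicant's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Dummy ground-truth labels (1 = malign) and predicted malignant probabilities,
# included purely to make the snippet runnable.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
p_radiomics_only = np.array([0.20, 0.40, 0.60, 0.70, 0.55, 0.35, 0.80, 0.30])
p_with_semantic = np.array([0.10, 0.30, 0.80, 0.90, 0.70, 0.20, 0.85, 0.25])

print("AUC (radiomics only):       %.3f" % roc_auc_score(y_true, p_radiomics_only))
print("AUC (radiomics + semantic): %.3f" % roc_auc_score(y_true, p_with_semantic))
```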
  • a system 10 for nodule diagnosis includes a processor 11 with an embedded memory 111 and a computer storage medium 12.
  • the computer storage medium 12 is communicatively connected to the processor 11 and stores multiple 3D medical images 121 containing at least one nodule 1211 segmented thereon, a first classification model 122 trained by deep learning or machine learning, a second classification model 123 trained by deep learning or machine learning, the comprehensive semantic imaging descriptive features 124, and computer-executable instructions 125.
  • When executed by the processor 11, the computer-executable instructions 125 cause the processor 11 to perform the steps of the foregoing nodule-diagnosing method in Figs. 6 and 7.
  • the processor may be a CPU, and the computer storage medium may be one of a
  • a computer storage medium 20 that is communicatively connected to a processor 30 with an embedded memory 31 includes multiple 3D medical images 21, a first classification model 22, a second classification model 23, the comprehensive semantic imaging descriptive features 24, and computer-executable instructions 25.
  • the multiple 3D medical images 21 contain at least one nodule 211 segmented thereon.
  • the first classification model 22 is trained by deep learning or machine learning.
  • the second classification model 23 is trained by deep learning or machine learning.
  • the computer-executable instructions 25 cause the processor 30 to perform the steps of the foregoing nodule-diagnosing method as illustrated in Figs. 6 and 7.
  • the processor may be one of a GPU, a TPU, a VPU, a DPU, and an IPU

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Disclosed are a method and a system for diagnosing nodules in mammals, which generate a voxel with each of at least one nodule segmented on 3D medical images, compute radiomics features associated with the voxel, compute slice information of the nodule from the 3D medical images, input the radiomics features and the slice information to a second classification model to predict semantic imaging descriptive features, and input the radiomics features and the semantic imaging descriptive features associated with the nodule to a first classification model to predict if the nodule is benign or malign. A radiology report of the nodule can further be automatically prepared with a malignant probability, the score of the nodule, the size of the nodule, and the semantic imaging descriptive features associated with location, texture, and margin. Owing to the automatic generation of the radiology report, the routine and tedious tasks of radiologists are eliminated and diagnostic performance can be improved.
PCT/US2021/023791 2021-03-24 2021-03-24 Procédé et système de diagnostic de nodules chez des mammifères avec des caractéristiques radiomiques et des caractéristiques descriptives d'imagerie sémantique WO2022203660A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US2021/023791 WO2022203660A1 (fr) 2021-03-24 2021-03-24 Procédé et système de diagnostic de nodules chez des mammifères avec des caractéristiques radiomiques et des caractéristiques descriptives d'imagerie sémantique
TW110119789A TW202238617A (zh) 2021-03-24 2021-06-01 具有放射學特徵和語義成像描述特徵的哺乳動物結節判定方法、系統和電腦儲存媒體

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2021/023791 WO2022203660A1 (fr) 2021-03-24 2021-03-24 Procédé et système de diagnostic de nodules chez des mammifères avec des caractéristiques radiomiques et des caractéristiques descriptives d'imagerie sémantique

Publications (1)

Publication Number Publication Date
WO2022203660A1 true WO2022203660A1 (fr) 2022-09-29

Family

ID=83397689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/023791 WO2022203660A1 (fr) 2021-03-24 2021-03-24 Procédé et système de diagnostic de nodules chez des mammifères avec des caractéristiques radiomiques et des caractéristiques descriptives d'imagerie sémantique

Country Status (2)

Country Link
TW (1) TW202238617A (fr)
WO (1) WO2022203660A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117174257A (zh) * 2023-11-03 2023-12-05 福建自贸试验区厦门片区Manteia数据科技有限公司 医疗影像的处理装置、电子设备及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9235887B2 (en) * 2008-02-19 2016-01-12 Elucid Bioimaging, Inc. Classification of biological tissue by multi-mode data registration, segmentation and characterization
US20160260211A1 (en) * 2013-10-12 2016-09-08 H. Lee Moffitt Cancer Center And Research Institute, Inc. Systems and methods for diagnosing tumors in a subject by performing a quantitative analysis of texture-based features of a tumor object in a radiological image
US9589374B1 (en) * 2016-08-01 2017-03-07 12 Sigma Technologies Computer-aided diagnosis system for medical images using deep convolutional neural networks
US20200085382A1 (en) * 2017-05-30 2020-03-19 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9235887B2 (en) * 2008-02-19 2016-01-12 Elucid Bioimaging, Inc. Classification of biological tissue by multi-mode data registration, segmentation and characterization
US20160260211A1 (en) * 2013-10-12 2016-09-08 H. Lee Moffitt Cancer Center And Research Institute, Inc. Systems and methods for diagnosing tumors in a subject by performing a quantitative analysis of texture-based features of a tumor object in a radiological image
US9589374B1 (en) * 2016-08-01 2017-03-07 12 Sigma Technologies Computer-aided diagnosis system for medical images using deep convolutional neural networks
US20200085382A1 (en) * 2017-05-30 2020-03-19 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117174257A (zh) * 2023-11-03 2023-12-05 福建自贸试验区厦门片区Manteia数据科技有限公司 医疗影像的处理装置、电子设备及计算机可读存储介质
CN117174257B (zh) * 2023-11-03 2024-02-27 福建自贸试验区厦门片区Manteia数据科技有限公司 医疗影像的处理装置、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
TW202238617A (zh) 2022-10-01

Similar Documents

Publication Publication Date Title
EP1315125B1 (fr) Procédé de traitement d'image et système pour détecter des maladies
US7567696B2 (en) System and method for detecting the aortic valve using a model-based segmentation technique
US10997475B2 (en) COPD classification with machine-trained abnormality detection
CN113240719A (zh) 用于表征医学图像中的解剖特征的方法和系统
US11308611B2 (en) Reducing false positive detections of malignant lesions using multi-parametric magnetic resonance imaging
US8290568B2 (en) Method for determining a property map of an object, particularly of a living being, based on at least a first image, particularly a magnetic resonance image
Armato III et al. Automated detection of lung nodules in CT scans: effect of image reconstruction algorithm
US10970837B2 (en) Automated uncertainty estimation of lesion segmentation
CN111247592B (zh) 用于随时间量化组织的系统和方法
US20100266173A1 (en) Computer-aided detection (cad) of a disease
US20240008801A1 (en) System and method for virtual pancreatography pipepline
US9905002B2 (en) Method and system for determining the prognosis of a patient suffering from pulmonary embolism
WO2022203660A1 (fr) Procédé et système de diagnostic de nodules chez des mammifères avec des caractéristiques radiomiques et des caractéristiques descriptives d'imagerie sémantique
CA2531871C (fr) Systeme et procede permettant de detecter une protuberance sur une image medicale
CN110992312B (zh) 医学图像处理方法、装置、存储介质及计算机设备
Lacerda et al. A parallel method for anatomical structure segmentation based on 3d seeded region growing
JP2023545570A (ja) 形状プライアを用いた及び用いないセグメンテーション結果によって解剖学的異常を検出すること
EP3588378A1 (fr) Procédé pour déterminer au moins une caractéristique améliorée d'un objet d'intérêt
Ilyasova et al. Development of the technique for automatic highlighting ranges of interest in lungs x-ray images
Medina et al. Accuracy of connected confidence left ventricle segmentation in 3-D multi-slice computerized tomography images
Groot Lipman Artificial Intelligence driven assessment of asbestos exposed patients
WO2023228085A1 (fr) Système et procédé permettant de déterminer une valeur de référence de parenchyme pulmonaire et améliorer des lésions de parenchyme pulmonaire
CN115564827A (zh) 确定在医学成像数据中表示给定特征的位置
Uma et al. A Novel Method for Segmentation of Compound Images using the Improved Fuzzy Clustering Technique
Berty Semi-Automated Diagnosis of Pulmonary Hypertension Using PUMA, a Pulmonary Mapping and Analysis Tool

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21933433

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21933433

Country of ref document: EP

Kind code of ref document: A1