NL2029876B1 - Deep residual network-based classification system for thyroid cancer computed tomography (ct) images - Google Patents

Deep residual network-based classification system for thyroid cancer computed tomography (CT) images

Info

Publication number
NL2029876B1
NL2029876B1; NL2029876A
Authority
NL
Netherlands
Prior art keywords
image
tumor
deep
images
lymph node
Prior art date
Application number
NL2029876A
Other languages
Dutch (nl)
Other versions
NL2029876A (en)
Inventor
Song Xicheng
Wang Cai
Zhang Haicheng
Mao Ning
Wu Xinxin
Zhang Wenbin
Li Jingjing
Original Assignee
Yantai Yuhuangding Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yantai Yuhuangding Hospital
Publication of NL2029876A
Application granted
Publication of NL2029876B1

Classifications

    • G06T 7/0012: Biomedical image inspection
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/253: Fusion techniques of extracted features
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Neural network learning methods
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/11: Region-based segmentation
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30096: Tumor; Lesion

Abstract

The present disclosure provides a deep residual network-based classification system for thyroid cancer computed tomography (CT) images, including: a thyroid cancer CT image acquisition module, configured to acquire labeled CT images of a plurality of thyroid cancer patients; a multi-scale segmentation module, configured to segment the CT images of each of the plurality of thyroid cancer patients according to different scales, and sequentially intercept a cubic tumor area, a cubic tumor area expanded by 5 mm and a cubic tumor area expanded by 10 mm, to obtain a tumor image, a tumor image expanded by 5 mm and a tumor image expanded by 10 mm; a preprocessing module, configured to preprocess the images to obtain a training set; a deep residual network training module, configured to train and optimize the deep residual network by using the training set; and a thyroid cancer CT image classification module, configured to input thyroid cancer CT images to be classified into the optimized deep residual network for classification, to obtain a classification result of the thyroid cancer CT images. The present disclosure can accurately classify thyroid cancer CT images.

Description

DEEP RESIDUAL NETWORK-BASED CLASSIFICATION SYSTEM FOR THYROID
CANCER COMPUTED TOMOGRAPHY (CT) IMAGES
TECHNICAL FIELD
[01] The present disclosure relates to the technical field of medical imaging and artificial intelligence, and in particular to a deep residual network-based classification system for thyroid cancer computed tomography (CT) images.
BACKGROUND ART
[02] In recent years, computer technology has been widely used in the medical field. In particular, computer-aided diagnosis technology can assist radiologists in diagnosis by combining medical imaging and medical image processing technologies with computer-related algorithms, to improve the accuracy and efficiency of diagnosis.
[03] Thyroid cancer has a relatively high incidence. It is reported that up to 60-70% of thyroid cancer patients have lymph node metastasis. Therefore, it is necessary to accurately determine the area required for lymph node dissection before the initial surgery to assess the risk of lymph node metastasis. Clinically, the area is generally determined by CT examination, and the CT images need to be differentiated to help radiologists make judgements.
[04] At present, artificial intelligence-assisted diagnosis technology mainly includes radiomics-based methods and deep learning-based methods. Radiomics methods extract manually designed features from medical images, and construct models through feature selection and traditional machine learning methods. However, manually designed features struggle to accurately characterize the inherent features of an image.
[05] Deep learning methods can automatically extract high-dimensional features of images, have great advantages over traditional machine learning methods, and can avoid the problems caused by manually extracting image features. Although many frameworks for image classification have emerged with the development of deep learning, there is still no deep learning model for classifying CT images of thyroid cancer patients. Because they contain lesion regions, the CT images of thyroid cancer patients are more complicated and have more features than ordinary images. The current frameworks for classification of ordinary images cannot accurately classify thyroid cancer CT images, and thus cannot assist radiologists in determining whether there is lymph node metastasis in thyroid cancer CT images. Therefore, there is an urgent need in the art for a deep learning model for classifying the CT images of thyroid cancer patients to solve the above problems.
SUMMARY
[06] The purpose of the present disclosure is to provide a deep residual network-based classification system for thyroid cancer CT images. The system can accurately classify thyroid cancer CT images, and thus can assist radiologists in determining whether there is lymph node metastasis in the thyroid cancer CT images.
[07] To achieve the above objective, the present disclosure provides the following solutions:
[08] A deep residual network-based classification system for thyroid cancer CT images includes:
[09] a thyroid cancer CT image acquisition module, configured to acquire labeled CT images of multiple thyroid cancer patients;
[10] a multi-scale segmentation module, connected to the thyroid cancer CT image acquisition module, and configured to segment CT images of each of the multiple thyroid cancer patients according to different scales, and sequentially intercept a cubic tumor area, a cubic tumor area expanded by 5 mm and a cubic tumor area expanded by 10 mm, to obtain a tumor image, a tumor image expanded by 5 mm and a tumor image expanded by 10 mm;
[11] a preprocessing module, connected to the multi-scale segmentation module, and configured to preprocess the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm, respectively, to obtain a training set;
[12] a deep residual network training module, connected to the preprocessing module, and configured to train and optimize a deep residual network by using the training set to obtain an optimized deep residual network; and
[13] a thyroid cancer CT image classification module, connected to the deep residual network training module, and configured to input thyroid cancer CT images to be classified into the optimized deep residual network for classification, to obtain a classification result of the thyroid cancer CT images; where the classification result includes lymph node metastasis and lymph node non-metastasis in the thyroid cancer CT images.
[14] In some embodiments, the CT image of each of the multiple thyroid cancer patients may be composed of multiple consecutive image slices corresponding to different phases; and the different phases may include a plain scan phase, an arterial phase and a venous phase.
[15] In some embodiments, the CT image of each of the multiple thyroid cancer patients may include a region of interest (ROI); the ROI may be delineated slice by slice along an edge of a thyroid primary lesion in the plain scan phase, the arterial phase and the venous phase; and the ROI in each phase may be superimposed slice by slice to form a three-dimensional volume of interest (VOI).
[16] In some embodiments, the multi-scale segmentation module may specifically include:
[17] a voxel spacing conversion unit, connected to the thyroid cancer CT image acquisition module, and configured to convert a voxel spacing of the CT image of each of the multiple thyroid cancer patients to obtain a converted CT image;
[18] a VOI determination unit, connected to the voxel spacing conversion unit, and configured to determine a length, a width, a height and a center point coordinate of the VOI according to a position of the VOI in the converted CT image; and
[19] a cropping unit, connected to the VOI determination unit, and configured to intercept the cubic tumor area, the cubic tumor area expanded by 5 mm and the cubic tumor area expanded by 10 mm from the converted CT image according to the length, the width, the height and the center point coordinate of the VOI, to obtain the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm.
[20] In some embodiments, the preprocessing module may specifically include:
[21] a normalization unit, connected to the multi-scale segmentation module, and configured to normalize each voxel in the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm, respectively, to obtain normalized tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm;
[22] a data scaling unit, connected to the normalization unit, and configured to unify the normalized tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm to a set image size, respectively, to obtain image size-set tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm; and
[23] a data augmentation unit, connected to the data scaling unit, and configured to conduct data augmentation on the image size-set tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm via flipping, rotation, translation and zooming, to obtain the training set, where the training set may include tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm after data augmentation.
[24] In some embodiments, the deep residual network training module may specifically include:
[25] a deep residual network construction unit, connected to the preprocessing module, and configured to construct the deep residual network; and
[26] a deep residual network training unit, connected to the deep residual network construction unit, and configured to receive the training set sent by the preprocessing module, and to train and optimize the deep residual network using the training set to obtain the optimized deep residual network.
[27] In some embodiments, the deep residual network may specifically include:
[28] a shallow feature extraction layer, connected to the preprocessing module, and configured to use a 64-channel 3x3x3 convolution kernel and a rectified linear unit (ReLU) connected to the 3x3x3 convolution kernel to extract shallow features of the images in the training set to obtain a 64-channel shallow feature map;
[29] a deep feature extraction layer, connected to the shallow feature extraction layer, and configured to extract deep features in the shallow feature map to obtain a deep feature map;
[30] a skip connection layer separately connected with the shallow feature extraction layer and the deep feature extraction layer, and configured to connect the shallow feature map and the deep feature map;
[31] a convolutional layer, connected to the skip connection layer, and configured to further extract features from the connected shallow feature map and deep feature map with a 7x7x7 convolution kernel and an ReLU connected to the 7x7x7 convolution kernel, to generate a 128-channel feature map; and
[32] a classification layer, connected to the convolutional layer, and configured to conduct a 3D global average pooling operation on the 128-channel feature map, calculate the probability of lymph node metastasis and lymph node non-metastasis in the thyroid cancer CT images, and take the category with the highest probability as the classification result.
[33] In some embodiments, the deep feature extraction layer may specifically include:
[34] a plurality of residual dense blocks (RDBs) connected to the shallow feature extraction layer, where the RDBs may be connected sequentially, and each may be configured to extract the deep features in the shallow feature map using nine 3x3x3 convolution kernels and an ReLU separately connected to each 3x3x3 convolution kernel; and
[35] a 1x1x1 convolutional layer connected to the plurality of RDBs, and configured to fuse the deep features extracted by each RDB to obtain the deep feature map.
[36] In some embodiments, the classification layer may specifically include a fully connected (FC) layer and a Softmax layer that are mutually connected; the FC layer may be connected to the convolutional layer and may be used for conducting the 3D global average pooling operation on the 128-channel feature map; and the Softmax layer may be used for calculating the probability of lymph node metastasis and lymph node non-metastasis in the thyroid cancer CT images, and taking the category with the highest probability as the classification result of the thyroid cancer CT images.
[37] Based on specific examples provided in the present disclosure, the present disclosure discloses the following technical effects:
[38] The present disclosure provides a deep residual network-based classification system for thyroid cancer CT images. In the system, the multi-scale segmentation module is provided to segment the CT images of each of the multiple thyroid cancer patients according to different scales, and sequentially intercept the cubic tumor area, the cubic tumor area expanded by 5 mm and the cubic tumor area expanded by 10 mm, to obtain the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm. The deep residual network training module is provided to train and optimize the deep residual network using the images of different scales, and to classify the thyroid cancer CT images using the optimized deep residual network. The present disclosure extracts multi-scale information of the thyroid cancer tumor in the thyroid cancer CT images through a combination of multi-scale segmentation and a deep residual network, and fuses features of the tumor and the peritumoral region to improve the accuracy of model classification. Compared with traditional deep learning frameworks such as ResNet and DenseNet, the present disclosure further strengthens the fusion and propagation of features, and fully learns the high-frequency and detailed features of the images, such that the thyroid cancer CT images can be accurately classified to assist radiologists in determining whether there is lymph node metastasis in the thyroid cancer CT images.
BRIEF DESCRIPTION OF THE DRAWINGS
[39] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings required for the embodiments are briefly described below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present disclosure, and a person of ordinary skill in the art may still derive other accompanying drawings from these accompanying drawings without creative efforts.
[40] FIG. 1 is a structural diagram of an example of a deep residual network-based classification system for thyroid cancer CT images of the present disclosure.
[41] FIG. 2 is a structural diagram of classification based on a deep residual network of the present disclosure.
[42] FIG. 3 is a structural diagram of a residual dense network of the present disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[43] The technical solutions of the examples of the present disclosure are clearly and completely described below with reference to the accompanying drawings. Apparently, the described examples are merely a part rather than all of the examples of the present disclosure. All other examples obtained by a person of ordinary skill in the art based on the examples of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.
[44] The purpose of the present disclosure is to provide a deep residual network-based classification system for thyroid cancer CT images. The system can accurately classify the thyroid cancer CT images, and thus can assist radiologists in determining whether there is lymph node metastasis in thyroid cancer CT images.
[45] To make the above objectives, features, and advantages of the present disclosure clearer and more comprehensible, the present disclosure will be further described in detail below with reference to the accompanying drawings and the specific implementation.
[46] FIG. 1 is a structural diagram of an example of the deep residual network-based classification system for thyroid cancer CT images of the present disclosure. As shown in FIG. 1, the deep residual network-based classification system for thyroid cancer CT images includes a thyroid cancer CT image acquisition module 101, a multi-scale segmentation module 102 connected to the thyroid cancer CT image acquisition module 101, a preprocessing module 103 connected to the multi-scale segmentation module 102, a deep residual network training module 104 connected to the preprocessing module 103, and a thyroid cancer CT image classification module 105 connected to the deep residual network training module 104.
[47] The thyroid cancer CT image acquisition module (thyroid cancer CT image collection module) 101 is used for acquiring labeled CT images of thyroid cancer patients. The CT image of each patient is composed of multiple consecutive image slices corresponding to different phases. The different phases include a plain scan phase, an arterial phase and a venous phase. The CT image of each patient includes an ROI; the ROI is delineated slice by slice along an edge of the thyroid primary lesion in the plain scan phase, the arterial phase and the venous phase; and the ROIs in each phase are superimposed slice by slice to form a three-dimensional VOI. The ROIs in the CT images are delineated by a radiologist with more than 10 years of diagnostic experience. Each patient's image is labeled with label information (lymph node metastasis/lymph node non-metastasis).
[48] The thyroid cancer CT images were collected from 913 thyroid cancer patients who underwent thyroid CT examination in Yantai Yuhuangding Hospital from 2017 to 2020, and the label of lymph node metastasis/lymph node non-metastasis is obtained through pathological sample detection; that is, whether there is lymph node metastasis is determined by the pathology result. Since the three-dimensional (3D) structure of each phase of each patient is represented by multi-layer continuous slices, a 3D original CT image matrix is obtained. To reduce the interference of the peritumoral region on the network model, the present disclosure segments the original CT image in different sizes according to the position of the delineated VOI.
[49] The multi-scale segmentation module (multi-scale clipping module) 102 is used for segmenting CT images (three-dimensional images) of each of the multiple thyroid cancer patients according to different scales, and sequentially intercepting a cubic tumor area, a cubic tumor area expanded by 5 mm and a cubic tumor area expanded by 10 mm, to obtain a tumor image, a tumor image expanded by 5 mm and a tumor image expanded by 10 mm. The multi-scale segmentation module 102 segments the tumor area, the tumor area expanded by 5 mm and the tumor area expanded by 10 mm according to coordinates and a center point of the tumor (coordinates and center position of the tumor in the original image), to obtain a multi-scale three-dimensional image input to the deep residual network, that is, three-dimensional images with three different scales in the thyroid tumor area.
[50] The multi-scale segmentation module 102 specifically includes a voxel spacing conversion unit connected to the thyroid cancer CT image acquisition module 101, a VOI determination unit connected to the voxel spacing conversion unit, and a cropping unit connected to the VOI determination unit.
[51] The voxel spacing conversion unit is used for converting the voxel spacing of the CT images (original CT images) of each patient to (1 mm, 1 mm, 5 mm) to obtain the converted CT images.
[52] The VOI determination unit is used for determining the length, width, height and center point coordinates L (x, y, z) of the VOI according to the position of the VOI in the converted CT image.
[53] The cropping unit is used for intercepting the cubic tumor area, the cubic tumor area expanded by 5 mm and the cubic tumor area expanded by 10 mm from the converted CT image according to the length, the width, the height and the center point coordinates L (x, y, z) of the VOI, to obtain the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm. The cropping unit intercepts the tumor area from the 3D image according to the center position of the tumor. Studies have shown that the peritumoral region is also informative, so the cubic tumor area expanded by 5 mm and the cubic tumor area expanded by 10 mm are also successively intercepted to obtain the multi-scale three-dimensional images.
[54] The CT images are segmented according to different scales, and the cubic tumor area, the cubic tumor area expanded by 5 mm (including the cubic tumor area) and the cubic tumor area expanded by 10 mm (including the cubic tumor area) are successively intercepted, to obtain the three-dimensional image of three different scales.
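As an illustrative sketch of this multi-scale interception (my own reconstruction, not the patent's code; the array layout, the crop_cube helper and the per-axis margin handling are assumptions), the three crops can be taken from a resampled volume as follows. Note that at a (z, y, x) voxel spacing of (5 mm, 1 mm, 1 mm), a 5 mm expansion corresponds to different voxel counts per axis:

```python
import numpy as np

def crop_cube(volume, center, size, margin_vox=(0, 0, 0)):
    """Intercept a cubic tumor area around the VOI center, optionally
    expanded by margin_vox voxels per axis (z, y, x). At (5, 1, 1) mm
    spacing, a 5 mm expansion is roughly (1, 5, 5) voxels and a 10 mm
    expansion roughly (2, 10, 10)."""
    half = [s // 2 + m for s, m in zip(size, margin_vox)]
    lo = [max(c - h, 0) for c, h in zip(center, half)]
    hi = [min(c + h, d) for c, h, d in zip(center, half, volume.shape)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Stand-in CT volume plus a VOI (length/width/height and center point
# as produced by the VOI determination unit)
volume = np.random.rand(40, 256, 256).astype(np.float32)
center, size = (20, 128, 128), (4, 30, 30)
tumor      = crop_cube(volume, center, size)
tumor_5mm  = crop_cube(volume, center, size, margin_vox=(1, 5, 5))
tumor_10mm = crop_cube(volume, center, size, margin_vox=(2, 10, 10))
```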
[55] The preprocessing module 103 is used for preprocessing the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm, respectively, to obtain a training set.
[56] The preprocessing module 103 specifically includes a normalization unit connected to the multi-scale segmentation module 102, a data scaling unit connected to the normalization unit, and a data augmentation unit connected to the data scaling unit.
[57] The normalization unit is used for normalizing each voxel in the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm, respectively, to obtain normalized tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm.
The normalization unit (standardization unit) standardizes each voxel according to the formula

$\hat{x}_i = \dfrac{x_i - \mu}{\sigma}$

such that all images are brought to a uniform intensity scale for network learning. In the formula, $x_i$ represents the unstandardized CT value of the i-th voxel; $\mu$ and $\sigma$ represent the mean and the standard deviation, respectively, of the CT values of the voxels in the unstandardized image block; and $\hat{x}_i$ represents the standardized (normalized) CT value of the i-th voxel.
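A minimal sketch of this standardization step (an illustration under the z-score reading of the formula, not the patent's code):

```python
import numpy as np

def standardize(block: np.ndarray) -> np.ndarray:
    """Standardize one cropped image block: subtract the mean CT value and
    divide by the standard deviation computed over all voxels in the block."""
    mu = block.mean()
    sigma = block.std()
    return (block - mu) / (sigma + 1e-8)  # epsilon avoids division by zero
```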
[58] The data scaling unit is used for unifying the normalized tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm to a set image size, respectively, to obtain image size-set tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm.
Specifically, the data scaling unit unifies images of different scales to an average size of all images of the scale, such that all images are scaled to a uniform size, which is convenient for network learning.
After that, the data set is randomly separated into a training set and a testing set at a ratio of 8:2, where the training set is used for model training, and the testing set is used for testing the performance of the model.
[59] The data augmentation unit is used for conducting data augmentation on the image size-set tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm via flipping, rotation, translation and zooming, to obtain the training set. The training set includes the augmented tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm. To improve the generalization ability of the model, the data augmentation unit augments the data samples of the training set via flipping, rotation, translation and zooming to prevent over-fitting. The data augmentation is conducted on the training set, but not on the testing set.
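The random 8:2 split and the flipping/rotation/translation/zooming augmentation might look like the sketch below; the concrete transform ranges are my own illustrative assumptions, and augmentation is applied to the training set only:

```python
import numpy as np
from scipy.ndimage import shift, zoom

rng = np.random.default_rng(42)

def split_dataset(samples, train_ratio=0.8):
    """Randomly separate the data set into training and testing sets (8:2)."""
    idx = rng.permutation(len(samples))
    cut = int(train_ratio * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

def augment(block):
    """Apply one random augmentation: flip, rotation, translation or zoom.
    (A zoomed block would afterwards be rescaled to the set image size.)"""
    op = rng.integers(4)
    if op == 0:
        return np.flip(block, axis=int(rng.integers(3))).copy()
    if op == 1:
        return np.rot90(block, k=int(rng.integers(1, 4)), axes=(1, 2)).copy()
    if op == 2:
        return shift(block, shift=rng.integers(-3, 4, size=3), order=1)
    return zoom(block, zoom=float(rng.uniform(0.9, 1.1)), order=1)
```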
[60] The preprocessing module 103 obtains a training set used for training and optimizing the deep residual network by conducting preprocessing operations such as standardization, data scaling, data augmentation and data set separation on images of different scales.
[61] The deep residual network training module (deep learning network training module) 104 is used for training and optimizing the deep residual network by using the training set to obtain an optimized deep residual network. The deep residual network training module 104 inputs the CT images of the three scales into the constructed deep residual classification model to obtain the probability of lymph node metastasis for each scale of the lesion.
[62] The deep residual network training module 104 specifically includes a deep residual network construction unit connected to the preprocessing module 103, and a deep residual network training unit connected to the deep residual network construction unit.
[63] The deep residual network construction unit is used for constructing a deep residual network. FIG. 2 is a structural diagram of classification based on the deep residual network of the present disclosure. As shown in FIG. 2, the deep residual network is constructed to classify the thyroid CT images.
[64] The deep residual network specifically includes a shallow feature extraction layer connected to the preprocessing module 103, a deep feature extraction layer connected to the shallow feature extraction layer, a skip connection layer separately connected to the shallow feature extraction layer and the deep feature extraction layer, a convolutional layer connected to the skip connection layer, and a classification layer connected to the convolutional layer.
[65] The shallow feature extraction layer uses a 64-channel 3x3x3 convolution kernel and an ReLU connected to the 3x3x3 convolution kernel to extract shallow features of the images in the training set to obtain a shallow feature map. The 64-channel 3x3x3 convolution kernel conducts the convolution operation, and the ReLU added after it conducts nonlinear mapping. Since the shallow feature extraction layer uses only one convolution kernel, only the shallow features of the image are extracted.
[66] The deep feature extraction layer (deep characteristics extraction layer) is used for extracting deep features in the shallow feature map to obtain a deep feature map.
[67] The deep feature extraction layer specifically includes a plurality of RDBs connected with the shallow feature extraction layer, and a 1x1x1 convolutional layer connected with a plurality of the RDBs.
[68] The plurality of RDBs are connected sequentially, and each RDB is used for extracting the deep features in the shallow feature map using nine 3x3x3 convolution kernels and an ReLU separately connected to each 3x3x3 convolution kernel. The deep features of the original image are fully extracted by using a plurality of RDBs. The network structure of each RDB is shown in FIG. 3: the RDB includes nine 3x3x3 convolutions, each followed by a ReLU operation, and the layers are densely connected to increase the receptive field inside each network layer, such that the network can fully learn the features of each layer. With S RDBs provided, the output of the s-th RDB can be obtained by

$F_s = L_s(F_{s-1}) = L_s(L_{s-1}(\cdots L_1(F_0) \cdots))$

In the formula, $L_s$ denotes the s-th RDB operation, which is a composite of the convolution and ReLU operations of a convolutional neural network, and $F_s$ is the output of the s-th RDB, generated jointly by the convolutional layers inside the RDB. Within a block, each convolutional layer has access to the outputs of all preceding layers. The output of the i-th convolutional layer of the s-th RDB can be expressed as

$F_{s,i} = \max\{0,\; W_{s,i} * [F_{s-1}, F_{s,1}, \ldots, F_{s,i-1}] + B_{s,i}\}$

In the formula, $W_{s,i}$ and $B_{s,i}$ represent the weight and bias of the i-th convolutional layer in the s-th RDB, and $[F_{s-1}, F_{s,1}, \ldots, F_{s,i-1}]$ represents the concatenation of the feature map output by the (s-1)-th RDB with the feature maps generated by convolutional layers 1, 2, ..., (i-1) of the s-th RDB.
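Under this formulation, one way to realize a single RDB in PyTorch is sketched below. This is an interpretation rather than the patent's code: the growth rate, the local 1x1x1 fusion and the local residual connection are standard residual-dense-network details assumed here.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: nine densely connected 3x3x3 conv + ReLU layers;
    layer i sees the block input concatenated with the outputs of layers
    1..i-1, matching the F_{s,i} formula above."""
    def __init__(self, channels=64, growth=32, num_layers=9):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch += growth
        # local 1x1x1 fusion back to `channels` so RDBs can be cascaded
        self.local_fuse = nn.Conv3d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.local_fuse(torch.cat(feats, dim=1)) + x  # local residual
```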
[69] The 1x1x1 convolutional layer is used for fusing the deep features extracted by each RDB to obtain the deep feature map. All RDB outputs are cascaded and input to one 1x1x1 convolutional layer, which fuses the features of each RDB to reduce the number of feature maps and parameters; the feature maps are reduced to 64 channels. Finally, an identity mapping of the residual network is introduced to improve the convergence speed of the network and the gradient of the information flow. A deeper network makes it easier to extract richer and deeper features. As the number of RDBs and convolutional layers increases, better performance is more easily achieved, and a high growth rate also improves the performance of the model, such that 16 RDBs can be provided.
[70] The skip connection layer is used for connecting the shallow feature map and the deep feature map. To fuse the shallow features with the deep features, the shallow feature map is added to the output of all RDB-cascaded features by using the skip connection. In this way, all feature maps are connected to extract rich discriminative image features. The present disclosure extracts highly-discriminative deep features of CT images, classifies the thyroid cancer CT images, and has great practical value.
[71] The convolutional layer is used for further extracting features from the connected shallow feature map and deep feature map with a 7x7x7 convolution kernel and an ReLU connected to the 7x7x7 convolution kernel, to generate a 128-channel feature map. The image features are further extracted through the 7x7x7 convolutional layer to generate the 128-channel feature map, and nonlinear mapping is conducted by using the ReLU.
[72] The classification layer is used for conducting a 3D global average pooling operation on the 128-channel feature map, calculating the probability of lymph node metastasis and lymph node non-metastasis in the thyroid cancer CT images, and taking the category with the highest probability as the classification result. The classification layer specifically includes an FC layer and a Softmax layer that are mutually connected. The FC layer is connected to the convolutional layer and is used for conducting the 3D global average pooling operation on the 128-channel feature map; the Softmax layer is used for calculating the probability of lymph node metastasis and lymph node non-metastasis in the thyroid cancer CT images, and taking the category with the highest probability as the classification result. The extracted feature map is subjected to the 3D global average pooling operation, the classification probabilities of lymph node metastasis and lymph node non-metastasis are obtained using the FC layer and the Softmax, and the category with the largest classification probability is used as the final classification result to determine whether there is lymph node metastasis in the thyroid cancer CT images.
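Assembling the layers of FIG. 2, an end-to-end sketch might look as follows. It builds on the RDB class from the previous sketch and is again an assumption-laden illustration rather than the patented model: 4 RDBs are used here for brevity instead of the 16 mentioned above, and pooling is written as a separate step before the FC layer.

```python
import torch
import torch.nn as nn  # RDB class from the previous sketch is assumed in scope

class DeepResidualClassifier(nn.Module):
    """Shallow 3x3x3 conv -> cascaded RDBs -> 1x1x1 global fusion ->
    global skip connection -> 7x7x7 conv (128 channels) -> 3D global
    average pooling -> FC -> Softmax over metastasis / non-metastasis."""
    def __init__(self, in_channels=1, channels=64, num_rdbs=4, num_classes=2):
        super().__init__()
        self.shallow = nn.Sequential(
            nn.Conv3d(in_channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.rdbs = nn.ModuleList([RDB(channels) for _ in range(num_rdbs)])
        self.global_fuse = nn.Conv3d(channels * num_rdbs, channels, 1)
        self.conv7 = nn.Sequential(
            nn.Conv3d(channels, 128, 7, padding=3), nn.ReLU(inplace=True))
        self.pool = nn.AdaptiveAvgPool3d(1)   # 3D global average pooling
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        shallow = self.shallow(x)
        outs, feat = [], shallow
        for rdb in self.rdbs:
            feat = rdb(feat)
            outs.append(feat)
        fused = self.global_fuse(torch.cat(outs, dim=1)) + shallow  # skip connection
        pooled = self.pool(self.conv7(fused)).flatten(1)
        return torch.softmax(self.fc(pooled), dim=1)

# probs = DeepResidualClassifier()(torch.randn(2, 1, 8, 64, 64))  # shape (N, 2)
```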
[73] The residual network alleviates the problem of gradient vanishing and solves feature redundancy, but has poor connection between the features. The dense network can solve the above problem by receiving feature maps of all layers to enhance the propagation between the features.
Moreover, the RDB network perfectly integrates the advantages of the above two networks, thereby maximizing the mining of highly-discriminative deep features.
[74] The deep residual network training unit is used for receiving the training set sent by the preprocessing module 103, and training and optimizing the deep residual network by using the training set to obtain the optimized deep residual network. The deep residual network training unit inputs training sets of different scales into the deep residual network, and conducts model training through a Softmax activation function and the network parameters in the deep residual network. The model training uses cross-entropy loss as the loss function, and Adam as the optimization algorithm for iterative solution. The network parameters are initialized by He initialization, with the number of iteration epochs set to 200 and a network batch size of 32. An initial learning rate is set to 1e-5, and the learning rate is reduced by 10% at 1/2 of the epochs and by 1% at 3/4 of the epochs. If the category ratio of the data is highly unbalanced, the data set can be subjected to class-imbalance processing. For three-dimensional images, resampling can be conducted; that is, the number of randomly-extracted images in each batch is controlled during training, such that the two classes are drawn in equal numbers. The error between the training result and the true value is fitted by minimizing the loss function. When the loss function gradually converges, the model corresponding to the lowest point is the best classification model; the optimal network parameters are selected to obtain the best classification models (three optimal network models). The training set of the corresponding scale is input into the corresponding network model, and the trained optimal models yield the prediction probability of each multi-scale network, that is, a prediction probability for the images of each of the three scales. After obtaining the prediction probabilities of the images of three scales, multi-scale network weighted fusion is conducted; that is, the predicted probabilities of the three scales are subjected to weighted fusion to obtain a final prediction probability of whether there is lymph node metastasis in the thyroid cancer CT images.
Multi-scale network weighted fusion finds the weights by parameter search, giving a weight to the output probability of each network. Finally, the weighted sum of the output probabilities of the networks is the final fusion probability, as shown in the formula

Score = a*Model1 + b*Model2 + c*Model3

In the formula, a+b+c=1, and 1>a>0, 1>b>0 and 1>c>0; Model1, Model2 and Model3 are the probabilities of lymph node metastasis predicted from the tumor image, the tumor plus 5 mm peritumoral image, and the tumor plus 10 mm peritumoral image, respectively; and Score is the final fused probability of lymph node metastasis.
Preferably, it is possible to search for the largest value of the area under the curve (AUC) by traversing the whole parameter space from 0 to 1 at an interval of 0.01. The final prediction probability of lymph node metastasis is obtained through weighted fusion of the prediction results of the multi-scale networks.
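The parameter search for a, b and c can be a brute-force traversal of the 0.01 grid, as in the following sketch (illustrative only; roc_auc_score from scikit-learn stands in for the AUC computation, and p1, p2, p3 are the per-patient probabilities from the three scale-specific models):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def search_fusion_weights(p1, p2, p3, labels, step=0.01):
    """Traverse a+b+c=1 with a, b, c in (0, 1) on a `step` grid and return
    the weights maximizing the AUC of Score = a*Model1 + b*Model2 + c*Model3."""
    best_auc, best_w = -1.0, None
    for a in np.arange(step, 1.0, step):
        for b in np.arange(step, 1.0 - a, step):
            c = 1.0 - a - b
            if c <= 0.0:
                continue
            auc = roc_auc_score(labels, a * p1 + b * p2 + c * p3)
            if auc > best_auc:
                best_auc, best_w = auc, (a, b, float(c))
    return best_w, best_auc
```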
[75] The thyroid cancer CT image classification module 105 is used for inputting thyroid cancer CT images to be classified into the optimized deep residual network for classification, to obtain a classification result of the thyroid cancer CT images. The classification result includes lymph node metastasis and lymph node non-metastasis in the thyroid cancer CT images.
[76] The present disclosure provides a deep residual network-based classification system for thyroid cancer CT images that overcomes the shortcomings of existing diagnosis based on thyroid cancer CT images. Computer-based prediction on the thyroid cancer CT images improves the accuracy of prediction and assists radiologists' diagnosis. The present disclosure proposes a new deep learning network classification framework (a model based on deep learning) that uses deep learning to extract highly-discriminative deep features of the thyroid cancer CT images.
Therefore, the thyroid cancer CT images can be accurately classified to assist radiologists in lymph node metastasis prediction through the thyroid cancer CT images, thereby assisting radiologists to conduct automated analysis and diagnosis on the thyroid cancer CT images.
[77] Compared with the prior studies, the present disclosure has the following advantages:
[78] 1) The present disclosure proposes a new deep learning-based image classification technology; it extracts the deep layered features of the thyroid cancer CT images using a residual dense network, where each layer is closely connected such that the features and relationships between the layers can be fully learned; the introduction of skip connections solves the problem of gradient vanishing/gradient explosion, and the global residual makes the shallow features and the deep features fully fused.
[79] 2) Compared with common deep learning networks such as ResNet and DenseNet, the present disclosure further strengthens the fusion and propagation of features, and fully learns the high-frequency and detailed features of the images.
[80] 3) The present disclosure extracts the multi-scale information of thyroid cancer tumors in the thyroid cancer CT images, fuses the features of the tumor and the peritumoral region, and improves the accuracy of model classification.
[81] Each example of the present specification is described in a progressive manner, each example focuses on the difference from other examples, and the same and similar parts between the examples may refer to each other.
[82] In this specification, several examples are used for illustration of the principles and implementations of the present disclosure. The description of the foregoing examples is used to help illustrate the method and the core principles of the present disclosure. In addition, those of ordinary skill in the art can make various modifications in terms of specific implementations and scope of application in accordance with the teachings of the present disclosure. In conclusion, the content of the present specification shall not be construed as a limitation to the present disclosure.

Claims (9)

1. A deep residual network-based classification system for thyroid cancer computed tomography (CT) images, comprising: a thyroid cancer CT image acquisition module configured to acquire labeled CT images of a plurality of thyroid cancer patients; a multi-scale segmentation module connected to the thyroid cancer CT image acquisition module and configured to segment the CT images of each of the plurality of thyroid cancer patients according to different scales, and to sequentially intercept a cubic tumor area, a cubic tumor area expanded by 5 mm and a cubic tumor area expanded by 10 mm, to obtain a tumor image, a tumor image expanded by 5 mm and a tumor image expanded by 10 mm; a preprocessing module connected to the multi-scale segmentation module and configured to preprocess the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm, respectively, to obtain a training set; a deep residual network training module connected to the preprocessing module and configured to train and optimize a deep residual network using the training set to obtain an optimized deep residual network; and a thyroid cancer CT image classification module connected to the deep residual network training module and configured to input thyroid cancer CT images to be classified into the optimized deep residual network to obtain a classification result, wherein the classification result comprises lymph node metastasis and lymph node non-metastasis in the thyroid cancer CT images.

2. The classification system for thyroid cancer CT images according to claim 1, wherein the CT image of each patient is composed of a plurality of consecutive image slices corresponding to different phases; and wherein the different phases comprise a plain scan phase, an arterial phase and a venous phase.

3. The classification system for thyroid cancer CT images according to claim 2, wherein the CT image of each patient comprises a region of interest (ROI); wherein the ROI is delineated slice by slice along an edge of a thyroid primary lesion in the plain scan phase, the arterial phase and the venous phase; and wherein the ROIs in each phase are superimposed slice by slice to form a three-dimensional (3D) volume of interest (VOI).

4. The classification system for thyroid cancer CT images according to claim 3, wherein the multi-scale segmentation module comprises: a voxel spacing conversion unit connected to the thyroid cancer CT image acquisition module and configured to convert a voxel spacing of the CT image of each patient to obtain a converted CT image; a VOI determination unit connected to the voxel spacing conversion unit and configured to determine a length, a width, a height and a center point coordinate of the VOI according to a position of the VOI in the converted CT image; and a cropping unit connected to the VOI determination unit and configured to intercept the cubic tumor area, the cubic tumor area expanded by 5 mm and the cubic tumor area expanded by 10 mm from the converted CT image according to the length, the width, the height and the center point coordinate of the VOI, to obtain the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm.

5. The deep residual network-based classification system for thyroid cancer CT images according to claim 1, wherein the preprocessing module comprises: a normalization unit connected to the multi-scale segmentation module and configured to normalize each voxel in the tumor image, the tumor image expanded by 5 mm and the tumor image expanded by 10 mm, respectively, to obtain a normalized tumor image, a normalized tumor image expanded by 5 mm and a normalized tumor image expanded by 10 mm; a data scaling unit connected to the normalization unit and configured to unify the normalized tumor image, the normalized tumor image expanded by 5 mm and the normalized tumor image expanded by 10 mm to a preset image size, respectively; and a data augmentation unit connected to the data scaling unit and configured to conduct data augmentation on the size-unified tumor image, tumor image expanded by 5 mm and tumor image expanded by 10 mm via flipping, rotation, translation and zooming, to obtain the training set, wherein the training set comprises a data-augmented tumor image, a data-augmented tumor image expanded by 5 mm and a data-augmented tumor image expanded by 10 mm.

6. The classification system for thyroid cancer CT images according to claim 1, wherein the deep residual network training module comprises: a deep residual network construction unit connected to the preprocessing module and configured to construct the deep residual network; and a deep residual network training unit connected to the deep residual network construction unit and configured to receive the training set sent by the preprocessing module, and to train and optimize the deep residual network using the training set to obtain the optimized deep residual network.

7. The classification system for thyroid cancer CT images according to claim 6, wherein the deep residual network comprises: a shallow feature extraction layer connected to the preprocessing module and configured to use a 64-channel 3x3x3 convolution kernel and a rectified linear unit (ReLU) connected to the 3x3x3 convolution kernel to extract shallow features of the images in the training set to obtain a shallow feature map; a deep feature extraction layer connected to the shallow feature extraction layer and configured to extract deep features in the shallow feature map to obtain a deep feature map; a skip connection layer separately connected to the shallow feature extraction layer and the deep feature extraction layer and configured to connect the shallow feature map and the deep feature map; a convolutional layer connected to the skip connection layer and configured to extract features from the connected shallow feature map and deep feature map with a 7x7x7 convolution kernel and a ReLU connected to the 7x7x7 convolution kernel, to generate a 128-channel feature map; and a classification layer connected to the convolutional layer and configured to conduct a 3D global average pooling operation on the 128-channel feature map to calculate the probability of lymph node metastasis in the thyroid cancer CT images.

8. The classification system for thyroid cancer CT images according to claim 7, wherein the deep feature extraction layer comprises: a plurality of residual dense blocks (RDBs) connected to the shallow feature extraction layer, wherein the RDBs are connected sequentially and each RDB is configured to extract the deep features in the shallow feature map using nine 3x3x3 convolution kernels and a ReLU separately connected to each 3x3x3 convolution kernel; and a 1x1x1 convolutional layer connected to the plurality of RDBs and configured to fuse the deep features extracted by each RDB to obtain the deep feature map.

9. The classification system for thyroid cancer CT images according to claim 7, wherein the classification layer comprises a fully connected (FC) layer and a Softmax layer that are mutually connected; wherein the FC layer is connected to the convolutional layer and is used for conducting the 3D global average pooling operation on the 128-channel feature map; and the Softmax layer is used for calculating the probability of lymph node metastasis in the thyroid cancer CT images.
NL2029876A 2021-07-19 2021-11-23 Deep residual network-based classification system for thyroid cancer computed tomography (ct) images NL2029876B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110812511.3A CN113537357A (en) 2021-07-19 2021-07-19 Thyroid cancer CT image classification system based on depth residual error network

Publications (2)

Publication Number Publication Date
NL2029876A NL2029876A (en) 2023-01-23
NL2029876B1 true NL2029876B1 (en) 2023-03-14

Family

ID=78128656

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2029876A NL2029876B1 (en) 2021-07-19 2021-11-23 Deep residual network-based classification system for thyroid cancer computed tomography (ct) images

Country Status (2)

Country Link
CN (1) CN113537357A (en)
NL (1) NL2029876B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549413B (en) * 2022-01-19 2023-02-03 华东师范大学 Multi-scale fusion full convolution network lymph node metastasis detection method based on CT image
CN116416239B (en) * 2023-04-13 2024-03-12 中国人民解放军海军军医大学第一附属医院 Pancreatic CT image classification method, image classification model, electronic equipment and medium
CN116797879A (en) * 2023-06-28 2023-09-22 脉得智能科技(无锡)有限公司 Thyroid cancer metastasis lymph node prediction model construction method, system, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680678B (en) * 2017-10-18 2020-12-01 北京航空航天大学 Thyroid ultrasound image nodule diagnosis system based on multi-scale convolution neural network
US11937973B2 (en) * 2018-05-31 2024-03-26 Mayo Foundation For Medical Education And Research Systems and media for automatically diagnosing thyroid nodules

Also Published As

Publication number Publication date
NL2029876A (en) 2023-01-23
CN113537357A (en) 2021-10-22
