CN114398979A - Ultrasonic image thyroid nodule classification method based on feature decoupling - Google Patents


Info

Publication number: CN114398979A
Application number: CN202210037158.0A
Authority: CN (China)
Prior art keywords: thyroid, decoupling, feature, information, TAD
Legal status: Pending (assumed; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 马步云, 赵世轩, 李永杰, 陈杨
Current assignee: West China Hospital of Sichuan University
Original assignee: West China Hospital of Sichuan University
Application filed by West China Hospital of Sichuan University


Classifications

    • G06F18/2415 — Pattern recognition; classification techniques based on parametric or probabilistic models (e.g. likelihood ratio)
    • G06N3/047 — Neural networks; probabilistic or stochastic networks
    • G06N3/048 — Neural networks; activation functions
    • G06N3/08 — Neural networks; learning methods
    • G06T7/0012 — Image analysis; biomedical image inspection
    • G16H50/20 — Healthcare informatics; ICT for computer-aided diagnosis
    • G06T2207/10132 — Image acquisition modality: ultrasound image
    • G06T2207/20081 — Special algorithmic details: training, learning
    • G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30096 — Subject of image: tumor, lesion

Abstract

The invention discloses an ultrasonic image thyroid nodule classification method based on feature decoupling, applied to the technical field of image processing, which addresses the limitation that the prior art only changes the input of a neural network to obtain features under different fields of view. The invention establishes a new local/global feature extraction method: a tissue-anatomy decoupling module designed with a self-attention mechanism connects the two pathways "What" and "Where", and the benign/malignant classification of thyroid nodules is completed in a multi-task learning form. The method adaptively completes local/global feature decoupling in the feature space, has a larger field of view than existing methods, and acquires more effective and stable features. Its benign/malignant classification of thyroid nodules in ultrasound images achieves diagnostic performance exceeding that of doctors.

Description

Ultrasonic image thyroid nodule classification method based on feature decoupling
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an ultrasonic image thyroid nodule classification technology based on feature decoupling.
Background
Thyroid nodules are a common nodular lesion. According to an epidemiological survey of thyroid diseases by the Chinese Medical Association, the prevalence of thyroid nodules is as high as 18.6%, of which malignant nodules (thyroid cancer) account for 5%-15%. For thyroid nodule patients to receive the correct treatment, it is important to accurately distinguish benign from malignant nodules. Two methods are commonly used to diagnose the benignity or malignancy of thyroid nodules: non-invasive thyroid ultrasound imaging and invasive fine needle aspiration biopsy (FNAB). FNAB is the gold standard for nodule diagnosis, but large-scale screening with FNAB is traumatic to the patient and incurs significant cost. In contrast, ultrasound imaging is fast, low-cost and radiation-free, and obtains high-resolution images without damaging the patient's superficial organs. It is suitable for thyroid health examination of people of all ages and is one of the most common examination methods at present.
In 2009, Horvath et al. proposed the Thyroid Imaging Reporting and Data System (TI-RADS). TI-RADS is intended to provide more standardized guidance for assessing thyroid nodules and to avoid unnecessary invasive examinations. It is based on thyroid ultrasound images and uses features such as shape, orientation, margin, calcification and echo as ultrasound descriptors to grade the risk of thyroid nodule malignancy. However, several obstacles still limit the diagnostic efficacy of TI-RADS. First, medical knowledge of the characteristics of benign and malignant nodules is insufficient, and physicians' interpretation and description of nodules in clinical practice remain controversial. Second, sonographers' judgment is subjective and relies heavily on extensive experience, and imbalances in medical resources greatly increase the difficulty of large-scale screening in remote or resource-poor areas. Therefore, intelligent thyroid nodule classification based on ultrasound images is a key issue.
In recent years, medical big-data analysis based on radiomics has become a research hotspot. It quantitatively analyzes the characteristics of medical image data to obtain disease features that cannot be recognized by the naked eye or are difficult to quantify. Radiomics has also been applied to the benign/malignant classification of thyroid nodules, mainly in two categories: conventional methods and deep learning methods.
Conventional methods use manually designed feature extraction combined with feature selection and a classifier for diagnosis. However, they rely on good contour delineation to ensure stable feature extraction, which significantly increases labor cost, and the subjectivity of physicians' contour labeling tends to bias the features, harming the generalization performance of the classification model.
The deep learning approach is based on a deep neural network (DNN) that adaptively learns feature extraction and makes predictions in a data-driven manner. A DNN can be regarded as an end-to-end mapping from an image to a class label, combining feature extraction, feature selection and classification. Thyroid ultrasound images are highly complex (including tissues such as the trachea, arteries and muscles), and nodules differ markedly in shape and size, which increases the difficulty of training an ordinary DNN. The present invention summarizes the features (i.e., imaging manifestations) that human physicians rely on in the diagnostic process and divides them into two broad categories: local features (internal echo intensity, texture, boundary definition, aspect ratio, etc.) and global features (position within the thyroid, peripheral echo, relative size, etc.). Efficient use of both types of features helps establish a better-performing DNN model.
Although DNNs have powerful feature extraction capabilities, extracting local and global features from medical images remains challenging. He et al. extracted local features using thyroid nodule images outlined by doctors as neural network input, and obtained good nodule classification performance using images of a wider region around the nodule as global features. Xie et al. established three types of input for a lung nodule model, overall appearance, voxel-value heterogeneity and shape heterogeneity, to extract diverse features. Although these studies made various attempts at extracting local and global features, they are limited to changing the input of the neural network to obtain features under different fields of view.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an ultrasonic image thyroid nodule classification method based on feature decoupling, which summarizes the imaging manifestations used in the diagnostic process into two components, local features and global features, completes local/global feature decoupling directly in the feature space in a more suitable manner, and completes the classification of thyroid nodules.
The technical scheme adopted by the invention is as follows: an ultrasonic image thyroid nodule classification method based on feature decoupling is characterized in that a local/global feature decoupling network is established, and thyroid nodules are classified on a thyroid ultrasonic image through the local/global feature decoupling network;
the local/global feature decoupling network comprises two paths, wherein the first path outputs the classification result of thyroid nodules; the first path adopts an ImageNet pre-trained ResNet-18 model as its backbone and comprises four TAD modules and four residual modules arranged alternately; the TAD modules decouple the feature map into tissue information and anatomical information, the tissue information and anatomical information obtained by TAD decoupling are spliced and fused through concatenation to obtain a radiomic representation, and the radiomic representation is input into the residual modules for feature extraction;
the second path outputs the result of thyroid nodule segmentation; the second path comprises four decoders, the output of the fourth residual module in the first path is used as the input of the first decoder, and the four decoders are also connected to the four TAD modules through skip connections.
The tissue information is information containing local feature cues, obtained as follows:
a feature map is obtained from the image features extracted from the thyroid ultrasound image processed in step S2;
all pixels of the feature map are flattened row by row to obtain two pixel sets, key and query, of equal length and matched positions;
a whitened self-attention calculation is performed on the two pixel sets key and query to capture the long-distance dependencies of the features and generate attention maps of the various tissue types in the thyroid image;
a gating unit V_g = σ(g_j) screens the attention maps to obtain tissue information focused on the nodule.
The anatomical information is information containing global feature cues, obtained as follows: a weight matrix W_m is set, not shared with the weight matrix of the key, and the pixels of the feature map are weighted by W_m to obtain the anatomical information.
The TAD module decouples the feature map into tissue information and anatomical information, calculated as:

y_i = [ t_i , a_i ],   t_i = σ(g_i) ⊙ Σ_j ω_G(x_i, x_j) v_j,   a_i = ρ(m_i)

where x denotes the input feature, y the output feature of the TAD module (the pair of tissue information t and anatomical information a), and i and j the feature position indices; ω_G(x_i, x_j) is a function measuring the embedding similarity of x_i and x_j, instantiated as the softmax

ω_G(x_i, x_j) = exp(q_i^T k_j) / Σ_j exp(q_i^T k_j)

ρ denotes the ReLU activation function and σ the sigmoid activation function; q_i = W_q x_i, k_j = W_k x_j, v_j = W_v x_j, g_j = W_g x_j, m_j = W_m x_j, where W_q, W_k, W_v, W_g, W_m are the weight matrices to be learned.
When the local/global feature decoupling network is trained, the loss is calculated from the network output and the ground truth, and the network parameters are optimized according to the gradient of the loss function. The loss function comprises a classification loss L_cls and a segmentation loss L_seg, calculated as:

L = (1/N) Σ_{n=1}^{N} [ L_bce(y_n, ŷ_n) + λ · L_seg(Y_n, Ŷ_n) ]

where L_bce denotes the binary cross-entropy loss, L_seg the segmentation loss (an odds-ratio loss), y the category label, ŷ the predicted category result, Y the ground-truth segmentation label, Ŷ the predicted segmentation result, N the training batch size, and λ a hyperparameter.
The thyroid ultrasound image acquisition process comprises the following steps: the probe is used to continuously slide and scan the thyroid region of the patient and an image of the largest portion of the lesion is saved to a database.
Also included is marking a thyroid nodule contour on the thyroid ultrasound image.
The method further comprises the step of adaptively extracting a rough binary segmentation template of the thyroid nodule by adopting a level set algorithm on the thyroid ultrasonic image marked with the thyroid nodule outline.
The invention has the following beneficial effects: the invention provides a new local/global feature expression method that adaptively completes local/global feature decoupling in the feature space, has a larger field of view than existing methods, and acquires more effective and stable features. Its benign/malignant classification of thyroid nodules in ultrasound images achieves diagnostic performance exceeding that of doctors. The invention specifically includes the following advantages:
A feature decoupling module is provided, establishing information interaction between the two pathways "What" and "Where" and thereby completing a multi-task learning framework for classification and segmentation; through the mutual promotion of the two pathways during learning, robustness and generalization better than existing methods are obtained.
Drawings
Fig. 1 is a schematic diagram of thyroid ultrasound image data and a nodule binary segmentation template according to the present invention.
Fig. 2 is a structural diagram of a local/global feature decoupling network (LoGo-Net) in the present invention.
Fig. 3 is a block diagram of a tissue-anatomical decoupling module of the present invention.
FIG. 4 is a block diagram of a decoder of the "Where" path in the present invention.
Fig. 5 is a schematic diagram of an output saliency map of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments, not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides an ultrasonic image thyroid nodule classification method based on feature decoupling and application thereof, comprising the following steps:
step S1: and (5) thyroid ultrasound image acquisition and labeling.
During the ultrasound examination, the sonographer uses a high frequency linear probe to continuously slide and scan the patient's thyroid region and save an image of the largest portion of the lesion to a database. The ultrasonic equipment adjusts the dynamic range value, the gain value, the imaging depth and the frequency according to the imaging requirement so as to obtain a clear thyroid ultrasonic image. In general, the dynamic range value is set to 40-85 dB, the gain value is set to 50-70 dB, the imaging depth is set to about 2.5-4 cm, and the frequency is set to 6-12 MHz.
The contour of the thyroid nodule is outlined by the physician on each ultrasound image. Specifically, the doctor marks a small number of points near the nodule contour, and the invention uses a level set algorithm to adaptively extract a rough binary segmentation template of the nodule, as shown in fig. 1. The benignity or malignancy of the thyroid nodules is determined jointly from the patient's ultrasound report and pathology record, and the image is labeled with a binary label of 0 or 1. When multiple nodules appear, the image is marked as malignant if it contains any malignant nodule.
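The level-set extraction of the rough binary template can be sketched as follows. This is a minimal, simplified Chan-Vese-style iteration in NumPy/SciPy, not the patent's exact algorithm; the seed radius, iteration count and morphological smoothing are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def rough_nodule_mask(image, seed_points, n_iter=50, seed_radius=5):
    """Grow a rough binary template from a few physician-marked points
    using a simplified (morphological Chan-Vese style) level-set iteration."""
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    # initial level set: small disks around the marked points
    u = np.zeros((h, w), dtype=bool)
    for (r, c) in seed_points:
        u |= (yy - r) ** 2 + (xx - c) ** 2 <= seed_radius ** 2
    for _ in range(n_iter):
        c_in = image[u].mean() if u.any() else 0.0
        c_out = image[~u].mean() if (~u).any() else 0.0
        # region force: each pixel joins the region whose mean it resembles
        force = (image - c_in) ** 2 - (image - c_out) ** 2
        u = force < 0
        # curvature-like smoothing via morphological opening/closing
        u = ndimage.binary_opening(u)
        u = ndimage.binary_closing(u)
    return u.astype(np.uint8)
```

On a synthetic image with a bright disk, the iteration converges to the disk after the region means stabilize; real ultrasound would need the smoothing and iteration count tuned.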
Step S2: and (5) image preprocessing.
The ultrasound image is grayscale-normalized and scaled to a resolution of 224 x 224. The binary segmentation template is scaled to a resolution of 56 x 56. The invention uses data augmentation to address the imbalance of positive and negative samples, mirror-flipping the negative samples, which are fewer, until they reach the same data volume as the positive samples.
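A minimal NumPy sketch of this preprocessing and class-balancing step; nearest-neighbour resizing stands in for whatever interpolation the authors used, and the helper names are illustrative.

```python
import numpy as np

def resize_nn(a, shape):
    """Nearest-neighbour resize of a 2-D array to the given (rows, cols)."""
    rows = (np.arange(shape[0]) * a.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * a.shape[1] / shape[1]).astype(int)
    return a[np.ix_(rows, cols)]

def preprocess(image, mask):
    """Grayscale-normalize to [0, 1], scale image to 224x224, mask to 56x56."""
    img = image.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return resize_nn(img, (224, 224)), resize_nn(mask.astype(np.float32), (56, 56))

def balance_by_mirroring(images, labels):
    """Mirror-flip the minority class until both classes have equal counts."""
    pos = [im for im, y in zip(images, labels) if y == 1]
    neg = [im for im, y in zip(images, labels) if y == 0]
    small = neg if len(neg) < len(pos) else pos
    big = pos if small is neg else neg
    i = 0
    while len(small) < len(big):
        small.append(small[i][:, ::-1])  # horizontal mirror of an existing sample
        i += 1
    return pos + neg, [1] * len(pos) + [0] * len(neg)
```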
Step S3: and (3) establishing a local/global feature decoupling network (LoGo-Net).
Inspired by domain knowledge of ultrasound diagnosis and by visual cognition mechanisms, the network simulates the dual-pathway structure and processing mode of the human visual system, and a model based on a multi-task learning framework is established, as shown in fig. 2. LoGo-Net comprises the two pathways "What" and "Where"; the functions of the two pathways promote each other and are optimized synchronously, completing the classification and segmentation tasks respectively. In addition, LoGo-Net contains a tissue-anatomy decoupling (TAD) module designed on the self-attention mechanism, which decouples features into tissue information containing local feature cues and anatomical information containing global feature cues, and connects the "What" and "Where" pathways as the carrier of information transfer.
Step S31: establishment of a tissue-anatomical decoupling (TAD) module.
The TAD module is intended to separate, from the feature map, the tissue information containing local feature cues and the anatomical information containing global feature cues. As shown in fig. 3, for the tissue information, a whitened self-attention calculation over two pixel sets (key and query) captures the long-distance dependencies of the features, and based on these relationships generates attention maps of the various tissue types in the thyroid image. A gating unit V_g = σ(g_j) then screens the desired attention maps so that the model selectively focuses on nodules and other relevant tissue while suppressing tissue information irrelevant to classification. For the anatomical information, a new weight matrix W_m is set; it is not shared with the weight matrix of the keys, which makes the optimization of the tissue and anatomical information independent of each other.
The feature map here specifically refers to the image features extracted from the thyroid ultrasound image processed in step S2, i.e., a set of features such as contours, edges and textures.
The pixel sets in this step are obtained by flattening all pixels of the feature map row by row; key and query are two pixel sets of equal length with matched positions, differing only in that they are weighted by two different weight matrices.
Step S31 is specifically calculated as follows:

y_i = [ t_i , a_i ],   t_i = σ(g_i) ⊙ Σ_j ω_G(x_i, x_j) v_j,   a_i = ρ(m_i)

where x denotes the input feature, y the output feature of the TAD module (the pair of tissue information t and anatomical information a), and i and j the feature position indices, so that ω_G(x_i, x_j) is a function measuring the embedding similarity of x_i and x_j. When ω_G(x_i, x_j) is instantiated with an embedded Gaussian function, it is equivalent to a softmax along the x_j dimension:

ω_G(x_i, x_j) = exp(q_i^T k_j) / Σ_j exp(q_i^T k_j)

ρ(·) denotes the ReLU activation function and σ(·) the sigmoid activation function; q_i = W_q x_i, k_j = W_k x_j, v_j = W_v x_j, g_j = W_g x_j, m_j = W_m x_j, where W_q, W_k, W_v, W_g, W_m are the weight matrices to be learned.
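The TAD computation can be sketched in NumPy as follows. This is a toy, single-head version under stated assumptions: the whitening step is implemented as zero-centering of queries and keys, the gating and ReLU placements follow the symbol definitions above, and all dimensions are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tad(x, Wq, Wk, Wv, Wg, Wm):
    """Tissue-anatomy decoupling over a flattened feature map.
    x: (N, C) pixels-by-channels; returns tissue (N, C) and anatomy (N, C)."""
    q, k = x @ Wq.T, x @ Wk.T
    # 'whitening': zero-center queries and keys before the dot product
    q = q - q.mean(axis=0, keepdims=True)
    k = k - k.mean(axis=0, keepdims=True)
    v, g, m = x @ Wv.T, x @ Wg.T, x @ Wm.T
    attn = softmax(q @ k.T, axis=1)    # omega_G: embedded-Gaussian similarity
    tissue = sigmoid(g) * (attn @ v)   # gated long-range tissue cues
    anatomy = np.maximum(m, 0.0)       # rho: ReLU on the W_m projection
    return tissue, anatomy
```

The two outputs would then be concatenated channel-wise before entering the next residual module.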
Step S32: establishment of the "What" pathway.
The "What" pathway classifies the thyroid nodules as benign or malignant. It uses ImageNet pre-trained ResNet-18 as the backbone of the classification model. The fully connected layer of the original ResNet structure has 1000 neurons; in this network it is replaced by 2 neurons to complete the benign/malignant classification task. As shown in fig. 2, the embodiment comprises four TAD modules and four residual modules, and before the feature extraction of each residual module, the TAD module decouples the feature map into tissue information and anatomical information. The tissue information is sent to the "Where" pathway for information exchange, constraining it to carry the nodule's location information. The tissue information and the anatomical information are concatenated and fused to obtain the radiomic representation, which is sent to the subsequent residual module so that the model can extract richer features, including both local and global features. The feature map here is the image feature extracted from the thyroid ultrasound image processed in step S2.
Step S33: establishment of the "Where" pathway.
The "Where" pathway accomplishes the task of thyroid nodule segmentation. As shown in fig. 3, the "Where" path in the embodiment contains four decoders. Specifically, the decoder unit is shown in fig. 4, and its calculation formula is:

D^l = W_s [ F_u(D^{l+1}) , T^l ]

where D^{l+1} and D^l represent the lower-level and upper-level features respectively, and T^l represents the tissue information. F_u(·) is a bilinear-interpolation upsampling function, [·] represents the feature concatenation operation, and W_s is a two-layer convolution operation. The "Where" path takes the output of the fourth residual module in step S32 as the initial decoder input, and then continuously integrates the tissue information output by the four TAD modules through skip connections, recovering more accurate edge details and obtaining a complete nodule segmentation prediction.
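One decoder step can be sketched in NumPy/SciPy as follows. Assumptions are flagged in the comments: a 2x bilinear upsample stands in for F_u, and W_s is realized as two 3x3 convolutions with a ReLU between them (the activation placement is not stated in the text).

```python
import numpy as np
from scipy import ndimage

def conv3x3(x, w):
    """'Same' 3x3 convolution: x is (C_in, H, W), w is (C_out, C_in, 3, 3)."""
    out = np.zeros((w.shape[0],) + x.shape[1:])
    for o in range(w.shape[0]):
        for i in range(x.shape[0]):
            out[o] += ndimage.convolve(x[i], w[o, i], mode="nearest")
    return out

def decoder_unit(d_lower, t_skip, w1, w2):
    """One 'Where'-path decoder step: bilinearly upsample the lower-level
    feature 2x (F_u), concatenate the TAD tissue information from the skip
    connection, then apply the two-layer convolution W_s."""
    up = np.stack([ndimage.zoom(c, 2, order=1) for c in d_lower])  # F_u
    cat = np.concatenate([up, t_skip], axis=0)                     # [., .]
    return np.maximum(conv3x3(np.maximum(conv3x3(cat, w1), 0.0), w2), 0.0)
```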
Step S4: definition of loss function and experimental setup.
When the model is trained, the loss is calculated from the model output and the ground truth, and the model parameters are optimized according to the gradient of the loss function. The loss function comprises a classification loss L_cls and a segmentation loss L_seg, calculated as:

L = (1/N) Σ_{n=1}^{N} [ L_bce(y_n, ŷ_n) + λ · L_seg(Y_n, Ŷ_n) ]

where L_bce represents the binary cross-entropy loss, L_seg the segmentation loss (an odds-ratio loss), y the category label and ŷ the category result predicted by the model. Y = {Y_1, Y_2, …, Y_N} represents the ground-truth segmentation labels and Ŷ the segmentation results predicted by the model. N is the training batch size and λ is a hyperparameter, set to 0.5 in this embodiment.
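A minimal NumPy sketch of this loss. A Dice-form overlap loss stands in for the segmentation (odds-ratio) loss, whose exact form the text does not reproduce; λ = 0.5 as in the embodiment.

```python
import numpy as np

def bce(y, p, eps=1e-7):
    """Binary cross-entropy for one label y in {0, 1} and probability p."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def dice_loss(Y, P, eps=1e-7):
    """Soft overlap-based segmentation loss (Dice form, used here as a
    stand-in for the segmentation loss named in the text)."""
    inter = (Y * P).sum()
    return 1.0 - (2 * inter + eps) / (Y.sum() + P.sum() + eps)

def logo_loss(y, y_hat, Y, Y_hat, lam=0.5):
    """Batch loss: mean classification BCE plus lambda times the mean
    segmentation loss over N samples."""
    N = len(y)
    cls = sum(bce(y[n], y_hat[n]) for n in range(N)) / N
    seg = sum(dice_loss(Y[n], Y_hat[n]) for n in range(N)) / N
    return cls + lam * seg
```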
The data set was evaluated with 5-fold cross-validation, i.e., training data and test data were divided in a 4:1 ratio and cross-validated. During training, stochastic gradient descent is used to optimize the loss function, with momentum set to 0.9 and weight decay 1e-5. The initial learning rate is set to 0.001 and is multiplied by a factor of 0.95 after each training iteration completes. A warm-up is adopted before the training process: a smaller learning rate is used first, and the preset learning rate is used once the model is relatively stable, which yields a better model.
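The schedule and the 5-fold protocol can be sketched as follows; the warm-up length and warm-up rate are illustrative assumptions, since the text does not specify them.

```python
import numpy as np

def learning_rate(epoch, base_lr=0.001, decay=0.95, warmup_epochs=3,
                  warmup_lr=1e-4):
    """Warm-up then exponential decay: a smaller rate for the first few
    epochs (illustrative values), then base_lr multiplied by 0.95 after
    each completed epoch."""
    if epoch < warmup_epochs:
        return warmup_lr
    return base_lr * decay ** (epoch - warmup_epochs)

def five_fold_splits(n_samples, seed=0):
    """5-fold cross-validation: each fold holds out 1/5 for testing (4:1)."""
    idx = np.random.RandomState(seed).permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test
```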
Step S5: artificial Intelligence (AI) in conjunction with physician diagnosis.
With the model trained in S4, the prediction probability of benignity and malignancy is obtained after the image is input. The gradient of the input image is obtained through back-propagation of the classification loss function; its absolute value is taken and then normalized to obtain a saliency map, revealing the discriminative regions the model uses for diagnosis. In actual clinical interpretation, the prediction probability and the saliency map of the AI model are provided to the doctor synchronously, and the doctor makes his or her own judgment in combination with medical knowledge. This improves doctors' diagnostic level on difficult cases, effectively reduces missed diagnoses, and has the potential to be extended to early cancer screening programs in remote or resource-poor areas.
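The saliency step can be sketched with an analytic stand-in: for a single-layer logistic "model" the gradient of the BCE loss with respect to the input is available in closed form, so the abs-and-normalize step can be shown without an autograd framework. The logistic model is an illustrative assumption, not the patent's ResNet.

```python
import numpy as np

def saliency_map(image, w, y):
    """Gradient saliency for a logistic classifier p = sigmoid(<w, x>):
    the BCE gradient w.r.t. the input is (p - y) * w; take its absolute
    value and min-max normalize to [0, 1]. A full DNN would obtain the
    same quantity by backpropagating the classification loss to the image."""
    p = 1.0 / (1.0 + np.exp(-(w.ravel() @ image.ravel())))
    grad = (p - y) * w                   # dL/dx for BCE with a sigmoid output
    s = np.abs(grad)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)
    return s.reshape(image.shape)
```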
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (9)

1. An ultrasonic image thyroid nodule classification method based on feature decoupling is characterized in that a local/global feature decoupling network is established, and thyroid nodules are classified on a thyroid ultrasonic image through the local/global feature decoupling network;
the local/global feature decoupling network comprises two paths, wherein the first path outputs the classification result of thyroid nodules; the first path adopts an ImageNet pre-trained ResNet-18 model as its backbone and comprises four TAD modules and four residual modules arranged alternately; the TAD modules decouple the feature map into tissue information and anatomical information, the tissue information and anatomical information obtained by TAD decoupling are spliced and fused through concatenation to obtain a radiomic representation, and the radiomic representation is input into the residual modules for feature extraction;
the second path outputs the result of thyroid nodule segmentation; the second path comprises four decoders, the output of the fourth residual module in the first path is used as the input of the first decoder, and the four decoders are also connected to the four TAD modules through skip connections.
2. The feature-decoupling-based ultrasonic image thyroid nodule classification method of claim 1, wherein the tissue information is information containing local feature cues, obtained as follows:
a feature map is obtained from the image features extracted from the thyroid ultrasound image processed in step S2;
all pixels of the feature map are flattened row by row to obtain two pixel sets, key and query, of equal length and matched positions;
a whitened self-attention calculation is performed on the two pixel sets key and query to capture the long-distance dependencies of the features and generate attention maps of the various tissue types in the thyroid image;
a gating unit V_g = σ(g_j) screens the attention maps to obtain tissue information focused on the nodule.
3. The feature-decoupling-based ultrasonic image thyroid nodule classification method of claim 2, wherein the anatomical information is information containing global feature cues, obtained as follows: a weight matrix W_m is set, not shared with the weight matrix of the key, and the pixels of the feature map are weighted by W_m to obtain the anatomical information.
4. The feature-decoupling-based ultrasonic image thyroid nodule classification method of claim 3, wherein the TAD module decouples the feature map into tissue information and anatomical information, calculated as:

y_i = [ t_i , a_i ],   t_i = σ(g_i) ⊙ Σ_j ω_G(x_i, x_j) v_j,   a_i = ρ(m_i)

where x represents the input feature, y the output feature of the TAD module, and i and j the feature position indices; ω_G(x_i, x_j) is a function measuring the embedding similarity of x_i and x_j, instantiated as the softmax ω_G(x_i, x_j) = exp(q_i^T k_j) / Σ_j exp(q_i^T k_j); ρ denotes the ReLU activation function and σ the sigmoid activation function; q_i = W_q x_i, k_j = W_k x_j, v_j = W_v x_j, g_j = W_g x_j, m_j = W_m x_j, where W_q, W_k, W_v, W_g, W_m are the weight matrices to be learned.
5. The feature-decoupling-based ultrasonic image thyroid nodule classification method according to claim 4, wherein the four decoders are further connected to the four TAD modules through skip connections, and the position of the thyroid nodule is constrained according to the tissue information output by the TAD modules.
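A minimal sketch of one such decoder stage with a skip connection, assuming nearest-neighbour upsampling and a 1x1 channel projection `W` (both illustrative choices not specified by the claim):

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def decoder_stage(dec_feat, tad_tissue, W):
    """One decoder stage: upsample the decoder features, concatenate the
    tissue features skipped over from the matching TAD module along the
    channel axis, and fuse them with a 1x1 projection W followed by ReLU."""
    up = upsample2x(dec_feat)                      # (2H, 2W, C)
    fused = np.concatenate([up, tad_tissue], axis=-1)  # channel concat
    return np.maximum(fused @ W, 0.0)              # 1x1 projection + ReLU
```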
6. The feature-decoupling-based ultrasonic image thyroid nodule classification method according to claim 5, wherein, when the local/global feature decoupling network is trained, a loss is calculated between the network outputs and the ground-truth values, and the network parameters are optimized according to the gradient of the loss function; the loss function comprises a classification loss L_cls and a segmentation loss L_seg, calculated as:

L = (1/N) Σ_{n=1}^{N} [ L_cls(y, ŷ) + λ L_seg(Y, Ŷ) ]

wherein L_cls denotes a binary cross-entropy loss, L_seg denotes the odds-ratio loss, y denotes the category label, ŷ denotes the predicted class result, Y denotes the true segmentation label, Ŷ denotes the predicted segmentation result, N is the training batch size, and λ is a hyperparameter.
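The combined loss can be sketched as below; since the odds-ratio segmentation loss is not spelled out in this text, a soft Dice loss stands in for `L_seg` purely for illustration:

```python
import numpy as np

def bce_loss(y, y_hat, eps=1e-7):
    """Binary cross-entropy classification loss L_cls (batch mean)."""
    p = np.clip(y_hat, eps, 1.0 - eps)     # avoid log(0)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def dice_loss(Y, Y_hat, eps=1e-7):
    """Soft Dice loss, an illustrative stand-in for the odds-ratio L_seg."""
    inter = (Y * Y_hat).sum()
    return 1.0 - (2.0 * inter + eps) / (Y.sum() + Y_hat.sum() + eps)

def total_loss(y, y_hat, Y, Y_hat, lam=0.5):
    """L = L_cls + lambda * L_seg, with the batch mean taken inside each term."""
    return bce_loss(y, y_hat) + lam * dice_loss(Y, Y_hat)
```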
7. The feature-decoupling-based ultrasonic image thyroid nodule classification method according to claim 6, wherein the thyroid ultrasound image is acquired as follows: the probe is slid continuously to scan the thyroid region of the patient, and an image of the largest cross-section of the lesion is saved to a database.
8. The feature-decoupling-based ultrasonic image thyroid nodule classification method according to claim 7, further comprising marking the thyroid nodule contour on the thyroid ultrasound image.
9. The feature-decoupling-based ultrasonic image thyroid nodule classification method according to claim 8, further comprising adaptively extracting a rough binary segmentation template of the nodule from the thyroid ultrasound image marked with the thyroid nodule contour by using a level set algorithm.
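A rough sketch of such a template extraction in the spirit of the two-phase piecewise-constant (Chan-Vese) level set model; this is a simplification, and the variant actually used by the method may differ:

```python
import numpy as np

def rough_levelset_mask(img, n_iter=20):
    """Adaptively extract a rough binary template: alternately estimate the
    mean intensity inside/outside the current contour and reassign each
    pixel to the closer mean (piecewise-constant two-phase model)."""
    mask = img > img.mean()                 # initialise from the global mean
    for _ in range(n_iter):
        if not mask.any() or mask.all():    # degenerate contour, stop
            break
        c1, c2 = img[mask].mean(), img[~mask].mean()
        new = (img - c1) ** 2 < (img - c2) ** 2  # closer-mean reassignment
        if np.array_equal(new, mask):       # converged
            break
        mask = new
    return mask.astype(np.uint8)
```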
CN202210037158.0A 2022-01-13 2022-01-13 Ultrasonic image thyroid nodule classification method based on feature decoupling Pending CN114398979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210037158.0A CN114398979A (en) 2022-01-13 2022-01-13 Ultrasonic image thyroid nodule classification method based on feature decoupling


Publications (1)

Publication Number Publication Date
CN114398979A true CN114398979A (en) 2022-04-26

Family

ID=81230824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210037158.0A Pending CN114398979A (en) 2022-01-13 2022-01-13 Ultrasonic image thyroid nodule classification method based on feature decoupling

Country Status (1)

Country Link
CN (1) CN114398979A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663861A (en) * 2022-05-17 2022-06-24 Shandong Jiaotong University Vehicle re-identification method based on dimension decoupling and non-local relation
CN115035030A (en) * 2022-05-07 2022-09-09 Peking University Shenzhen Hospital Image recognition method, image recognition device, computer equipment and computer-readable storage medium
CN117611806A (en) * 2024-01-24 2024-02-27 Beihang University Prostate cancer operation incisal margin positive prediction system based on images and clinical characteristics
CN117611806B (en) * 2024-01-24 2024-04-12 Beihang University Prostate cancer operation incisal margin positive prediction system based on images and clinical characteristics

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019232346A1 (en) * 2018-05-31 2019-12-05 Mayo Foundation For Medical Education And Research Systems and media for automatically diagnosing thyroid nodules
CN110706793A (en) * 2019-09-25 2020-01-17 Tianjin University Attention mechanism-based thyroid nodule semi-supervised segmentation method
CN111243042A (en) * 2020-02-28 2020-06-05 Zhejiang Deshang Yunxing Medical Technology Co., Ltd. Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning
CN111898560A (en) * 2020-08-03 2020-11-06 South China University of Technology Classification regression feature decoupling method in target detection
CN113159051A (en) * 2021-04-27 2021-07-23 Changchun University of Science and Technology Remote sensing image lightweight semantic segmentation method based on edge decoupling
CN113177554A (en) * 2021-05-19 2021-07-27 Sun Yat-sen University Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN113378933A (en) * 2021-06-11 2021-09-10 Hefei Hebin Intelligent Robot Co., Ltd. Thyroid ultrasound image classification and segmentation network, training method, device and medium
CN113539477A (en) * 2021-06-24 2021-10-22 Hangzhou Shenrui Bolian Technology Co., Ltd. Decoupling mechanism-based lesion benign and malignant prediction method and device
CN113804766A (en) * 2021-09-15 2021-12-17 Dalian University of Technology Heterogeneous material tissue uniformity multi-parameter ultrasonic characterization method based on SVR
CN113870289A (en) * 2021-09-22 2021-12-31 Zhejiang University Facial nerve segmentation method and device for decoupling and dividing treatment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI-XUAN ZHAO et al.: "A Local and Global Feature Disentangled Network: Toward Classification of Benign-Malignant Thyroid Nodules From Ultrasound Image" *


Similar Documents

Publication Publication Date Title
Meng et al. Liver fibrosis classification based on transfer learning and FCNet for ultrasound images
CN108364006B (en) Medical image classification device based on multi-mode deep learning and construction method thereof
Sharif et al. A comprehensive review on multi-organs tumor detection based on machine learning
Zhang et al. Photoacoustic image classification and segmentation of breast cancer: a feasibility study
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN106372390A (en) Deep convolutional neural network-based lung cancer preventing self-service health cloud service system
CN108898175A (en) Area of computer aided model building method based on deep learning gastric cancer pathological section
CN109102491A (en) A kind of gastroscope image automated collection systems and method
CN114398979A (en) Ultrasonic image thyroid nodule classification method based on feature decoupling
Qu et al. Deep learning-based methodology for recognition of fetal brain standard scan planes in 2D ultrasound images
CN111767952B (en) Interpretable lung nodule benign and malignant classification method
CN111681210A (en) Method for identifying benign and malignant breast nodules by shear wave elastogram based on deep learning
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
Zhao et al. A local and global feature disentangled network: toward classification of benign-malignant thyroid nodules from ultrasound image
CN111275706A (en) Shear wave elastic imaging-based ultrasound omics depth analysis method and system
Jarosik et al. Breast lesion classification based on ultrasonic radio-frequency signals using convolutional neural networks
Aslam et al. Liver-tumor detection using CNN ResUNet
CN110459303B (en) Medical image abnormity detection device based on depth migration
CN111462082A (en) Focus picture recognition device, method and equipment and readable storage medium
Huang et al. Breast cancer diagnosis based on hybrid SqueezeNet and improved chef-based optimizer
Liu et al. Automated classification of cervical Lymph-Node-Level from ultrasound using depthwise separable convolutional swin transformer
CN112508943A (en) Breast tumor identification method based on ultrasonic image
Almutairi et al. An efficient USE-Net deep learning model for cancer detection
Yektaei et al. Diagnosis of lung cancer using multiscale convolutional neural network
CN109816665A (en) A kind of fast partition method and device of optical coherence tomographic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination